WorldWideScience

Sample records for scene categories revealed

  1. Category-length and category-strength effects using images of scenes.

    Science.gov (United States)

    Baumann, Oliver; Vromen, Joyce M G; Boddy, Adam C; Crawshaw, Eloise; Humphreys, Michael S

    2018-06-21

    Global matching models have provided an important theoretical framework for recognition memory. Key predictions of this class of models are that (1) increasing the number of occurrences in a study list of some items affects the performance on other items (list-strength effect) and that (2) adding new items results in a deterioration of performance on the other items (list-length effect). Experimental confirmation of these predictions has been difficult, and the results have been inconsistent. A review of the existing literature, however, suggests that robust length and strength effects do occur when sufficiently similar hard-to-label items are used. In an effort to investigate this further, we had participants study lists containing one or more members of visual scene categories (bathrooms, beaches, etc.). Experiments 1 and 2 replicated and extended previous findings showing that the study of additional category members decreased accuracy, providing confirmation of the category-length effect. Experiment 3 showed that repeating some category members decreased the accuracy of nonrepeated members, providing evidence for a category-strength effect. Experiment 4 eliminated a potential challenge to these results. Taken together, these findings provide robust support for global matching models of recognition memory. The overall list lengths, the category sizes, and the number of repetitions used demonstrated that scene categories are well-suited to testing the fundamental assumptions of global matching models. These include (A) interference from memories for similar items and contexts, (B) nondestructive interference, and (C) that conjunctive information is made available through a matching operation.
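    The class of global matching models the abstract appeals to can be illustrated with a small simulation. The sketch below is a toy summed-similarity model, not the authors' implementation: items from one scene category share a prototype, a probe's familiarity is its summed similarity to all stored traces, and discriminability between studied and unstudied category members drops as category length grows. All parameter values are arbitrary assumptions.

```python
# Toy summed-similarity ("global matching") simulation: not the authors' implementation.
import numpy as np

rng = np.random.default_rng(0)
DIM = 200  # arbitrary feature dimensionality

def make_category(n_members, overlap=0.7):
    """Similar items built by mixing a shared category prototype with item-specific noise."""
    prototype = rng.normal(size=DIM)
    return np.array([overlap * prototype + (1 - overlap) * rng.normal(size=DIM)
                     for _ in range(n_members)])

def familiarity(probe, memory):
    """Global match: the probe's summed cosine similarity to every stored trace."""
    traces = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    return float((traces @ (probe / np.linalg.norm(probe))).sum())

def dprime(old, new):
    pooled_sd = np.sqrt(0.5 * (np.var(old) + np.var(new)))
    return (np.mean(old) - np.mean(new)) / pooled_sd

for n_studied in (1, 4):  # "category length": number of studied exemplars of one scene category
    old_scores, lure_scores = [], []
    for _ in range(500):
        items = make_category(n_studied + 1)
        memory = items[:n_studied] + 0.3 * rng.normal(size=(n_studied, DIM))  # noisy encoding
        old_scores.append(familiarity(items[0], memory))    # a studied exemplar
        lure_scores.append(familiarity(items[-1], memory))  # an unstudied exemplar, same category
    print(f"category length {n_studied}: old/lure d' ~ {dprime(old_scores, lure_scores):.2f}")
```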

  2. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    Science.gov (United States)

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  3. Semantic memory for contextual regularities within and across scene categories: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Le-Hoa Võ, Melissa

    2010-10-01

    When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

  4. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
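    The voxel-wise modeling procedure (fit an encoding model per voxel with linear regression, then score it on withheld data) can be sketched as follows. Everything here is simulated: the two feature matrices merely stand in for Fourier-power and object-category feature spaces, and the ridge penalty and dimensions are illustrative assumptions rather than the study's pipeline.

```python
# Schematic voxel-wise encoding models with simulated data standing in for the fMRI responses.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_train, n_test, n_feat, n_vox = 1000, 386, 40, 20

# A latent scene description drives both candidate feature spaces and the voxel responses,
# so the two encoding models end up explaining largely shared response variance.
latent = rng.normal(size=(n_train + n_test, n_feat))
fourier_feats = latent + 0.5 * rng.normal(size=latent.shape)    # stand-in for Fourier power
category_feats = latent + 0.5 * rng.normal(size=latent.shape)   # stand-in for object categories
responses = latent @ rng.normal(size=(n_feat, n_vox)) + rng.normal(size=(n_train + n_test, n_vox))

def heldout_r2(features):
    """Fit a ridge encoding model on the training split and score it on the withheld split."""
    model = Ridge(alpha=10.0).fit(features[:n_train], responses[:n_train])
    predicted = model.predict(features[n_train:])
    return np.mean([r2_score(responses[n_train:, v], predicted[:, v]) for v in range(n_vox)])

print("Fourier-power model, held-out R^2:", round(heldout_r2(fourier_feats), 3))
print("Object-category model, held-out R^2:", round(heldout_r2(category_feats), 3))
```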

  5. Neural Correlates of Divided Attention in Natural Scenes.

    Science.gov (United States)

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.

  6. Iconic memory for the gist of natural scenes.

    Science.gov (United States)

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic-level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. Lateralized discrimination of emotional scenes in peripheral vision.

    Science.gov (United States)

    Calvo, Manuel G; Rodríguez-Chinea, Sandra; Fernández-Martín, Andrés

    2015-03-01

    This study investigates whether there is lateralized processing of emotional scenes in the visual periphery, in the absence of eye fixations; and whether this varies with emotional valence (pleasant vs. unpleasant), specific emotional scene content (babies, erotica, human attack, mutilation, etc.), and sex of the viewer. Pairs of emotional (positive or negative) and neutral photographs were presented for 150 ms peripherally (≥6.5° away from fixation). Observers judged on which side the emotional picture was located. Low-level image properties, scene visual saliency, and eye movements were controlled. Results showed that (a) correct identification of the emotional scene exceeded the chance level; (b) performance was more accurate and faster when the emotional scene appeared in the left than in the right visual field; (c) lateralization was equivalent for females and males for pleasant scenes, but was greater for females for unpleasant scenes; and (d) lateralization occurred similarly for different emotional scene categories. These findings reveal discrimination between emotional and neutral scenes, and right brain hemisphere dominance for emotional processing, which is modulated by sex of the viewer and scene valence, and suggest that coarse affective significance can be extracted in peripheral vision.

  8. Binocular Fusion and Invariant Category Learning due to Predictive Remapping during Scanning of a Depthful Scene with Eye Movements

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2015-01-01

    Full Text Available How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object’s surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  9. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    Science.gov (United States)

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations. PMID:25642198

  10. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements.

    Science.gov (United States)

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2014-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying concepts about how multiple cortical areas in the What and Where cortical streams interact to coordinate processes of 3D boundary and surface perception, spatial attention, invariant object category learning, predictive remapping, eye movement control, and learned coordinate transformations. The model explains data from single neuron and psychophysical studies of covert visual attention shifts prior to eye movements. The model further clarifies how perceptual, attentional, and cognitive interactions among multiple brain regions (LGN, V1, V2, V3A, V4, MT, MST, PPC, LIP, ITp, ITa, SC) may accomplish predictive remapping as part of the process whereby view-invariant object categories are learned. These results build upon earlier neural models of 3D vision and figure-ground separation and the learning of invariant object categories as the eyes freely scan a scene. A key process concerns how an object's surface representation generates a form-fitting distribution of spatial attention, or attentional shroud, in parietal cortex that helps maintain the stability of multiple perceptual and cognitive processes. Predictive eye movement signals maintain the stability of the shroud, as well as of binocularly fused perceptual boundaries and surface representations.

  11. Evaluating color descriptors for object and scene recognition.

    Science.gov (United States)

    van de Sande, Koen E A; Gevers, Theo; Snoek, Cees G M

    2010-09-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
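    For reference, the opponent colour space that OpponentSIFT builds on can be written down in a few lines; the SIFT descriptor computation itself is omitted here, and the toy image is only there to show that O1 and O2 are unaffected by an equal intensity offset in all channels.

```python
# Opponent colour transform used by OpponentSIFT-style descriptors; the SIFT step is omitted.
import numpy as np

def rgb_to_opponent(image_rgb: np.ndarray) -> np.ndarray:
    """Convert an HxWx3 float RGB image into the three opponent channels O1, O2, O3."""
    r, g, b = image_rgb[..., 0], image_rgb[..., 1], image_rgb[..., 2]
    o1 = (r - g) / np.sqrt(2.0)
    o2 = (r + g - 2.0 * b) / np.sqrt(6.0)
    o3 = (r + g + b) / np.sqrt(3.0)          # intensity channel
    return np.stack([o1, o2, o3], axis=-1)

# An equal intensity offset in all channels leaves O1 and O2 unchanged (shift invariance).
img = np.random.default_rng(0).random((4, 4, 3)) * 0.8
shifted = img + 0.1
opp, opp_shifted = rgb_to_opponent(img), rgb_to_opponent(shifted)
print(np.allclose(opp[..., :2], opp_shifted[..., :2]))  # True
```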

  12. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

    Full Text Available One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the processed aerial image obtained through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant improvement in classification accuracy over all state-of-the-art references.
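    The fusion and classification stage can be sketched independently of the CNNs. In the toy example below the "deep features" are simulated with class-dependent means (the pretrained-CNN feature extraction and the saliency detection are omitted), the two streams are fused by simple concatenation, and a minimal extreme learning machine (random hidden layer plus least-squares readout) serves as the classifier; all sizes are illustrative assumptions.

```python
# Fusion + extreme-learning-machine stage only; CNN feature extraction and saliency detection
# are omitted, and the "deep features" below are simulated with class-dependent means.
import numpy as np

rng = np.random.default_rng(0)
n_per_class, n_classes, d_rgb, d_sal = 40, 21, 256, 256

labels = np.repeat(np.arange(n_classes), n_per_class)
rgb_feats = rng.normal(size=(n_classes, d_rgb))[labels] + rng.normal(size=(labels.size, d_rgb))
sal_feats = rng.normal(size=(n_classes, d_sal))[labels] + rng.normal(size=(labels.size, d_sal))

fused = np.hstack([rgb_feats, sal_feats])            # simple concatenation fusion

order = rng.permutation(labels.size)                 # shuffle, then split into train/test halves
fused, labels = fused[order], labels[order]
half = labels.size // 2
X_tr, y_tr, X_te, y_te = fused[:half], labels[:half], fused[half:], labels[half:]

# Minimal extreme learning machine: fixed random hidden layer + least-squares readout.
n_hidden = 300
w_in = rng.normal(size=(fused.shape[1], n_hidden)) / np.sqrt(fused.shape[1])
b_in = rng.normal(size=n_hidden)

def hidden(x):
    return np.tanh(x @ w_in + b_in)

w_out = np.linalg.lstsq(hidden(X_tr), np.eye(n_classes)[y_tr], rcond=None)[0]
pred = hidden(X_te) @ w_out
print("test accuracy on the toy features:", (pred.argmax(axis=1) == y_te).mean())
```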

  13. Evidence for similar patterns of neural activity elicited by picture- and word-based representations of natural scenes.

    Science.gov (United States)

    Kumar, Manoj; Federmeier, Kara D; Fei-Fei, Li; Beck, Diane M

    2017-07-15

    A long-standing core question in cognitive science is whether different modalities and representation types (pictures, words, sounds, etc.) access a common store of semantic information. Although different input types have been shown to activate a shared network of brain regions, this does not necessitate that there is a common representation, as the neurons in these regions could still differentially process the different modalities. However, multi-voxel pattern analysis can be used to assess whether, e.g., pictures and words evoke a similar pattern of activity, such that the patterns that separate categories in one modality transfer to the other. Prior work using this method has found support for a common code, but has two limitations: the studies have either examined only disparate categories (e.g. animals vs. tools) that are known to activate different brain regions, raising the possibility that the pattern separation and inferred similarity reflect only large-scale differences between the categories, or they have been limited to individual object representations. By using natural scene categories, we not only extend the current literature on cross-modal representations beyond objects, but also, because natural scene categories activate a common set of brain regions, we identify a more fine-grained (i.e. higher spatial resolution) common representation. Specifically, we studied picture- and word-based representations of natural scene stimuli from four different categories: beaches, cities, highways, and mountains. Participants passively viewed blocks of either phrases (e.g. "sandy beach") describing scenes or photographs from those same scene categories. To determine whether the phrases and pictures evoke a common code, we asked whether a classifier trained on one stimulus type (e.g. phrase stimuli) would transfer (i.e. cross-decode) to the other stimulus type (e.g. picture stimuli). The analysis revealed cross-decoding in the occipitotemporal, posterior parietal and
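    The cross-decoding logic (train a classifier on patterns evoked by one stimulus type, test it on the other) is easy to sketch. The voxel patterns below are simulated with a category signal shared across "picture" and "phrase" blocks, so this is a schematic of the analysis rather than the study's data or preprocessing.

```python
# Schematic cross-decoding: train on picture-evoked patterns, test on phrase-evoked patterns.
# Voxel patterns are simulated with a category signal shared across stimulus types.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
categories = ["beach", "city", "highway", "mountain"]
n_voxels, n_blocks = 300, 20
category_signal = {c: rng.normal(size=n_voxels) for c in categories}

def simulate_blocks(noise_sd):
    """Blocks of voxel patterns: shared category signal plus stimulus-type-specific noise."""
    X, y = [], []
    for label, c in enumerate(categories):
        X.append(category_signal[c] + noise_sd * rng.normal(size=(n_blocks, n_voxels)))
        y.extend([label] * n_blocks)
    return np.vstack(X), np.array(y)

X_pictures, y_pictures = simulate_blocks(noise_sd=1.0)   # picture blocks
X_phrases, y_phrases = simulate_blocks(noise_sd=1.0)     # phrase blocks

clf = LinearSVC(dual=False).fit(X_pictures, y_pictures)  # train on pictures
print("cross-decoding accuracy on phrases:", clf.score(X_phrases, y_phrases))
```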

  14. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ∼6,000 in a leading model) feature space and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with a near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures and statistical operations on these PDs may underlie the human ability to rapidly categorize scenes.

  15. Beyond scene gist: Objects guide search more than scene background.

    Science.gov (United States)

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  16. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    Science.gov (United States)

    Banno, Hayaki; Saiki, Jun

    2015-03-01

    Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized as either superordinate or basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, the outdoor/indoor and natural/man-made groupings were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, the set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.
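    The ex-Gaussian analysis mentioned above treats each response-time distribution as a Gaussian component (mu, sigma) plus an exponential tail (tau). A minimal parameter-recovery check with simulated RTs, using SciPy's exponentially modified normal distribution, looks like this (all values are arbitrary assumptions):

```python
# Ex-Gaussian RT model: Gaussian (mu, sigma) plus exponential tail (tau); parameter recovery
# with SciPy's exponentially modified normal distribution. All values are arbitrary.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mu, sigma, tau = 0.45, 0.05, 0.15    # seconds
rts = rng.normal(mu, sigma, size=2000) + rng.exponential(tau, size=2000)

# scipy parameterizes the ex-Gaussian as exponnorm(K, loc, scale) with
# loc = mu, scale = sigma, and tau = K * scale.
K, loc, scale = stats.exponnorm.fit(rts)
print(f"mu ~ {loc:.3f}, sigma ~ {scale:.3f}, tau ~ {K * scale:.3f}")
```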

  17. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place at the Places2 challenge in ILSVRC 2015 and first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
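    The first disambiguation technique (merging classes that the validation confusion matrix shows to be mutually confusable into super categories) can be illustrated with a toy routine; the greedy union-find merge and the threshold below are illustrative choices, not the paper's exact procedure.

```python
# Illustrative class merging from a validation confusion matrix (not the paper's exact method).
import numpy as np

def merge_confused_classes(confusion: np.ndarray, threshold: float = 0.2):
    """Greedy merge of class pairs whose symmetric confusion rate exceeds `threshold`.

    Returns an array mapping each original class index to a super-category index.
    """
    n = confusion.shape[0]
    rates = confusion / confusion.sum(axis=1, keepdims=True)   # row-normalise counts to rates
    parent = np.arange(n)                                       # union-find parents

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]                       # path halving
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            if rates[i, j] + rates[j, i] > threshold:           # mutually confusable pair
                parent[find(i)] = find(j)
    roots = np.array([find(i) for i in range(n)])
    _, super_ids = np.unique(roots, return_inverse=True)
    return super_ids

# Toy confusion matrix: the first two classes (e.g., "inn" and "hotel") are heavily confused.
conf = np.array([[70, 25, 5],
                 [30, 65, 5],
                 [ 3,  2, 95]])
print(merge_confused_classes(conf))   # [0 0 1]: the first two classes share a super category
```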

  18. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    Science.gov (United States)

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved the emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703
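    One common way to relate MEG dynamics to a network model, in the spirit of the comparison described above, is representational similarity analysis: a dissimilarity matrix computed from MEG patterns at each time point is correlated with a dissimilarity matrix computed from a DNN layer. The sketch below uses simulated data and makes no claim about the authors' exact pipeline.

```python
# Representational similarity between simulated MEG patterns and a simulated DNN layer,
# evaluated at each MEG time point. Generic sketch, not the authors' pipeline.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_scenes, n_sensors, n_times = 24, 60, 120        # e.g., -100 to 500 ms in 5-ms steps

dnn_features = rng.normal(size=(n_scenes, 100))   # stand-in for a DNN layer's scene representation
dnn_rdm = pdist(dnn_features, metric="correlation")

# Simulated MEG: DNN-like structure is strongest around time index 70 (~250 ms).
meg = rng.normal(size=(n_times, n_scenes, n_sensors))
projection = dnn_features @ rng.normal(size=(100, n_sensors))
for t in range(n_times):
    meg[t] += np.exp(-((t - 70) / 15.0) ** 2) * projection

correspondence = []
for t in range(n_times):
    meg_rdm = pdist(meg[t], metric="correlation")
    rho, _ = spearmanr(meg_rdm, dnn_rdm)
    correspondence.append(rho)

print("model-MEG correspondence peaks at time index", int(np.argmax(correspondence)))
```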

  19. Effects of luminance contrast and its modifications on fixation behavior during free viewing of images from different categories.

    Science.gov (United States)

    Açik, Alper; Onat, Selim; Schumann, Frank; Einhäuser, Wolfgang; König, Peter

    2009-06-01

    During viewing of natural scenes, do low-level features guide attention, and if so, does this depend on higher-level features? To answer these questions, we studied the image category dependence of low-level feature modification effects. Subjects fixated contrast-modified regions often in natural scene images, while smaller but significant effects were observed for urban scenes and faces. Surprisingly, modifications in fractal images did not influence fixations. Further analysis revealed an inverse relationship between modification effects and higher-level, phase-dependent image features. We suggest that high- and mid-level features--such as edges, symmetries, and recursive patterns--guide attention if present. However, if the scene lacks such diagnostic properties, low-level features prevail. We posit a hierarchical framework, which combines aspects of bottom-up and top-down theories and is compatible with our data.

  20. Effects of varying presentation time on long-term recognition memory for scenes: Verbatim and gist representations.

    Science.gov (United States)

    Ahmad, Fahad N; Moscovitch, Morris; Hockley, William E

    2017-04-01

    Konkle, Brady, Alvarez and Oliva (Psychological Science, 21, 1551-1556, 2010) showed that participants have an exceptional long-term memory (LTM) for photographs of scenes. We examined to what extent participants' exceptional LTM for scenes is determined by presentation time during encoding. In addition, at retrieval, we varied the nature of the lures in a forced-choice recognition task so that they resembled the target in gist (i.e., global or categorical) information, but were distinct in verbatim information (e.g., an "old" beach scene and a similar "new" beach scene; exemplar condition) or vice versa (e.g., a beach scene and a new scene from a novel category; novel condition). In Experiment 1, half of the list of scenes was presented for 1 s, whereas the other half was presented for 4 s. We found lower performance for shorter study presentation time in the exemplar test condition and similar performance for both study presentation times in the novel test condition. In Experiment 2, participants showed similar performance in an exemplar test for which the lure was of a different category but a category that was used at study. In Experiment 3, when presentation time was lowered to 500 ms, recognition accuracy was reduced in both novel and exemplar test conditions. A less detailed memorial representation of the studied scene containing more gist (i.e., meaning) than verbatim (i.e., surface or perceptual details) information is retrieved from LTM after a short compared to a long study presentation time. We conclude that our findings support fuzzy-trace theory.

  1. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics
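    As a toy illustration of the "bag of words" level of description evaluated above, each scene can be reduced to the counts of labeled objects it contains and fed to a linear classifier. The object vocabulary and per-category count profiles below are invented for illustration, not taken from the LabelMe annotations.

```python
# Bag-of-objects scene classification sketch with an invented vocabulary and count profiles.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
vocab = ["bed", "pillow", "sink", "stove", "car", "tree"]
profiles = {            # expected object counts per scene category (illustrative values)
    "bedroom": [1.5, 2.0, 0.1, 0.0, 0.0, 0.1],
    "kitchen": [0.0, 0.1, 1.0, 1.0, 0.0, 0.1],
    "street":  [0.0, 0.0, 0.0, 0.0, 2.0, 1.0],
}

X, y = [], []
for label, rates in enumerate(profiles.values()):
    X.append(rng.poisson(rates, size=(100, len(vocab))))  # 100 labeled scenes per category
    y.extend([label] * 100)
X, y = np.vstack(X), np.array(y)

clf = LogisticRegression(max_iter=1000).fit(X, y)
print("training accuracy from object counts alone:", clf.score(X, y))
```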

  2. Scene construction in schizophrenia.

    Science.gov (United States)

    Raffard, Stéphane; D'Argembeau, Arnaud; Bayard, Sophie; Boulenger, Jean-Philippe; Van der Linden, Martial

    2010-09-01

    Recent research has revealed that schizophrenia patients are impaired in remembering the past and imagining the future. In this study, we examined patients' ability to engage in scene construction (i.e., the process of mentally generating and maintaining a complex and coherent scene), which is a key part of retrieving past experiences and episodic future thinking. 24 participants with schizophrenia and 25 healthy controls were asked to imagine new fictitious experiences and described their mental representations of the scenes in as much detail as possible. Descriptions were scored according to various dimensions (e.g., sensory details, spatial reference), and participants also provided ratings of their subjective experience when imagining the scenes (e.g., their sense of presence, the perceived similarity of imagined events to past experiences). Imagined scenes contained less phenomenological details (d = 1.11) and were more fragmented (d = 2.81) in schizophrenia patients compared to controls. Furthermore, positive symptoms were positively correlated to the sense of presence (r = .43) and the perceived similarity of imagined events to past episodes (r = .47), whereas negative symptoms were negatively related to the overall richness of the imagined scenes (r = -.43). The results suggest that schizophrenic patients' impairments in remembering the past and imagining the future are, at least in part, due to deficits in the process of scene construction. The relationships between the characteristics of imagined scenes and positive and negative symptoms could be related to reality monitoring deficits and difficulties in strategic retrieval processes, respectively. Copyright 2010 APA, all rights reserved.

  3. A view not to be missed: Salient scene content interferes with cognitive restoration

    Science.gov (United States)

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  4. A view not to be missed: Salient scene content interferes with cognitive restoration.

    Directory of Open Access Journals (Sweden)

    Alexander P N Van der Jagt

    Full Text Available Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration.

  5. Evaluating Color Descriptors for Object and Scene Recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2010-01-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been

  6. Recall versus familiarity when recall fails for words and scenes: the differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions.

    Science.gov (United States)

    Ryals, Anthony J; Cleary, Anne M; Seger, Carol A

    2013-01-25

    This fMRI study examined recall and familiarity for words and scenes using the novel recognition without cued recall (RWCR) paradigm. Subjects performed a cued recall task in which half of the test cues resembled studied items (and thus were familiar) and half did not. Subjects also judged the familiarity of the cue itself. RWCR is the finding that, among cues for which recall fails, subjects generally rate cues that resemble studied items as more familiar than cues that do not. For words, left and right hippocampal activity increased when recall succeeded relative to when it failed. When recall failed, right hippocampal activity was decreased for familiar relative to unfamiliar cues. In contrast, right Prc activity increased for familiar cues for which recall failed relative to both familiar cues for which recall succeeded and to unfamiliar cues. For scenes, left hippocampal activity increased when recall succeeded relative to when it failed but did not differentiate familiar from unfamiliar cues when recall failed. In contrast, right Prc activity increased for familiar relative to unfamiliar cues when recall failed. Category-specific cortical regions showed effects unique to their respective stimulus types: The visual word form area (VWFA) showed effects for recall vs. familiarity specific to words, and the parahippocampal place area (PPA) showed effects for recall vs. familiarity specific to scenes. In both cases, these effects were such that there was increased activity occurring during recall relative to when recall failed, and decreased activity occurring for familiar relative to unfamiliar cues when recall failed. Copyright © 2012 Elsevier B.V. All rights reserved.

  7. Recall versus familiarity when recall fails for words and scenes: The differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions☆

    Science.gov (United States)

    Ryals, Anthony J.; Cleary, Anne M.; Seger, Carol A.

    2013-01-01

    This fMRI study examined recall and familiarity for words and scenes using the novel recognition without cued recall (RWCR) paradigm. Subjects performed a cued recall task in which half of the test cues resembled studied items (and thus were familiar) and half did not. Subjects also judged the familiarity of the cue itself. RWCR is the finding that, among cues for which recall fails, subjects generally rate cues that resemble studied items as more familiar than cues that do not. For words, left and right hippocampal activity increased when recall succeeded relative to when it failed. When recall failed, right hippocampal activity was decreased for familiar relative to unfamiliar cues. In contrast, right Prc activity increased for familiar cues for which recall failed relative to both familiar cues for which recall succeeded and to unfamiliar cues. For scenes, left hippocampal activity increased when recall succeeded relative to when it failed but did not differentiate familiar from unfamiliar cues when recall failed. In contrast, right Prc activity increased for familiar relative to unfamiliar cues when recall failed. Category-specific cortical regions showed effects unique to their respective stimulus types: The visual word form area (VWFA) showed effects for recall vs. familiarity specific to words, and the parahippocampal place area (PPA) showed effects for recall vs. familiarity specific to scenes. In both cases, these effects were such that there was increased activity occurring during recall relative to when recall failed, and decreased activity occurring for familiar relative to unfamiliar cues when recall failed. PMID:23142268

  8. Sleep spindle-related reactivation of category-specific cortical regions after learning face-scene associations

    DEFF Research Database (Denmark)

    Bergmann, Til O; Mölle, Matthias; Diedrichs, Jens

    2012-01-01

    Newly acquired declarative memory traces are believed to be reactivated during NonREM sleep to promote their hippocampo-neocortical transfer for long-term storage. Yet it remains a major challenge to unravel the underlying neuronal mechanisms. Using simultaneous electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) recordings in humans, we show that sleep spindles play a key role in the reactivation of memory-related neocortical representations. On separate days, participants either learned face-scene associations or performed a visuomotor control task. Spindle-coupled reactivation of brain regions representing the specific task stimuli was traced during subsequent NonREM sleep with EEG-informed fMRI. Relative to the control task, learning face-scene associations triggered a stronger combined activation of neocortical and hippocampal regions during subsequent sleep. Notably...

  9. Hierarchy-associated semantic-rule inference framework for classifying indoor scenes

    Science.gov (United States)

    Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei

    2016-03-01

    Classifying indoor scenes is typically challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
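    A bare-bones sketch of the high-level inference stage follows: the low-level annotation step (deformable-parts and bag-of-visual-words models) is omitted and replaced by simulated binary indicators of detected scene elements, and a decision tree then maps annotations to an indoor category, loosely mirroring the rule-extraction idea. The element names and probabilities are invented for illustration.

```python
# Decision-tree inference over simulated scene-element annotations (illustrative only).
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
elements = ["bed", "wardrobe", "desk", "monitor", "fridge", "stove"]
# Probability of each element being annotated, per indoor category (illustrative values).
annotation_probs = {
    "bedroom": [0.9, 0.7, 0.2, 0.1, 0.0, 0.0],
    "office":  [0.0, 0.1, 0.9, 0.8, 0.0, 0.1],
    "kitchen": [0.0, 0.1, 0.1, 0.0, 0.9, 0.8],
}

X, y = [], []
for label, probs in enumerate(annotation_probs.values()):
    X.append(rng.random((150, len(elements))) < np.array(probs))  # 150 annotated scenes each
    y.extend([label] * 150)
X, y = np.vstack(X).astype(int), np.array(y)

tree = DecisionTreeClassifier(max_depth=4).fit(X, y)
print("classification accuracy on the annotations:", tree.score(X, y))
```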

  10. Scene complexity: influence on perception, memory, and development in the medial temporal lobe

    Directory of Open Access Journals (Sweden)

    Xiaoqian J Chai

    2010-03-01

    Full Text Available Regions in the medial temporal lobe (MTL) and prefrontal cortex (PFC) are involved in memory formation for scenes in both children and adults. The development in children and adolescents of successful memory encoding for scenes has been associated with increased activation in PFC, but not MTL, regions. However, evidence suggests that a functional subregion of the MTL that supports scene perception, located in the parahippocampal gyrus (PHG), goes through a prolonged maturation process. Here we tested the hypothesis that maturation of scene perception supports the development of memory for complex scenes. Scenes were characterized by their levels of complexity defined by the number of unique object categories depicted in the scene. Recognition memory improved with age, in participants ages 8-24, for high, but not low, complexity scenes. High-complexity compared to low-complexity scenes activated a network of regions including the posterior PHG. The difference in activations for high- versus low-complexity scenes increased with age in the right posterior PHG. Finally, activations in right posterior PHG were associated with age-related increases in successful memory formation for high-, but not low-, complexity scenes. These results suggest that functional maturation of the right posterior PHG plays a critical role in the development of enduring long-term recollection for high-complexity scenes.

  11. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  12. Emotional and neutral scenes in competition: orienting, efficiency, and identification.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka

    2007-12-01

    To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.

  13. Crime Scenes as Augmented Reality

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus......, physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual...

  14. Scene recognition and colorization for vehicle infrared images

    Science.gov (United States)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology for driving assistance system, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects appeared. The results show that the strategy here emphasizes important information of the IR images for human vision and could be used to broaden the application of IR images for vehicle driving.
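    The colorization step alone can be sketched as a per-category color lookup modulated by the IR intensity. The per-pixel category labels are assumed to come from the label-transfer stage (SIFT-Flow plus MRF, omitted here), and the palette values are arbitrary.

```python
# Colorize an IR image from per-pixel category labels; the label-transfer stage is omitted.
import numpy as np

palette = {                      # illustrative category colors (RGB in [0, 1])
    0: (0.35, 0.35, 0.40),       # road
    1: (0.10, 0.55, 0.15),       # vegetation
    2: (0.55, 0.60, 0.95),       # sky
    3: (0.80, 0.20, 0.20),       # vehicle / pedestrian
}

def colorize(ir_image: np.ndarray, labels: np.ndarray) -> np.ndarray:
    """Blend a grayscale IR image (HxW, values in [0, 1]) with per-pixel category colors."""
    lut = np.array([palette[k] for k in sorted(palette)])
    colored = lut[labels]                          # HxWx3 category colors
    return colored * ir_image[..., None]           # keep the IR luminance structure

ir = np.random.default_rng(0).random((4, 6))
labels = np.random.default_rng(1).integers(0, 4, size=(4, 6))
print(colorize(ir, labels).shape)                  # (4, 6, 3)
```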

  15. Scenes of the self, and trance

    Directory of Open Access Journals (Sweden)

    Jan M. Broekman

    2014-02-01

    Full Text Available Trance shows the Self as a process involved in all sorts and forms of life. A Western perspective on a self and its reifying tendencies is only one (or one series) of those variations. The process character of the self does not allow any coherent theory but shows, in particular when confronted with trance, its variability in all regards. What is more: the Self is always first on the scene of itself―a situation in which it becomes a sign for itself. That particular semiotic feature is again not a unified one but leads, as the Self in view of itself does, to series of scenes with changing colors, circumstances and environments. Our first scene “Beyond Monotheism” shows semiotic importance in that a self as determining component of a trance-phenomenon must abolish its own referent and seems not able to answer the question, what makes trance a trance. The Pizzica is an example here. Other social features of trance appear in the second scene, US post traumatic psychological treatments included. Our third scene underlines structures of an unfolding self: beginning with ‘split-ego’ conclusions, a self’s engenderment appears dependent on linguistic events and on spoken words in the first place. A fourth scene explores that theme and explains modern forms of an ego―in particular those inherent to ‘citizenship’ or a ‘corporation’. The legal consequences are concentrated in the fifth scene, which considers a legal subject by revealing its ‘standing’. Our sixth and final scene pertains to the relation between trance and commerce. All scenes tie together and show parallels between Pizzica, rights-based behavior, RAVE music versus disco, commerce and trance; they demonstrate the meaning of trance as a multifaceted social phenomenon.

  16. Motivational Objects in Natural Scenes (MONS: A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

    Full Text Available In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object”) being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  17. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  18. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    Science.gov (United States)

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  19. Scene perception in Posterior Cortical Atrophy: categorisation, description and fixation patterns

    Directory of Open Access Journals (Sweden)

    Timothy J Shakespeare

    2013-10-01

    Full Text Available Partial or complete Balint’s syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (colour and greyscale) using a categorisation paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for colour over greyscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient’s global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients’ fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  20. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing discriminative, powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP-based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to the standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
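
    As a rough illustration of the "mapped coded image" idea, the sketch below converts an RGB patch into an LBP-coded texture image that could feed a second CNN stream alongside the standard RGB stream. The LBP parameters (P, R, method) and the channel-replication trick are assumptions made for illustration; they are not the TEX-Net settings reported in the paper.

```python
# Sketch: build a texture-coded input image via Local Binary Patterns so a CNN
# backbone can consume a texture stream in parallel with the usual RGB stream.
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import local_binary_pattern

def lbp_coded_image(rgb, P=8, R=1.0):
    gray = rgb2gray(rgb)                                   # H x W in [0, 1]
    codes = local_binary_pattern(gray, P, R, method="uniform")
    # Normalize codes to [0, 1] and replicate to 3 channels so an ImageNet-style
    # CNN can process the texture stream without architectural changes.
    codes = codes / max(codes.max(), 1.0)
    return np.repeat(codes[..., None], 3, axis=-1)
```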

  1. Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation

    Directory of Open Access Journals (Sweden)

    Guanzhou Chen

    2018-05-01

    Full Text Available Scene classification, aiming to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in the remote sensing image analysis field. Deep-learning-model-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-level methods are computationally expensive and time-consuming. Consequently, in this paper, we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of the large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: AID dataset, UCMerced dataset, NWPU-RESISC dataset, and EuroSAT dataset. Results show that our proposed training method was effective and increased overall accuracy (3% in AID experiments, 5% in UCMerced experiments, 1% in NWPU-RESISC and EuroSAT experiments) for small and shallow models. We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower spatial resolution images, numerous categories, as well as fewer training samples.
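
    The distillation objective described above (matching the high-temperature softmax of a small student to that of a large teacher) can be sketched as follows. This is a generic Hinton-style distillation loss written with PyTorch as an illustrative assumption; the temperature T, weighting alpha, and network architectures are not those reported in the paper.

```python
# Sketch of a temperature-scaled knowledge-distillation loss for scene classification.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.5):
    """Combine soft-target matching (KL divergence at temperature T) with the
    usual hard-label cross-entropy on the student's logits."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    soft_student = F.log_softmax(student_logits / T, dim=1)
    # T^2 rescales the soft term so its gradient magnitude is comparable across temperatures.
    kd = F.kl_div(soft_student, soft_targets, reduction="batchmean") * (T * T)
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```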

  2. Category-specific visual responses: an intracranial study comparing gamma, beta, alpha and ERP response selectivity

    Directory of Open Access Journals (Sweden)

    Juan R Vidal

    2010-11-01

    Full Text Available The specificity of neural responses to visual objects is a major topic in visual neuroscience. In humans, functional magnetic resonance imaging (fMRI) studies have identified several regions of the occipital and temporal lobe that appear specific to faces, letter-strings, scenes, or tools. Direct electrophysiological recordings in the visual cortical areas of epileptic patients have largely confirmed this modular organization, using either single-neuron peri-stimulus time-histograms or intracerebral event-related potentials (iERP). In parallel, a new research stream has emerged using high-frequency gamma-band activity (50-150 Hz) (GBR) and low-frequency alpha/beta activity (8-24 Hz) (ABR) to map functional networks in humans. An obvious question is now whether the functional organization of the visual cortex revealed by fMRI, ERP, GBR, and ABR coincides. We used direct intracerebral recordings in 18 epileptic patients to directly compare GBR, ABR, and ERP elicited by the presentation of seven major visual object categories (faces, scenes, houses, consonants, pseudowords, tools, and animals), in relation to previous fMRI studies. Remarkably, both GBR and iERP showed strong category-specificity that was in many cases sufficient to infer stimulus object category from the neural response at the single-trial level. However, we also found a strong discrepancy between the selectivity of GBR, ABR, and ERP, with less than 10% of spatial overlap between sites eliciting the same category-specificity. Overall, we found that selective neural responses to visual objects were broadly distributed in the brain, with a prominent spatial cluster located in the posterior temporal cortex. Moreover, the different neural markers (GBR, ABR, and iERP) that elicit selectivity towards specific visual object categories present little spatial overlap, suggesting that the information content of each marker can uniquely characterize high-level visual information in the brain.

  3. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    Science.gov (United States)

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  4. Semantic guidance of eye movements in real-world scenes.

    Science.gov (United States)

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.
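
    A simplified sketch of how such a semantic saliency map could be assembled: each labelled object region is scored by the cosine similarity between its label's LSA vector and the vector of the currently fixated object (or the search target). The LSA vectors and object masks are assumed to be precomputed (e.g., from LabelMe annotations); all function and variable names here are illustrative, not the authors' implementation.

```python
# Sketch: assign each labelled object region a semantic-similarity score relative
# to the currently fixated object, producing a per-pixel "semantic saliency" map.
import numpy as np

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def semantic_saliency_map(object_masks, label_vectors, fixated_label):
    """object_masks: {label: boolean HxW mask}; label_vectors: {label: LSA vector}."""
    shape = next(iter(object_masks.values())).shape
    saliency = np.zeros(shape, dtype=float)
    target_vec = label_vectors[fixated_label]
    for label, mask in object_masks.items():
        saliency[mask] = cosine(label_vectors[label], target_vec)
    return saliency
```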

  5. Wall grid structure for interior scene synthesis

    KAUST Repository

    Xu, Wenzhuo

    2015-02-01

    We present a system for automatically synthesizing a diverse set of semantically valid and well-arranged 3D interior scenes for a given empty room shape. Unlike existing work on layout synthesis, which typically knows the potentially needed 3D models and optimizes their locations through cost functions, our technique performs the retrieval and placement of 3D models by discovering the relationships between the room space and the models' categories. This is enabled by a new analytical structure, called the Wall Grid Structure, which jointly considers the categories and locations of 3D models. Our technique greatly reduces the amount of user intervention and provides users with suggestions and inspirations. We demonstrate the applicability of our approach on three types of scenarios: conference rooms, living rooms and bedrooms.

  6. Contextual cueing based on the semantic-category membership of the environment

    OpenAIRE

    GOUJON, A

    2005-01-01

    During the analysis of a visual scene, top-down processing constantly directs the subject's attention to the zones of interest in the scene. The contextual cueing paradigm developed by Chun and Jiang (1998) shows how contextual regularities can facilitate the search for a particular element via implicit learning mechanisms. In the proposed study, a contextual cueing task with lexical displays was used. The semantic-category membership of the contextual words predicted the location of the t...

  7. The occipital place area represents the local elements of scenes.

    Science.gov (United States)

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.

  8. Global scene layout modulates contextual learning in change detection

    Directory of Open Access Journals (Sweden)

    Markus eConci

    2014-02-01

    Full Text Available Change in the visual scene often goes unnoticed – a phenomenon referred to as ‘change blindness’. This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of global precedence in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  9. Global scene layout modulates contextual learning in change detection.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  10. Binocular fusion and invariant category learning due to predictive remapping during scanning of a depthful scene with eye movements

    OpenAIRE

    Grossberg, Stephen; Srinivasan, Karthik; Yazdanbakhsh, Arash

    2015-01-01

    How does the brain maintain stable fusion of 3D scenes when the eyes move? Every eye movement causes each retinal position to process a different set of scenic features, and thus the brain needs to binocularly fuse new combinations of features at each position after an eye movement. Despite these breaks in retinotopic fusion due to each movement, previously fused representations of a scene in depth often appear stable. The 3D ARTSCAN neural model proposes how the brain does this by unifying ...

  11. Scene incongruity and attention.

    Science.gov (United States)

    Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John

    2017-02-01

    Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention; rather, they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.

  12. PC Scene Generation

    Science.gov (United States)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120Hz while frame locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  13. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
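
    A hedged sketch of the CIC-SIFT pipeline as described: learn a 3-channel color transform with ICA from the RGB pixels of a category's training images, project each image into the independent color components, and compute SIFT descriptors on each component. The use of scikit-learn's FastICA and OpenCV's SIFT, and all parameter choices, are assumptions for illustration; the paper's exact estimator and settings are not reproduced here.

```python
# Sketch: ICA-based color transform followed by per-component SIFT extraction.
import cv2
import numpy as np
from sklearn.decomposition import FastICA

def learn_color_ica(training_images, n_pixels=100_000, seed=0):
    """Fit a 3-component ICA on RGB pixels sampled from a category's training images."""
    rng = np.random.default_rng(seed)
    pixels = np.concatenate([im.reshape(-1, 3) for im in training_images]).astype(float)
    idx = rng.choice(len(pixels), size=min(n_pixels, len(pixels)), replace=False)
    return FastICA(n_components=3, random_state=seed).fit(pixels[idx])

def cic_sift(image_rgb, ica):
    """Project an RGB image into independent color components and pool SIFT over them."""
    comps = ica.transform(image_rgb.reshape(-1, 3).astype(float))
    comps = comps.reshape(*image_rgb.shape[:2], 3)
    sift = cv2.SIFT_create()
    descriptors = []
    for c in range(3):
        # Rescale each component to 8-bit so OpenCV's SIFT can process it.
        chan = cv2.normalize(comps[..., c], None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
        _, desc = sift.detectAndCompute(chan, None)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else np.empty((0, 128))
```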

  14. Visual properties and memorising scenes: Effects of image-space sparseness and uniformity.

    Science.gov (United States)

    Lukavský, Jiří; Děchtěrenko, Filip

    2017-10-01

    Previous studies have demonstrated that humans have a remarkable capacity to memorise a large number of scenes. The research on memorability has shown that memory performance can be predicted by the content of an image. We explored how remembering an image is affected by the image properties within the context of the reference set, including the extent to which it is different from its neighbours (image-space sparseness) and whether it belongs to the same category as its neighbours (uniformity). We used a reference set of 2,048 scenes (64 categories), evaluated pairwise scene similarity using deep features from a pretrained convolutional neural network (CNN), and calculated the image-space sparseness and uniformity for each image. We ran three memory experiments, varying the memory workload with experiment length and colour/greyscale presentation. We measured the sensitivity and criterion value changes as a function of image-space sparseness and uniformity. Across all three experiments, we found separate effects of 1) sparseness on memory sensitivity, and 2) uniformity on the recognition criterion. People better remembered (and correctly rejected) images that were more separated from others. People tended to make more false alarms and fewer miss errors for images from categorically uniform portions of the image-space. We propose that both image-space properties affect human decisions when recognising images. Additionally, we found that colour presentation did not yield better memory performance than greyscale presentation.
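
    The two image-space measures can be illustrated with a short sketch: "sparseness" as the mean distance from an image to its nearest neighbours in CNN feature space, and "uniformity" as the proportion of those neighbours that share the image's scene category. The neighbourhood size k and the cosine metric are assumptions; the paper's exact definitions may differ.

```python
# Sketch: image-space sparseness and uniformity from precomputed CNN features.
import numpy as np
from scipy.spatial.distance import cdist

def sparseness_and_uniformity(features, categories, k=20):
    """features: (N, D) array of CNN descriptors; categories: length-N label array."""
    dists = cdist(features, features, metric="cosine")
    np.fill_diagonal(dists, np.inf)                  # exclude self-matches
    neighbours = np.argsort(dists, axis=1)[:, :k]
    sparseness = np.take_along_axis(dists, neighbours, axis=1).mean(axis=1)
    categories = np.asarray(categories)
    uniformity = (categories[neighbours] == categories[:, None]).mean(axis=1)
    return sparseness, uniformity
```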

  15. Object-graphs for context-aware visual category discovery.

    Science.gov (United States)

    Lee, Yong Jae; Grauman, Kristen

    2012-02-01

    How can knowing about some categories help us to discover new ones in unlabeled images? Unsupervised visual category discovery is useful to mine for recurring objects without human supervision, but existing methods assume no prior information and thus tend to perform poorly for cluttered scenes with multiple objects. We propose to leverage knowledge about previously learned categories to enable more accurate discovery, and address challenges in estimating their familiarity in unsegmented, unlabeled images. We introduce two variants of a novel object-graph descriptor to encode the 2D and 3D spatial layout of object-level co-occurrence patterns relative to an unfamiliar region and show that by using them to model the interaction between an image’s known and unknown objects, we can better detect new visual categories. Rather than mine for all categories from scratch, our method identifies new objects while drawing on useful cues from familiar ones. We evaluate our approach on several benchmark data sets and demonstrate clear improvements in discovery over conventional purely appearance-based baselines.

  16. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search.

    Science.gov (United States)

    Draschkow, Dejan; Võ, Melissa L-H

    2017-11-28

    Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1), or repeated search (Exp2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.

  17. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  18. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    Science.gov (United States)

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.

  19. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    Science.gov (United States)

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally-with perceptual stimulus differences controlled-while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  20. Individual differences in the spontaneous recruitment of brain regions supporting mental state understanding when viewing natural social scenes.

    Science.gov (United States)

    Wagner, Dylan D; Kelley, William M; Heatherton, Todd F

    2011-12-01

    People are able to rapidly infer complex personality traits and mental states even from the most minimal person information. Research has shown that when observers view a natural scene containing people, they spend a disproportionate amount of their time looking at the social features (e.g., faces, bodies). Does this preference for social features merely reflect the biological salience of these features or are observers spontaneously attempting to make sense of complex social dynamics? Using functional neuroimaging, we investigated neural responses to social and nonsocial visual scenes in a large sample of participants (n = 48) who varied on an individual difference measure assessing empathy and mentalizing (i.e., empathizing). Compared with other scene categories, viewing natural social scenes activated regions associated with social cognition (e.g., dorsomedial prefrontal cortex and temporal poles). Moreover, activity in these regions during social scene viewing was strongly correlated with individual differences in empathizing. These findings offer neural evidence that observers spontaneously engage in social cognition when viewing complex social material but that the degree to which people do so is mediated by individual differences in trait empathizing.

  1. Short report: the effect of expertise in hiking on recognition memory for mountain scenes.

    Science.gov (United States)

    Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori

    2007-10-01

    The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using more ecologically valid stimuli in the context of a more natural activity than previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree of functional aspects that implied some action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. The memory performance for the scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.

  2. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: does a cultural difference truly exist?

    Science.gov (United States)

    Evans, Kris; Rotello, Caren M; Li, Xingshan; Rayner, Keith

    2009-02-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than did American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias were changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory.

  3. Underwater Scene Composition

    Science.gov (United States)

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  4. The singular nature of auditory and visual scene analysis in autism.

    Science.gov (United States)

    Lin, I-Fan; Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2017-02-19

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  5. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  6. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    Science.gov (United States)

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46% of all participants failed to report seeing a central figure and only 4.8% reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  7. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: Does a cultural difference truly exist?

    OpenAIRE

    Evans, Kris; Rotello, Caren M.; Li, Xingshan; Rayner, Keith

    2008-01-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than did American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for...

  8. The development of brain systems associated with successful memory retrieval of scenes.

    Science.gov (United States)

    Ofen, Noa; Chai, Xiaoqian J; Schuil, Karen D I; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-07-18

    Neuroanatomical and psychological evidence suggests prolonged maturation of declarative memory systems in the human brain from childhood into young adulthood. Here, we examine functional brain development during successful memory retrieval of scenes in children, adolescents, and young adults ages 8-21 via functional magnetic resonance imaging. Recognition memory improved with age, specifically for accurate identification of studied scenes (hits). Successful retrieval (correct old-new decisions for studied vs unstudied scenes) was associated with activations in frontal, parietal, and medial temporal lobe (MTL) regions. Activations associated with successful retrieval increased with age in left parietal cortex (BA7), bilateral prefrontal, and bilateral caudate regions. In contrast, activations associated with successful retrieval did not change with age in the MTL. Psychophysiological interaction analysis revealed that there were, however, age-related changes in differential connectivity for successful retrieval between MTL and prefrontal regions. These results suggest that neocortical regions related to attentional or strategic control show the greatest developmental changes for memory retrieval of scenes. Furthermore, these results suggest that functional interactions between MTL and prefrontal regions during memory retrieval also develop into young adulthood. The developmental increase of memory-related activations in frontal and parietal regions for retrieval of scenes and the absence of such an increase in MTL regions parallels what has been observed for memory encoding of scenes.

  9. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    Science.gov (United States)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

    This paper presents an unsupervised scene classification method for actualizing semantic recognition of indoor scenes. Background and foreground features are respectively extracted using Gist and color scale-invariant feature transform (SIFT) as feature representations based on context. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting visual words, created from both feature descriptors, into a two-dimensional histogram. Moreover, our method generates labels as candidates of categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized using labels created using adaptive resonance theory (ART) as teaching signals for counter propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method to fine classification using our original mobile robot and obtained a mean classification accuracy of 83.2% for six zones.

  10. Associative Processing Is Inherent in Scene Perception

    Science.gov (United States)

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes, providing better understanding of the functional

  11. Stages As Models of Scene Geometry

    NARCIS (Netherlands)

    Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.

    2010-01-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently,

  12. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    Science.gov (United States)

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  13. The elephant in the room: Inconsistency in scene viewing and representation.

    Science.gov (United States)

    Spotorno, Sara; Tatler, Benjamin W

    2017-10-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. While not studied in direct competition with each other (each studied in competition with diagnostic objects), we found that inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    Science.gov (United States)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To analyze complex 3D urban scenes, the acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this task remains challenging. Specifically, in this work: 1) A novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed and shown to be effective for extracting local geometric features from the 3D scene. 2) Instead of using individual points as the basic element, the supervoxel-based local context is designed to encapsulate the geometric characteristics of points, providing a flexible and robust solution for feature extraction. 3) Experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method is analyzed with respect to other methods. On the test dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
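
    As an illustration of local geometric features of the kind used for point-cloud labeling, the sketch below computes standard eigenvalue-based descriptors (linearity, planarity, scattering) over each point's spherical neighbourhood. The supervoxel construction and the paper's "detrending" step are not reproduced; the neighbourhood radius and the feature set are assumptions.

```python
# Sketch: eigenvalue-based local geometric features for 3D point-cloud classification.
import numpy as np
from scipy.spatial import cKDTree

def local_geometric_features(points, radius=0.5):
    """points: (N, 3) array; returns (N, 3) array of [linearity, planarity, scattering]."""
    tree = cKDTree(points)
    feats = np.zeros((len(points), 3))
    for i, p in enumerate(points):
        idx = tree.query_ball_point(p, r=radius)
        if len(idx) < 3:
            continue                                  # too few neighbours for a covariance
        cov = np.cov(points[idx].T)
        w = np.sort(np.linalg.eigvalsh(cov))[::-1] + 1e-12   # l1 >= l2 >= l3
        l1, l2, l3 = w
        feats[i] = [(l1 - l2) / l1,                   # linearity
                    (l2 - l3) / l1,                   # planarity
                    l3 / l1]                          # scattering
    return feats
```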

  15. Functional Organization of the Parahippocampal Cortex: Dissociable Roles for Context Representations and the Perception of Visual Scenes.

    Science.gov (United States)

    Baumann, Oliver; Mattingley, Jason B

    2016-02-24

    The human parahippocampal cortex has been ascribed central roles in both visuospatial and mnemonic processes. More specifically, evidence suggests that the parahippocampal cortex subserves both the perceptual analysis of scene layouts as well as the retrieval of associative contextual memories. It remains unclear, however, whether these two functional roles can be dissociated within the parahippocampal cortex anatomically. Here, we provide evidence for a dissociation between neural activation patterns associated with visuospatial analysis of scenes and contextual mnemonic processing along the parahippocampal longitudinal axis. We used fMRI to measure parahippocampal responses while participants engaged in a task that required them to judge the contextual relatedness of scene and object pairs, which were presented either as words or pictures. Results from combined factorial and conjunction analyses indicated that the posterior section of parahippocampal cortex is driven predominantly by judgments associated with pictorial scene analysis, whereas its anterior section is more active during contextual judgments regardless of stimulus category (scenes vs objects) or modality (word vs picture). Activation maxima associated with visuospatial and mnemonic processes were spatially segregated, providing support for the existence of functionally distinct subregions along the parahippocampal longitudinal axis and suggesting that, in humans, the parahippocampal cortex serves as a functional interface between perception and memory systems. Copyright © 2016 the authors 0270-6474/16/362536-07$15.00/0.

  16. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, to record a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and that can contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate 3D measurement and imaging system that would support law enforcement agents in quickly documenting and accurately recording a crime scene.

  17. How children remember neutral and emotional pictures: boundary extension in children's scene memories.

    Science.gov (United States)

    Candel, Ingrid; Merckelbach, Harald; Houben, Katrijn; Vandyck, Inne

    2004-01-01

    Boundary extension is the tendency to remember more of a scene than was actually shown. The dominant interpretation of this memory illusion is that it originates from schemata that people construct when viewing a scene. Evidence of boundary extension has been obtained primarily with adult participants who remember neutral pictures. The current study addressed the developmental stability of this phenomenon. Therefore, we investigated whether children aged 10-12 years display boundary extension for neutral pictures. Moreover, we examined emotional scene memory. Eighty-seven children drew pictures from memory after they had seen either neutral or emotional pictures. Both their neutral and emotional drawings revealed boundary extension. Apparently, the schema construction that underlies boundary extension is a robust and ubiquitous process.

  18. Attentional Bias towards Emotional Scenes in Boys with Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Pishyareh, Ebrahim; Tehrani-Doost, Mehdi; Mahmoodi-Gharaie, Javad; Khorrami, Anahita; Joudi, Mitra; Ahmadi, Mehrnoosh

    2012-01-01

    Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate visual directions of children with ADHD towards paired emotional scenes. Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented paired emotional and neutral scenes in the four following categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using the eye tracking system. The number and duration of first fixation and duration of first gaze were compared between the two groups using the MANOVA analysis. The performance of each group in different categories was also analyzed using the Friedman test. With regard to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, ADHD children spent less time on pleasant pictures compared to the normal group while they were looking at pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant-neutral pairs (P<0.01). Based on the findings of this study, it could be concluded that children with ADHD attend to unpleasant conditions more than normal children do, which leads to their emotional reactivity.

  19. Semantic Wavelet-Induced Frequency-Tagging (SWIFT) Periodically Activates Category Selective Areas While Steadily Activating Early Visual Areas.

    Directory of Open Access Journals (Sweden)

    Roger Koenig-Robert

    Full Text Available Primate visual systems process natural images in a hierarchical manner: at the early stage, neurons are tuned to local image features, while neurons in high-level areas are tuned to abstract object categories. Standard models of visual processing assume that the transition of tuning from image features to object categories emerges gradually along the visual hierarchy. Direct tests of such models remain difficult due to confounding alterations in low-level image properties when contrasting distinct object categories. When such a contrast is performed in a classic functional localizer method, the desired activation in high-level visual areas is typically accompanied by activation in early visual areas. Here we used a novel image-modulation method called SWIFT (semantic wavelet-induced frequency-tagging), a variant of frequency-tagging techniques. Natural images modulated by SWIFT reveal object semantics periodically while keeping low-level properties constant. Using functional magnetic resonance imaging (fMRI), we indeed found that faces and scenes modulated with SWIFT periodically activated the prototypical category-selective areas while they elicited sustained and constant responses in early visual areas. SWIFT and the localizer were selective and specific to a similar extent in activating category-selective areas. Only SWIFT progressively activated the visual pathway from low- to high-level areas, consistent with predictions from standard hierarchical models. We confirmed these results with criterion-free methods, generalizing the validity of our approach and showing that it is possible to dissociate neural activation in early and category-selective areas. Our results provide direct evidence for the hierarchical nature of the representation of visual objects along the visual stream and open up future applications of frequency-tagging methods in fMRI.

  20. Rapid discrimination of visual scene content in the human brain

    Science.gov (United States)

    Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.

    2007-01-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200−600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815

  1. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    Science.gov (United States)

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.

  2. Callous-unemotional, impulsive-irresponsible, and grandiose-manipulative traits: Distinct associations with heart rate, skin conductance, and startle responses to violent and erotic scenes.

    Science.gov (United States)

    Fanti, Kostas A; Kyranides, Melina N; Georgiou, Giorgos; Petridou, Maria; Colins, Olivier F; Tuvblad, Catherine; Andershed, Henrik

    2017-05-01

    The present study aimed to examine whether callous-unemotional, grandiose-manipulative, and impulsive-irresponsible dimensions of psychopathy are differentially related to various affective and physiological measures, assessed at baseline and in response to violent and erotic movie scenes. Data were collected from young adults (N = 101) at differential risk for psychopathic traits. Findings from regression analyses revealed a unique predictive contribution of grandiose-manipulative traits in particular to higher ratings of positive valence for violent scenes. Callous-unemotional traits were uniquely associated with lower levels of sympathy toward victims and lower ratings of fear and sadness during violent scenes. All three psychopathy dimensions and the total psychopathy scale showed negative zero-order correlations with heart rate at baseline, but regression analyses revealed that only grandiose manipulation was uniquely predictive of lower baseline heart rate. Grandiose manipulation was also significantly associated with lower baseline skin conductance. Regarding autonomic activity, findings revealed a unique negative association between grandiose manipulation and heart rate activity in response to violent scenes. In contrast, the impulsive-irresponsible dimension was positively related to heart rate activity during violent scenes. Finally, findings revealed that only callous-unemotional traits were negatively associated with startle potentiation in response to violent scenes. No associations during erotic scenes were identified. These findings point to unique associations of the three assessed dimensions of psychopathy with physiological measures, indicating that grandiose manipulation is associated with hypoarousal, impulsive irresponsibility with hyperarousal, and callous-unemotional traits with low emotional and fear responses to violent scenes. © 2017 Society for Psychophysiological Research.

  3. Emergency patients receiving anaesthesiologist-based pre-hospital treatment and subsequently released at the scene

    DEFF Research Database (Denmark)

    Højfeldt, S G; Sørensen, L P; Mikkelsen, Søren

    2014-01-01

    BACKGROUND: The Mobile Emergency Care Unit in Odense, Denmark consists of a rapid response car, manned with an anaesthesiologist and an emergency medical technician. Eleven per cent of the patients are released at the scene following treatment. The aim of the study was to investigate which...... investigated. In each patient, diagnosis as well as any renewed contact with the Mobile Emergency Care Unit or the hospital within 24 h was registered. RESULTS: One thousand six hundred and nine patients were released at the scene. Diagnoses within the category 'examination and investigation' [International...... with the Mobile Emergency Care Unit within 24 h. Of the 143 victims of traffic accidents, 19 (13%) required renewed contact with the emergency department and one required admission to hospital (0.7%). Of all 1609 patients, four died within 24 h of contact (0.2%). CONCLUSION: Patients treated and released...

  4. Stages as models of scene geometry.

    Science.gov (United States)

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.

  5. Sex differences in the brain response to affective scenes with or without humans.

    Science.gov (United States)

    Proverbio, Alice Mado; Adorni, Roberta; Zani, Alberto; Trestianu, Laura

    2009-10-01

    Recent findings have demonstrated that women might be more reactive than men to viewing painful stimuli (vicarious response to pain), and therefore more empathic [Han, S., Fan, Y., & Mao, L. (2008). Gender difference in empathy for pain: An electrophysiological investigation. Brain Research, 1196, 85-93]. We investigated whether the two sexes differed in their cerebral responses to affective pictures portraying humans in different positive or negative contexts compared to natural or urban scenarios. 440 IAPS slides were presented to 24 Italian students (12 women and 12 men). Half the pictures displayed humans while the remaining scenes lacked visible persons. ERPs were recorded from 128 electrodes and swLORETA (standardized weighted Low-Resolution Electromagnetic Tomography) source reconstruction was performed. Occipital P115 was greater in response to persons than to scenes and was affected by the emotional valence of the human pictures. This suggests that processing of biologically relevant stimuli is prioritized. Orbitofrontal N2 was greater in response to positive than negative human pictures in women but not in men, and not to scenes. A late positivity (LP) to suffering humans far exceeded the response to negative scenes in women but not in men. In both sexes, the contrast suffering-minus-happy humans revealed a difference in the activation of the occipito/temporal, right occipital (BA19), bilateral parahippocampal, left dorsal prefrontal cortex (DPFC) and left amygdala. However, increased right amygdala and right frontal area activities were observed only in women. The humans-minus-scenes contrast revealed a difference in the activation of the middle occipital gyrus (MOG) in men, and of the left inferior parietal (BA40), left superior temporal gyrus (STG, BA38) and right cingulate (BA31) in women (270-290 ms). These data indicate a sex-related difference in the brain response to humans, possibly supporting human empathy.

  6. Semantic Reasoning for Scene Interpretation

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Baseski, Emre; Pugeault, Nicolas

    2008-01-01

    In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable representational framework for this representation and demonstrat...

  7. Attentional Bias towards Emotional Scenes in Boys with Attention Deficit Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Ebrahim Pishyareh

    2012-06-01

    Full Text Available Objective: Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate visual directions of children with ADHD towards paired emotional scenes. Method: Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented paired emotional and neutral scenes in the four following categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using the eye tracking system. The number and duration of first fixation and duration of first gaze were compared between the two groups using the MANOVA analysis. The performance of each group in different categories was also analyzed using the Friedman test. Results: With regard to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, ADHD children spent less time on pleasant pictures compared to the normal group while they were looking at pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant-neutral pairs (P<0.01). Conclusion: Based on the findings of this study, it could be concluded that children with ADHD attend to unpleasant conditions more than normal children do, which leads to their emotional reactivity.

  8. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    Science.gov (United States)

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
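
    The published tutorial works in GIMP and analyses the edited images in MATLAB. As a rough analogue of that analysis step, the Python sketch below quantifies the physical properties of a change (its area, mean luminance difference, and centroid) from an original/modified image pair; the file names are hypothetical and the threshold is illustrative.

        import numpy as np
        from PIL import Image

        # Hypothetical file names; the original tutorial edits scenes in GIMP and
        # analyses them in MATLAB -- this is only a rough Python analogue.
        original = np.asarray(Image.open("scene_original.png").convert("L"), dtype=float)
        modified = np.asarray(Image.open("scene_modified.png").convert("L"), dtype=float)

        diff = np.abs(original - modified)
        changed = diff > 10                        # luminance threshold for "changed" pixels
        area_pct = 100.0 * changed.mean()          # size of the change, % of image area
        mean_delta = diff[changed].mean() if changed.any() else 0.0
        ys, xs = np.nonzero(changed)
        centroid = (xs.mean(), ys.mean()) if changed.any() else (np.nan, np.nan)

        print(f"change covers {area_pct:.2f}% of the image, "
              f"mean luminance difference {mean_delta:.1f}, centred at {centroid}")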

  9. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
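
    A minimal sketch of the intensity-adjustment idea follows, under a Lambertian shading assumption; the estimated normals and known light direction are assumptions of the sketch, not the paper's estimation procedure. Dividing the observed intensity by the cosine shading term yields values that approximate reflectivity and can then be histogrammed for segmentation.

        import numpy as np

        def estimate_albedo(intensity, normals, light_dir, eps=0.05):
            """Divide out Lambertian shading so histogram segmentation sees roughly
            constant reflectivity per surface. `normals` is HxWx3 (estimated),
            `light_dir` a unit vector; both are assumptions of this sketch.
            """
            light = np.asarray(light_dir, dtype=float)
            light /= np.linalg.norm(light)
            shading = np.clip(normals @ light, eps, 1.0)   # cos(angle between normal and light)
            return intensity / shading                      # adjusted intensity ~ albedo

        # Example: a tilted plane under overhead light keeps a flat albedo map.
        H, W = 64, 64
        normals = np.zeros((H, W, 3)); normals[..., 1] = 0.5; normals[..., 2] = np.sqrt(0.75)
        intensity = 0.6 * np.clip(normals @ np.array([0.0, 0.0, 1.0]), 0, 1)
        albedo = estimate_albedo(intensity, normals, light_dir=[0.0, 0.0, 1.0])
        print(albedo.min(), albedo.max())   # ~0.6 everywhere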

  10. Pooling Objects for Recognizing Scenes without Examples

    NARCIS (Netherlands)

    Kordumova, S.; Mensink, T.; Snoek, C.G.M.

    2016-01-01

    In this paper we aim to recognize scenes in images without using any scene images as training data. Different from attribute based approaches, we do not carefully select the training classes to match the unseen scene classes. Instead, we propose a pooling over ten thousand of off-the-shelf object

  11. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, correlated topic vector, to model such semantic correlations. Originating from the correlated topic model, correlated topic vector aims to naturally utilize the correlations among topics, which are seldom considered in conventional feature encodings, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, correlated topic vector inherits the advantages of Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with the deep convolutional neural network (CNN) features and Gibbs sampling solution, correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  12. Cortical networks dynamically emerge with the interplay of slow and fast oscillations for memory of a natural scene.

    Science.gov (United States)

    Mizuhara, Hiroaki; Sato, Naoyuki; Yamaguchi, Yoko

    2015-05-01

    Neural oscillations are crucial for revealing dynamic cortical networks and for serving as a possible mechanism of inter-cortical communication, especially in association with mnemonic function. The interplay of the slow and fast oscillations might dynamically coordinate the mnemonic cortical circuits to rehearse stored items during working memory retention. We recorded simultaneous EEG-fMRI during a working memory task involving a natural scene to verify whether the cortical networks emerge with the neural oscillations for memory of the natural scene. The slow EEG power was enhanced in association with the better accuracy of working memory retention, and accompanied cortical activities in the mnemonic circuits for the natural scene. Fast oscillation showed a phase-amplitude coupling to the slow oscillation, and its power was tightly coupled with the cortical activities for representing the visual images of natural scenes. The mnemonic cortical circuit with the slow neural oscillations would rehearse the distributed natural scene representations with the fast oscillation for working memory retention. The coincidence of the natural scene representations could be obtained by the slow oscillation phase to create a coherent whole of the natural scene in the working memory. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  14. A hierarchical inferential method for indoor scene classification

    Directory of Open Access Journals (Sweden)

    Jiang Jingzhe

    2017-12-01

    Full Text Available Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base. These shortcomings restrict the performance of classification. The structure of a semantic hierarchy was proposed to describe similarities of different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes were also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we proposed an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  15. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  16. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
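
    The efficiency measure used in both records above is the slope of the RT × Set Size function, in ms per item. A minimal sketch of that computation, with purely illustrative numbers, follows.

        import numpy as np

        # Illustrative data only: labeled-region counts ("set size") and mean RTs (ms).
        set_size = np.array([20, 40, 60, 80, 100])
        rt_ms    = np.array([620, 710, 820, 890, 1010])

        slope, intercept = np.polyfit(set_size, rt_ms, 1)
        print(f"search slope: {slope:.1f} ms/item, intercept: {intercept:.0f} ms")
        # A shallow slope (a few ms/item) indicates efficient search, as reported
        # for named-object search in labeled real scenes.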

  17. Scene analysis in the natural environment

    DEFF Research Database (Denmark)

    Lewicki, Michael S; Olshausen, Bruno A; Surlykke, Annemarie

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches th...... ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals....

  18. The time course of natural scene perception with reduced attention.

    Science.gov (United States)

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention. Copyright © 2016 the American Physiological Society.

  19. 'Working behind the scenes'. An ethical view of mental health nursing and first-episode psychosis.

    Science.gov (United States)

    Moe, Cathrine; Kvig, Erling I; Brinchmann, Beate; Brinchmann, Berit S

    2013-08-01

    The aim of this study was to explore and reflect upon mental health nursing and first-episode psychosis. Seven multidisciplinary focus group interviews were conducted, and data analysis was influenced by a grounded theory approach. The core category was found to be a process named 'working behind the scenes'. It is presented along with three subcategories: 'keeping the patient in mind', 'invisible care' and 'invisible network contact'. Findings are illuminated with the ethical principles of respect for autonomy and paternalism. Nursing care is dynamic, and clinical work moves along continuums between autonomy and paternalism and between ethical reflective and non-reflective practice. 'Working behind the scenes' is considered to be in a paternalistic area, containing an ethical reflection. Treating and caring for individuals experiencing first-episode psychosis demands an ethical awareness and great vigilance by nurses. The study is a contribution to reflection upon everyday nursing practice, and the conclusion concerns the importance of making invisible work visible.

  20. Interaction between scene-based and array-based contextual cueing.

    Science.gov (United States)

    Rosenbaum, Gail M; Jiang, Yuhong V

    2013-07-01

    Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

  1. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method of the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature, which utilizes landform information, links the top-level task of superpixel extraction of landforms to the bottom-level task of expressing feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering (SLIC)-based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments of image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
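
    The general pipeline can be sketched in Python with scikit-image and scikit-learn: SLIC superpixel segmentation followed by per-superpixel descriptors pooled into a Bag-of-Words histogram. The descriptor below is a toy mean-colour feature, not the paper's Lie group-based, saliency-weighted feature, and the image path is hypothetical.

        import numpy as np
        from skimage import io, color
        from skimage.segmentation import slic
        from sklearn.cluster import KMeans

        image = io.imread("aerial_tile.jpg")               # hypothetical aerial image
        segments = slic(image, n_segments=300, compactness=10, start_label=0)

        # Toy per-superpixel descriptor: mean Lab colour (the paper instead uses
        # filter-bank features quantified on a Lie group and weighted by saliency).
        lab = color.rgb2lab(image)
        descriptors = np.array([lab[segments == s].mean(axis=0)
                                for s in np.unique(segments)])

        # Bag-of-Words: quantise descriptors against a codebook and pool into a
        # normalised histogram that represents the scene.
        codebook = KMeans(n_clusters=32, n_init=10, random_state=0).fit(descriptors)
        words = codebook.predict(descriptors)
        hist = np.bincount(words, minlength=32).astype(float)
        hist /= hist.sum()

    In practice the codebook would be learned from descriptors of many training images rather than a single tile, and the resulting histograms fed to a scene classifier.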

  2. Multi- and hyperspectral scene modeling

    Science.gov (United States)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer, POV-Ray (Persistence Of Vision Raytracer), to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
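
    A minimal sketch of how such per-band automation might be driven from outside the raytracer is given below, assuming a scene file canopy.pov that #includes a generated parameter file; the band names, reflectance values, and paths are illustrative, and the paper itself uses POV-Ray's own scripting environment rather than this wrapper.

        import subprocess

        # Illustrative per-band leaf reflectances (e.g. blue, green, red, NIR).
        bands = {"b450": 0.05, "b550": 0.12, "b650": 0.08, "b850": 0.45}

        for name, reflectance in bands.items():
            # The main scene (canopy.pov, an assumption of this sketch) is expected to
            # `#include "band_params.inc"` and use Leaf_Reflectance in its pigments.
            with open("band_params.inc", "w") as f:
                f.write(f"#declare Leaf_Reflectance = {reflectance};\n")
            subprocess.run(["povray", "canopy.pov", "+W512", "+H512",
                            f"+Orender_{name}.png"], check=True)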

  3. Specific neural traces for intonational discourse categories as revealed by human-evoked potentials.

    Science.gov (United States)

    Borràs-Comes, Joan; Costa-Faidella, Jordi; Prieto, Pilar; Escera, Carles

    2012-04-01

    The neural representation of segmental and tonal phonological distinctions has been shown by means of the MMN ERP, yet this is not the case for intonational discourse contrasts. In Catalan, a rising-falling intonational sequence can be perceived as a statement or as a counterexpectational question, depending exclusively on the size of the pitch range interval of the rising movement. We tested here, using the MMN, whether such categorical distinctions elicited distinct neurophysiological patterns of activity, supporting their specific neural representation. From a behavioral identification experiment, we set the boundary between the two categories and defined four stimuli across the continuum. Although the physical distance between each pair of stimuli was kept constant, the central pair represented an across-category contrast, whereas the other pairs represented within-category contrasts. These four auditory stimuli were contrasted by pairs in three different oddball blocks. The mean amplitude of the MMN was larger for the across-category contrast, suggesting that intonational contrasts in the target language can be encoded automatically in the auditory cortex. These results are in line with recent findings in other fields of linguistics, showing that, when a boundary between categories is crossed, the MMN response is not just larger but rather includes a separate subcomponent.

  4. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    Science.gov (United States)

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As is the case in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction.

  5. Scene Integration for Online VR Advertising Clouds

    Directory of Open Access Journals (Sweden)

    Michael Kalochristianakis

    2014-12-01

    Full Text Available This paper presents a scene composition approach that allows the combinational use of standard three-dimensional objects, called models, in order to create X3D scenes. The module is an integral part of a broader design aiming to construct large-scale online advertising infrastructures that rely on virtual reality technologies. The architecture addresses a number of problems regarding remote rendering for low-end devices and, last but not least, the provision of scene composition and integration. Since viewers do not keep information regarding individual input models or scenes, composition requires the consideration of mechanisms that add state to viewing technologies. As part of this work, we extended a well-known, open-source X3D authoring tool.

  6. Advanced radiometric and interferometric millimeter-wave scene simulations

    Science.gov (United States)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
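
    As a toy illustration of the aperture-synthesis idea (recovering a scene from a small number of spatial-frequency components), the sketch below samples a sparse subset of an image's Fourier plane and reconstructs it with a zero-filled inverse FFT. This is only a crude stand-in for the reconstruction algorithms described above, not the ARMSS code or TRW's method.

        import numpy as np

        rng = np.random.default_rng(1)
        scene = np.zeros((128, 128))
        scene[40:70, 50:90] = 1.0                      # toy "warm" target on a cold background

        spectrum = np.fft.fft2(scene)                  # full spatial-frequency content
        mask = rng.random(spectrum.shape) < 0.15       # keep ~15% of the (u,v) samples
        fy = np.abs(np.fft.fftfreq(128))[:, None]
        fx = np.abs(np.fft.fftfreq(128))[None, :]
        mask |= (fy < 0.05) & (fx < 0.05)              # always keep the low-frequency core

        # Zero-filled inverse transform (taking the real part); a toy stand-in,
        # not a physically faithful interferometric imager.
        reconstruction = np.fft.ifft2(spectrum * mask).real
        print("mean reconstruction error:", np.abs(reconstruction - scene).mean())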

  7. Three-dimensional measurement system for crime scene documentation

    Science.gov (United States)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurements (such as photogrammetry, time-of-flight, structure-from-motion or structured-light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects the actual standards in crime scene documentation: it is designed to perform measurement in two stages. The first stage of documentation, the most general, is prepared with a scanner of relatively low spatial resolution but large measuring volume and is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting with scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene, and many others. In this paper we present our measuring system and the developed software, together with the outcome of research on the metrological validation of the scanners, performed according to the VDI/VDE standard, and the outcome of measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  8. Moving through a multiplex holographic scene

    Science.gov (United States)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  9. IR characteristic simulation of city scenes based on radiosity model

    Science.gov (United States)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between objects. A method based on a radiosity model has been developed to describe these complex effects and to enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of object surfaces, and a radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of objects in the infrared range, we could obtain the IR characteristics of the scene. Real infrared images and model predictions were shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes and effectively displays the infrared shadow effects and the radiation interactions between objects in city scenes.
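
    The heat-balance step can be illustrated with an explicit finite-difference update for a single surface element driven by absorbed solar radiation, convective exchange, and longwave emission. The coefficients and forcing below are illustrative assumptions, not the paper's parameterisation, and the radiosity exchange between facets is omitted.

        import numpy as np

        SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4

        def surface_temperature(hours, solar, t_air=293.0, alpha=0.7, eps=0.95,
                                h_conv=10.0, heat_capacity=2.0e5, dt=60.0):
            """Explicit finite-difference heat balance for one facet (illustrative
            coefficients, not the paper's parameterisation).

            hours    : total simulated time in hours
            solar(t) : callable giving solar irradiance in W/m^2 at time t (seconds)
            """
            T = t_air
            temps = []
            for step in range(int(hours * 3600 / dt)):
                t = step * dt
                q = (alpha * solar(t)                     # absorbed shortwave
                     + h_conv * (t_air - T)               # convective exchange
                     - eps * SIGMA * T**4)                # longwave emission
                T += dt * q / heat_capacity               # per-area heat capacity, J m^-2 K^-1
                temps.append(T)
            return np.array(temps)

        # Example: a half-sine "day" of sunshine over 12 hours.
        day = surface_temperature(12, lambda t: 800.0 * max(np.sin(np.pi * t / (12 * 3600)), 0.0))
        print(f"peak surface temperature: {day.max():.1f} K")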

  10. The primal scene and symbol formation.

    Science.gov (United States)

    Niedecken, Dietmut

    2016-06-01

    This article discusses the meaning of the primal scene for symbol formation by exploring its way of processing in a child's play. The author questions the notion that a sadomasochistic way of processing is the only possible one. A model of an alternative mode of processing is being presented. It is suggested that both ways of processing intertwine in the "fabric of life" (D. Laub). Two clinical vignettes, one from an analytic child psychotherapy and the other from the analysis of a 30-year-old female patient, illustrate how the primal scene is being played out in the form of a terzet. The author explores whether the sadomasochistic way of processing actually precedes the "primal scene as a terzet". She discusses whether it could even be regarded as a precondition for the formation of the latter or, alternatively, if the "combined parent-figure" gives rise to ways of processing. The question is being left open. Finally, it is shown how both modes of experiencing the primal scene underlie the discursive and presentative symbol formation, respectively. Copyright © 2015 Institute of Psychoanalysis.

  11. Modeling global scene factors in attention

    Science.gov (United States)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America
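
    A rough sketch of such a global, low-level scene descriptor is to pool oriented filter energies over a coarse spatial grid (a gist-like feature). The code below does this with Gabor filters from scikit-image; it is a common approximation of this family of features, not the author's exact implementation, and the image path is hypothetical.

        import numpy as np
        from skimage import io, color, transform
        from skimage.filters import gabor

        def gist_like_descriptor(path, grid=4, frequencies=(0.1, 0.25),
                                 orientations=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
            """Pool oriented filter energies over a coarse grid -- a rough stand-in
            for global 'gist' features used to prime object presence and location."""
            gray = color.rgb2gray(io.imread(path))
            gray = transform.resize(gray, (128, 128), anti_aliasing=True)
            cells = []
            for f in frequencies:
                for theta in orientations:
                    real, imag = gabor(gray, frequency=f, theta=theta)
                    energy = real**2 + imag**2
                    # average the energy inside each grid cell (grid x grid pooling)
                    pooled = energy.reshape(grid, 128 // grid, grid, 128 // grid).mean(axis=(1, 3))
                    cells.append(pooled.ravel())
            return np.concatenate(cells)      # e.g. 2 x 4 x 16 = 128-dimensional descriptor

        # descriptor = gist_like_descriptor("street_scene.jpg")   # hypothetical image path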

  12. Presentation of 3D Scenes Through Video Example.

    Science.gov (United States)

    Baldacci, Andrea; Ganovelli, Fabio; Corsini, Massimiliano; Scopigno, Roberto

    2017-09-01

    Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers or Cultural Heritage professionals; however, it is usually time-consuming and, in order to obtain high-quality results, the support of a film maker/computer animation expert is necessary. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video, on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user-study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
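
    The core intuition, that two videos of static environments are similar when their optical flows are similar, can be sketched with OpenCV: compute dense Farneback flow for the example clip and for a candidate render of the 3D scene and compare the fields. The crude scalar comparison below is an illustration only, not the paper's actual objective, and the file names are hypothetical.

        import cv2
        import numpy as np

        def mean_flow(video_path, max_frames=100):
            """Average dense optical-flow field of a clip (Farneback), resized to a
            common resolution so two clips can be compared directly."""
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            prev = cv2.cvtColor(cv2.resize(prev, (160, 90)), cv2.COLOR_BGR2GRAY)
            flows = []
            while len(flows) < max_frames:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(cv2.resize(frame, (160, 90)), cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                flows.append(flow)
                prev = gray
            cap.release()
            return np.mean(flows, axis=0)

        # Hypothetical file names: compare the example video with a candidate render.
        # A smaller mean endpoint difference indicates more similar camera motion.
        diff = np.linalg.norm(mean_flow("example.mp4") - mean_flow("render.mp4"), axis=-1).mean()
        print(f"mean flow difference: {diff:.3f} px/frame")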

  13. 47 CFR 80.1127 - On-scene communications.

    Science.gov (United States)

    2010-10-01

    ....1127 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Global Maritime Distress and Safety System (GMDSS) Operating Procedures for Distress and Safety Communications § 80.1127 On-scene communications. (a) On-scene communications...

  14. FINANCIAL CONTROL AS A CATEGORY

    Directory of Open Access Journals (Sweden)

    Andrey Yu. Volkov

    2014-01-01

    Full Text Available The article reveals the basics of “financial control” as a category. The main attention is concentrated on “control” itself (as a term), the multiplicity of interpretations of the term “financial control”, and its juridical and practical application. The duality of the financial control category is detected. The identity of the terms “financial control” and “state financial control” is justified. The article also offers ways of developing the legal regulation of financial control.

  15. Accumulating and remembering the details of neutral and emotional natural scenes.

    Science.gov (United States)

    Melcher, David

    2010-01-01

    In contrast to our rich sensory experience with complex scenes in everyday life, the capacity of visual working memory is thought to be quite limited. Here, memory for the details of naturalistic scenes was examined as a function of display duration, emotional valence of the scene, and delay before test. Individual differences in working memory and long-term memory for pictorial scenes were examined in Experiment 1. The accumulation of memory for emotional scenes and the retention of these details in long-term memory were investigated in Experiment 2. Although there were large individual differences in performance, memory for scene details generally exceeded the traditional working memory limit within a few seconds. Information about positive scenes was learned most quickly, while negative scenes showed the worst memory for details. The overall pattern of results was consistent with the idea that both short-term and long-term representations are mixed together in a medium-term 'online' memory for scenes.

  16. 3D Traffic Scene Understanding From Movable Platforms.

    Science.gov (United States)

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

  17. Maxwellian Eye Fixation during Natural Scene Perception

    Directory of Open Access Journals (Sweden)

    Jean Duchesne

    2012-01-01

    Full Text Available When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes.

  18. Maxwellian Eye Fixation during Natural Scene Perception

    Science.gov (United States)

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987

  19. [Simultanagnosia and scene agnosia induced by right posterior cerebral artery infarction: a case report].

    Science.gov (United States)

    Kobayashi, Yasutaka; Muramatsu, Tomoko; Sato, Mamiko; Hayashi, Hiromi; Miura, Toyoaki

    2015-01-01

    A 68-year-old man was admitted to our hospital for rehabilitation of topographical disorientation. Brain magnetic resonance imaging revealed an infarction in the medial part of the right occipital lobe. On neuropsychological testing, he scored low on the visual information-processing task; however, his overall cognitive function was retained. While describing the context picture of the Visual Perception Test for Agnosia, he could identify parts of the picture but could not explain the content of the picture as a whole, representing so-called simultanagnosia. Further, he could perceive the form of both familiar and new scenes but could not identify them, representing so-called scene agnosia. We report this case because simultanagnosia associated with a right occipital lobe lesion is rare.

  20. Selective scene perception deficits in a case of topographical disorientation.

    Science.gov (United States)

    Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris

    2017-07-01

    Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Semi-Supervised Multitask Learning for Scene Recognition.

    Science.gov (United States)

    Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao

    2015-09-01

    Scene recognition has been widely studied to understand visual information at the level of objects and their relationships. Many methods have been proposed for scene recognition. However, they have difficulty improving accuracy, mainly due to two limitations: 1) lack of analysis of the intrinsic relationships across different scales, say, the initial input and its down-sampled versions, and 2) the existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce these two limitations. To address the first limitation, we propose a multitask model that integrates scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of the data. SFSMR combines the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR, and propose the semi-supervised learning method to reduce the two limitations. Experimental results show improved accuracy in scene recognition.
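A minimal single-task sketch of the manifold-regularization-plus-sparsity idea behind SFSMR is given below: a least-squares fit on labeled samples, a graph-Laplacian smoothness term built from labeled and unlabeled samples, and an L1 proximal step that drives redundant feature weights to zero. Variable names, hyperparameters, and the plain proximal-gradient solver are illustrative assumptions; the paper's actual formulation and its coupling with the multitask, multi-resolution model are more involved.

```python
import numpy as np
from sklearn.neighbors import kneighbors_graph

def sfsmr_sketch(X, y, labeled_idx, alpha=0.1, beta=0.1, lr=0.01, n_iter=500):
    """Select features via L1-sparsified, manifold-regularized regression."""
    # Similarity graph over all samples (labeled and unlabeled) and its Laplacian.
    A = kneighbors_graph(X, n_neighbors=10, mode="connectivity").toarray()
    A = 0.5 * (A + A.T)
    L = np.diag(A.sum(axis=1)) - A
    Xl, yl = X[labeled_idx], y[labeled_idx]
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        # Gradient of the squared loss plus the manifold smoothness term.
        grad = Xl.T @ (Xl @ w - yl) / len(yl) + beta * (X.T @ (L @ (X @ w)))
        w = w - lr * grad
        # Proximal (soft-thresholding) step for the L1 sparsity penalty.
        w = np.sign(w) * np.maximum(np.abs(w) - lr * alpha, 0.0)
    return w  # near-zero entries mark features treated as redundant
```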

  2. What the voice reveals : Within- and between-category stereotyping on the basis of voice

    NARCIS (Netherlands)

    Ko, S.J.; Judd, C.M.; Blair, I.V.

    The authors report research that attempts to shift the traditional focus of visual cues to auditory cues as a basis for stereotyping. Moreover, their approach examines whether gender-signaling vocal cues lead not only to between-category but also to within-category gender stereotyping. Study 1

  3. Political conservatism predicts asymmetries in emotional scene memory.

    Science.gov (United States)

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes, and participants had to respond whether each scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology. Published by Elsevier B.V.

  4. COGNITIVE-COMMUNICATIVE PERSONALITY CATEGORY IN THE KAZAKH LANGUAGE

    Directory of Open Access Journals (Sweden)

    Orynay Sagingalievna Zhubaeva

    2018-02-01

    Full Text Available The purpose of the research is to reveal the anthropocentric character of grammatical categories in their meaning and functioning. Materials and methods. According to the research objectives and goals, the methods used were as follows: the descriptive method, general scientific methods of analysis and synthesis, cognitive analysis, the method of experiment, contextual analysis, structural-semantic analysis, the transformation technique, and comparative analysis. Results. For the first time in Kazakh linguistics, the substantive aspect of grammatical categories is characterized as a result of both conceptualization and categorization processes. Based on a generalization and comparative analysis of the nature and forms in which the human factor is reflected in Kazakh grammatical categories, the national-cultural specificity of grammatical categories has been revealed. Practical implications. The research materials can be used in theoretical courses on grammar and linguistics, as well as in the development of special courses on cognitive linguistics, cognitive grammar, etc.

  5. Being There: (Re)Making the Assessment Scene

    Science.gov (United States)

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…

  6. Transcriptome analysis by GeneTrail revealed regulation of functional categories in response to alterations of iron homeostasis in Arabidopsis thaliana

    Directory of Open Access Journals (Sweden)

    Lenhof Hans-Peter

    2011-05-01

    Full Text Available Abstract Background High-throughput technologies have opened new avenues to study biological processes and pathways. The interpretation of the immense amount of data sets generated nowadays needs to be facilitated in order to enable biologists to identify complex gene networks and functional pathways. To cope with this task, multiple computer-based programs have been developed. GeneTrail is a freely available online tool that screens comparative transcriptomic data for differentially regulated functional categories and biological pathways extracted from common databases such as KEGG, Gene Ontology (GO), TRANSPATH, and TRANSFAC. Additionally, GeneTrail offers a feature that allows screening of individually defined biological categories that are relevant for the respective research topic. Results We have set up GeneTrail for use with Arabidopsis thaliana. To test the functionality of this tool for plant analysis, we generated transcriptome data of root and leaf responses to Fe deficiency and of the Arabidopsis metal homeostasis mutant nas4x-1. We performed Gene Set Enrichment Analysis (GSEA) with eight meaningful pairwise comparisons of transcriptome data sets. We were able to uncover several functional pathways, including metal homeostasis, that were affected in our experimental situations. Representation of the differentially regulated functional categories in Venn diagrams uncovered regulatory networks at the level of whole functional pathways. Over-Representation Analysis (ORA) of differentially regulated genes identified in pairwise comparisons revealed specific functional plant physiological categories as major targets upon Fe deficiency and in nas4x-1. Conclusion Here, we obtained supporting evidence that the nas4x-1 mutant was defective in metal homeostasis. It was confirmed that nas4x-1 showed Fe deficiency in roots and signs of Fe deficiency and Fe sufficiency in leaves. Besides metal homeostasis, biotic stress, root carbohydrate, leaf

  7. Structural and effective connectivity reveals potential network-based influences on category-sensitive visual areas

    Directory of Open Access Journals (Sweden)

    Nicholas Furl

    2015-05-01

    Full Text Available Visual category perception is thought to depend on brain areas that respond specifically when certain categories are viewed. These category-sensitive areas are often assumed to be modules (with some degree of processing autonomy) and to act predominantly on feedforward visual input. This modular view can be complemented by a view that treats brain areas as elements within more complex networks and as influenced by network properties. This network-oriented viewpoint is emerging from studies using either diffusion tensor imaging to map structural connections or effective connectivity analyses to measure how their functional responses influence each other. This literature motivates several hypotheses that predict category-sensitive activity based on network properties. Large, long-range fiber bundles such as inferior fronto-occipital, arcuate and inferior longitudinal fasciculi are associated with behavioural recognition and could play crucial roles in conveying backward influences on visual cortex from anterior temporal and frontal areas. Such backward influences could support top-down functions such as visual search and emotion-based visual modulation. Within visual cortex itself, areas sensitive to different categories appear well-connected (e.g., face areas connect to object- and motion-sensitive areas) and their responses can be predicted by backward modulation. Evidence supporting these propositions remains incomplete and underscores the need for better integration of DTI and functional imaging.

  8. [Perception of objects and scenes in age-related macular degeneration].

    Science.gov (United States)

    Tran, T H C; Boucart, M

    2012-01-01

    Vision related quality of life questionnaires suggest that patients with AMD exhibit difficulties in finding objects and in mobility. In the natural environment, objects seldom appear in isolation. They appear in a spatial context which may obscure them in part or place obstacles in the patient's path. Furthermore, the luminance of a natural scene varies as a function of the hour of the day and the light source, which can alter perception. This study aims to evaluate recognition of objects and natural scenes by patients with AMD, by using photographs of such scenes. Studies demonstrate that AMD patients are able to categorize scenes as nature scenes or urban scenes and to discriminate indoor from outdoor scenes with a high degree of precision. They detect objects better in isolation, in color, or against a white background than in their natural contexts. These patients encounter more difficulties than normally sighted individuals in detecting objects in a low-contrast, black-and-white scene. These results may have implications for rehabilitation, for layout of texts and magazines for the reading-impaired and for the rearrangement of the spatial environment of older AMD patients in order to facilitate mobility, finding objects and reducing the risk of falls. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  9. Construction and Optimization of Three-Dimensional Disaster Scenes within Mobile Virtual Reality

    Directory of Open Access Journals (Sweden)

    Ya Hu

    2018-06-01

    Full Text Available Because mobile virtual reality (VR) is both mobile and immersive, three-dimensional (3D) visualizations of disaster scenes based in mobile VR enable users to perceive and recognize disaster environments faster and better than is possible with other methods. To achieve immersion and prevent users from feeling dizzy, such visualizations require a high scene-rendering frame rate. However, the existing related visualization work cannot provide a sufficient solution for this purpose. This study focuses on the construction and optimization of a 3D disaster scene in order to satisfy the high frame-rate requirements for the rendering of 3D disaster scenes in mobile VR. First, the design of a plugin-free browser/server (B/S) architecture for 3D disaster scene construction and visualization based in mobile VR is presented. Second, certain key technologies for scene optimization are discussed, including diverse modes of scene data representation, representation optimization of mobile scenes, and adaptive scheduling of mobile scenes. By means of these technologies, smartphones with various performance levels can achieve higher scene-rendering frame rates and improved visual quality. Finally, using a flood disaster as an example, a plugin-free prototype system was developed, and experiments were conducted. The experimental results demonstrate that a 3D disaster scene constructed via the methods addressed in this study has a sufficiently high scene-rendering frame rate to satisfy the requirements for rendering a 3D disaster scene in mobile VR.

  10. Fixations on objects in natural scenes: dissociating importance from salience

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    2013-07-01

    Full Text Available The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object’s importance for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (common/important) or a rarely named (rare/unimportant) object, track the observers’ eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object’s importance suggests an analogy to the effects of word frequency on landing positions in reading.

  11. A statistical model for radar images of agricultural scenes

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
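The three model assumptions listed above lend themselves to a small Monte Carlo sketch: draw one reflectivity per field from a uniform distribution, apply identical multiplicative speckle to every field, and pool the pixels to obtain a whole-scene intensity histogram. The gamma (multi-look) speckle model and all parameter values below are illustrative assumptions, not figures from the paper.

```python
import numpy as np

def simulate_scene_histogram(n_fields=200, pixels_per_field=500, looks=4,
                             reflectivity_range=(0.1, 1.0), bins=100, seed=0):
    """Pooled intensity histogram of a many-field agricultural radar scene."""
    rng = np.random.default_rng(seed)
    # True reflectivity of each field: uniformly distributed (model assumption).
    sigma0 = rng.uniform(*reflectivity_range, size=n_fields)
    # L-look intensity speckle: gamma-distributed with mean equal to sigma0.
    intensity = rng.gamma(shape=looks, scale=sigma0[:, None] / looks,
                          size=(n_fields, pixels_per_field))
    hist, edges = np.histogram(intensity.ravel(), bins=bins, density=True)
    return hist, edges
```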

  12. The neural bases of spatial frequency processing during scene perception

    Science.gov (United States)

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

    Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
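For concreteness, the LSF/HSF distinction can be reproduced on any grayscale scene image by low-pass filtering: the blurred image keeps the coarse layout (LSF) and the residual keeps the fine detail (HSF). The Gaussian filter and cutoff below are an illustrative choice, not the filtering procedure of the studies reviewed.

```python
import numpy as np
from scipy import ndimage

def split_spatial_frequencies(image, sigma=8.0):
    """Split a grayscale scene into coarse (LSF) and fine (HSF) components."""
    image = np.asarray(image, dtype=float)
    lsf = ndimage.gaussian_filter(image, sigma=sigma)  # coarse blobs and layout
    hsf = image - lsf                                  # edges and fine detail
    return lsf, hsf
```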

  13. Detection of chromatic and luminance distortions in natural scenes.

    Science.gov (United States)

    Jennings, Ben J; Wang, Karen; Menzies, Samantha; Kingdom, Frederick A A

    2015-09-01

    A number of studies have measured visual thresholds for detecting spatial distortions applied to images of natural scenes. In one study, Bex [J. Vis. 10(2), 1 (2010), doi:10.1167/10.2.23] measured sensitivity to sinusoidal spatial modulations of image scale. Here, we measure sensitivity to sinusoidal scale distortions applied to the chromatic, luminance, or both layers of natural scene images. We first established that sensitivity does not depend on whether the undistorted comparison image was of the same or of a different scene. Next, we found that, when the luminance but not chromatic layer was distorted, performance was the same regardless of whether the chromatic layer was present, absent, or phase-scrambled; in other words, the chromatic layer, in whatever form, did not affect sensitivity to the luminance layer distortion. However, when the chromatic layer was distorted, sensitivity was higher when the luminance layer was intact compared to when absent or phase-scrambled. These detection threshold results complement the appearance of periodic distortions of the image scale: when the luminance layer is distorted visibly, the scene appears distorted, but when the chromatic layer is distorted visibly, there is little apparent scene distortion. We conclude that (a) observers have a built-in sense of how a normal image of a natural scene should appear, and (b) the detection of distortion in, as well as the apparent distortion of, natural scene images is mediated predominantly by the luminance layer and not chromatic layer.

  14. Semi-automatic scene generation using the Digital Anatomist Foundational Model.

    Science.gov (United States)

    Wong, B A; Rosse, C; Brinkley, J F

    1999-01-01

    A recent survey shows that a major impediment to more widespread use of computers in anatomy education is the inability to directly manipulate 3-D models, and to relate these to corresponding textual information. In the University of Washington Digital Anatomist Project we have developed a prototype Web-based scene generation program that combines the symbolic Foundational Model of Anatomy with 3-D models. A Web user can browse the Foundational Model (FM), then click to request that a 3-D scene be created of an object and its parts or branches. The scene is rendered by a graphics server, and a snapshot is sent to the Web client. The user can then manipulate the scene, adding new structures, deleting structures, rotating the scene, zooming, and saving the scene as a VRML file. Applications such as this, when fully realized with fast rendering and more anatomical content, have the potential to significantly change the way computers are used in anatomy education.
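The final step mentioned in the abstract, saving a generated scene as a VRML file, can be sketched in a few lines; the mapping from structure names to mesh URLs below is hypothetical and stands in for whatever the graphics server actually assembles from the Foundational Model.

```python
def scene_to_vrml(part_mesh_urls, path="scene.wrl"):
    """Write a minimal VRML 2.0 scene that inlines one mesh file per structure."""
    lines = ["#VRML V2.0 utf8"]
    for name, url in part_mesh_urls.items():
        lines.append(f"# {name}")                    # structure name as a comment
        lines.append(f'Inline {{ url "{url}" }}')    # reference the external mesh
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Hypothetical usage:
# scene_to_vrml({"femur": "meshes/femur.wrl", "patella": "meshes/patella.wrl"})
```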

  15. Visual search for changes in scenes creates long-term, incidental memory traces.

    Science.gov (United States)

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  16. The functional consequences of social distraction: Attention and memory for complex scenes.

    Science.gov (United States)

    Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia

    2017-01-01

    Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture to social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light onto the privileged attentional status of social stimuli and its functional consequences on memory across individuals. Copyright © 2016. Published by Elsevier B.V.

  17. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Dal Corso, Alessandro; Nielsen, Jannik Boll

    2017-01-01

    Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content.

  18. Dynamic Frames Based Generation of 3D Scenes and Applications

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2015-05-01

    Full Text Available Modern graphics/programming tools like Unity enable the creation of 3D scenes as well as 3D-scene-based program applications, including a full physical model, motion, sounds, lighting effects, etc. This paper deals with the use of a dynamic-frames-based generator for the automatic generation of a 3D scene and the related source code. The suggested model makes it possible to specify features of the 3D scene in the form of a textual specification, as well as to export such features from a 3D tool. This approach enables a higher level of code-generation flexibility and the reusability of the main code and scene artifacts in the form of textual templates. An example of the generated application is presented and discussed.

  19. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  20. Radiative transfer model for heterogeneous 3-D scenes

    Science.gov (United States)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A general mathematical framework for simulating processes in heterogeneous 3-D scenes is presented. Specifically, a model was designed and coded for application to radiative transfers in vegetative scenes. The model is unique in that it predicts (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy, (2) the spectral absorption as a function of location within the scene, and (3) the directional spectral radiance as a function of the sensor's location within the scene. The model was shown to follow known physical principles of radiative transfer. Initial verification of the model as applied to a soybean row crop showed that the simulated directional reflectance data corresponded relatively well in gross trends to the measured data. However, the model can be greatly improved by incorporating more sophisticated and realistic anisotropic scattering algorithms

  1. Two Distinct Scene-Processing Networks Connecting Vision and Memory.

    Science.gov (United States)

    Baldassano, Christopher; Esteva, Andre; Fei-Fei, Li; Beck, Diane M

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.

  2. Cognitive organization of roadway scenes : an empirical study.

    NARCIS (Netherlands)

    Gundy, C.M.

    1995-01-01

    This report describes six studies investigating the cognitive organization of roadway scenes. These scenes were represented by still photographs taken on a number of roads outside of built-up areas. Seventy-eight drivers, stratified by age and sex to simulate the Dutch driving population,

  3. A dose-dependent relationship between exposure to a street-based drug scene and health-related harms among people who use injection drugs.

    Science.gov (United States)

    Debeck, Kora; Wood, Evan; Zhang, Ruth; Buxton, Jane; Montaner, Julio; Kerr, Thomas

    2011-08-01

    While the community impacts of drug-related street disorder have been well described, lesser attention has been given to the potential health and social implications of drug scene exposure on street-involved people who use illicit drugs. Therefore, we sought to assess the impacts of exposure to a street-based drug scene among injection drug users (IDU) in a Canadian setting. Data were derived from a prospective cohort study known as the Vancouver Injection Drug Users Study. Four categories of drug scene exposure were defined based on the numbers of hours spent on the street each day. Three generalized estimating equation (GEE) logistic regression models were constructed to identify factors associated with varying levels of drug scene exposure (2-6, 6-15, over 15 hours) during the period of December 2005 to March 2009. Among our sample of 1,486 IDU, at baseline, a total of 314 (21%) fit the criteria for high drug scene exposure (>15 hours per day). In multivariate GEE analysis, factors significantly and independently associated with high exposure included: unstable housing (adjusted odds ratio [AOR] = 9.50; 95% confidence interval [CI], 6.36-14.20); daily crack use (AOR = 2.70; 95% CI, 2.07-3.52); encounters with police (AOR = 2.11; 95% CI, 1.62-2.75); and being a victim of violence (AOR = 1.49; 95 % CI, 1.14-1.95). Regular employment (AOR = 0.50; 95% CI, 0.38-0.65), and engagement with addiction treatment (AOR = 0.58; 95% CI, 0.45-0.75) were negatively associated with high exposure. Our findings indicate that drug scene exposure is associated with markers of vulnerability and higher intensity addiction. Intensity of drug scene exposure was associated with indicators of vulnerability to harm in a dose-dependent fashion. These findings highlight opportunities for policy interventions to address exposure to street disorder in the areas of employment, housing, and addiction treatment.
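An analysis of this shape can be fit with standard tools; the sketch below assumes a long-format table with hypothetical column names and uses a GEE logistic model with an exchangeable working correlation, mirroring (but not reproducing) the adjusted odds ratios reported above.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

def fit_exposure_gee(df):
    """GEE logistic regression of high drug-scene exposure on candidate factors."""
    predictors = ["unstable_housing", "daily_crack_use", "police_encounter",
                  "violence_victim", "regular_employment", "in_treatment"]
    exog = sm.add_constant(df[predictors])
    model = sm.GEE(df["high_exposure"], exog,
                   groups=df["participant_id"],          # repeated visits per person
                   family=sm.families.Binomial(),
                   cov_struct=sm.cov_struct.Exchangeable())
    result = model.fit()
    aor = np.exp(result.params).rename("AOR")            # adjusted odds ratios
    ci = np.exp(result.conf_int())                       # 95% confidence intervals
    return pd.concat([aor, ci], axis=1)
```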

  4. Performance Benefits with Scene-Linked HUD Symbology: An Attentional Phenomenon?

    Science.gov (United States)

    Levy, Jonathan L.; Foyle, David C.; McCann, Robert S.; Null, Cynthia H. (Technical Monitor)

    1999-01-01

    Previous research has shown that in a simulated flight task, navigating a path defined by ground markers while maintaining a target altitude is more accurate when an altitude indicator appears in a virtual "scene-linked" format (projected symbology moving as if it were part of the out-the-window environment) compared to the fixed-location, superimposed format found on present-day HUDs (Foyle, McCann & Shelden, 1995). One explanation of the scene-linked performance advantage is that attention can be divided between scene-linked symbology and the outside world more efficiently than between standard (fixed-position) HUD symbology and the outside world. The present study tested two alternative explanations by manipulating the location of the scene-linked HUD symbology relative to the ground path markers. Scene-linked symbology yielded better ground path-following performance than standard fixed-location superimposed symbology regardless of whether the scene-linked symbology appeared directly along the ground path or at various distances off the path. The results support the explanation that the performance benefits found with scene-linked symbology are attentional.

  5. Crime Scene Investigation.

    Science.gov (United States)

    Harris, Barbara; Kohlmeier, Kris; Kiel, Robert D.

    Casting students in grades 5 through 12 in the roles of reporters, lawyers, and detectives at the scene of a crime, this interdisciplinary activity involves participants in the intrigue and drama of crime investigation. Using a hands-on, step-by-step approach, students work in teams to investigate a crime and solve a mystery. Through role-playing…

  6. Change detection in urban and rural driving scenes: Effects of target type and safety relevance on change blindness.

    Science.gov (United States)

    Beanland, Vanessa; Filtness, Ashleigh J; Jeans, Rhiannon

    2017-03-01

    The ability to detect changes is crucial for safe driving. Previous research has demonstrated that drivers often experience change blindness, which refers to failed or delayed change detection. The current study explored how susceptibility to change blindness varies as a function of the driving environment, type of object changed, and safety relevance of the change. Twenty-six fully-licenced drivers completed a driving-related change detection task. Changes occurred to seven target objects (road signs, cars, motorcycles, traffic lights, pedestrians, animals, or roadside trees) across two environments (urban or rural). The contextual safety relevance of the change was systematically manipulated within each object category, ranging from high safety relevance (i.e., requiring a response by the driver) to low safety relevance (i.e., requiring no response). When viewing rural scenes, compared with urban scenes, participants were significantly faster and more accurate at detecting changes, and were less susceptible to "looked-but-failed-to-see" errors. Interestingly, safety relevance of the change differentially affected performance in urban and rural environments. In urban scenes, participants were more efficient at detecting changes with higher safety relevance, whereas in rural scenes the effect of safety relevance has marginal to no effect on change detection. Finally, even after accounting for safety relevance, change blindness varied significantly between target types. Overall the results suggest that drivers are less susceptible to change blindness for objects that are likely to change or move (e.g., traffic lights vs. road signs), and for moving objects that pose greater danger (e.g., wild animals vs. pedestrians). Copyright © 2017 Elsevier Ltd. All rights reserved.

  7. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    Science.gov (United States)

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.) including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/ .

  8. Subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia.

    Science.gov (United States)

    Haralanova, Evelina; Haralanov, Svetlozar; Beraldi, Anna; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2012-02-01

    From the clinical practice and some experimental studies, it is apparent that paranoid schizophrenia patients tend to assign emotional salience to neutral social stimuli. This aberrant cognitive bias has been conceptualized to result from increased emotional arousal, but direct empirical data are scarce. The aim of the present study was to quantify the subjective emotional arousal (SEA) evoked by emotionally non-salient (neutral) compared to emotionally salient (negative) social stimuli in schizophrenia patients and healthy controls. Thirty male inpatients with paranoid schizophrenia psychosis and 30 demographically matched healthy controls rated their level of SEA in response to neutral and negative social scenes from the International Affective Picture System and the Munich Affective Picture System. Schizophrenia patients compared to healthy controls had an increased overall SEA level. This relatively higher SEA was evoked only by the neutral but not by the negative social scenes. To our knowledge, the present study is the first designed to directly demonstrate subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia. This finding might explain previous clinical and experimental data and could be viewed as the missing link between the primary neurobiological and secondary psychological mechanisms of paranoid psychotic-symptom formation. Furthermore, despite being very short and easy to perform, the task we used appeared to be sensitive enough to reveal emotional dysregulation, in terms of emotional disinhibition/hyperactivation in paranoid schizophrenia patients. Thus, it could have further research and clinical applications, including as a neurobehavioral probe for imaging studies.

  9. Changing scenes: memory for naturalistic events following change blindness.

    Science.gov (United States)

    Mäntylä, Timo; Sundström, Anna

    2004-11-01

    Research on scene perception indicates that viewers often fail to detect large changes to scene regions when these changes occur during a visual disruption such as a saccade or a movie cut. In two experiments, we examined whether this relative inability to detect changes would produce systematic biases in event memory. In Experiment 1, participants decided whether two successively presented images were the same or different, followed by a memory task, in which they recalled the content of the viewed scene. In Experiment 2, participants viewed a short video, in which an actor carried out a series of daily activities, and central scenes' attributes were changed during a movie cut. A high degree of change blindness was observed in both experiments, and these effects were related to scene complexity (Experiment 1) and level of retrieval support (Experiment 2). Most important, participants reported the changed, rather than the initial, event attributes following a failure in change detection. These findings suggest that attentional limitations during encoding contribute to biases in episodic memory.

  10. Sensory substitution: the spatial updating of auditory scenes ‘mimics’ the spatial updating of visual scenes

    Directory of Open Access Journals (Sweden)

    Achille Pasqualotto

    2016-04-01

    Full Text Available Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or ‘soundscapes’. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localising sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgement of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices.

  11. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    Science.gov (United States)

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  12. Long-Term Memory for Context-Specific Category Information at Six Months.

    Science.gov (United States)

    Shields, Pamela J.; Rovee-Collier, Carolyn

    1992-01-01

    The ability of six-month-old infants to remember a functional category acquired in a specific context was assessed in three experiments. Findings revealed that at six months, information about the place where categories are constructed is prerequisite for retrieval of a category concept from long-term memory. (GLR)

  13. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    Science.gov (United States)

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  14. Graphics processing unit (GPU) real-time infrared scene generation

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  15. Semantic guidance of eye movements in real-world scenes

    OpenAIRE

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movemen...

  16. Picture models for 2-scene comics creating system

    Directory of Open Access Journals (Sweden)

    Miki UENO

    2015-03-01

    Full Text Available Recently, computer understanding of pictures and stories has become one of the most important research topics in computer science. However, there has been little research on human-like understanding by computers, because pictures have no fixed format and contain a more lyrical aspect than natural language. For picture understanding, comics are a suitable target because they consist of clear, simple story plots and separated scenes. In this paper, we propose two different types of picture models for a 2-scene comic creation system. We also show how the 2-scene comic creation system can be applied by means of the proposed picture models.

  17. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
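The first scenario (using activations of a pre-trained CNN as image features and training a simple linear classifier) can be sketched as follows; ResNet-18 and scikit-learn's logistic regression are stand-ins chosen for brevity, not the specific networks or classifier evaluated in the paper.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

# Pre-trained backbone with its classification head removed, so that the
# penultimate activations serve as the image feature vector.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])])

@torch.no_grad()
def extract_features(pil_images):
    batch = torch.stack([preprocess(img) for img in pil_images])
    return backbone(batch).numpy()

# Hypothetical usage with lists of PIL images and scene-class labels:
# clf = LogisticRegression(max_iter=1000).fit(extract_features(train_imgs), train_labels)
# print(clf.score(extract_features(test_imgs), test_labels))
```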

  18. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    Directory of Open Access Journals (Sweden)

    Quentin Lenoble

    2015-01-01

    Full Text Available Aims: We investigated the scene categorization performance of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented with pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not in saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of global visual properties of scenes.

  19. Defining spatial relations in a specific ontology for automated scene creation

    Directory of Open Access Journals (Sweden)

    D. Contraş

    2013-06-01

    Full Text Available This paper presents an approach to building an ontology for automatic scene generation. Every scene contains various elements (backgrounds, characters, objects) which are spatially interrelated. The article focuses on these spatial and temporal relationships of the elements constituting a scene.

  20. Attention Switching during Scene Perception: How Goals Influence the Time Course of Eye Movements across Advertisements

    Science.gov (United States)

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-01-01

    Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a…

  1. System and method for extracting dominant orientations from a scene

    Science.gov (United States)

    Straub, Julian; Rosman, Guy; Freifeld, Oren; Leonard, John J.; Fisher, John W., III

    2017-05-30

    In one embodiment, a method of identifying the dominant orientations of a scene comprises representing a scene as a plurality of directional vectors. The scene may comprise a three-dimensional representation of a scene, and the plurality of directional vectors may comprise a plurality of surface normals. The method further comprises determining, based on the plurality of directional vectors, a plurality of orientations describing the scene. The determined plurality of orientations explains the directionality of the plurality of directional vectors. In certain embodiments, the plurality of orientations may have independent axes of rotation. The plurality of orientations may be determined by representing the plurality of directional vectors as lying on a mathematical representation of a sphere, and inferring the parameters of a statistical model to adapt the plurality of orientations to explain the positioning of the plurality of directional vectors lying on the mathematical representation of the sphere.
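One simplified way to "explain the directionality" of a set of surface normals is to cluster them on the unit sphere. The sign-invariant spherical k-means below is only a sketch of that idea; the patented method instead infers the parameters of a statistical model in which the orientations may have independent axes of rotation.

```python
import numpy as np

def dominant_orientations(normals, k=3, n_iter=50, seed=0):
    """Cluster unit surface normals to estimate a few dominant directions."""
    rng = np.random.default_rng(seed)
    normals = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    centers = normals[rng.choice(len(normals), size=k, replace=False)].copy()
    for _ in range(n_iter):
        # Assign each normal to the closest orientation; the absolute value makes
        # opposite-facing normals support the same axis.
        labels = np.abs(normals @ centers.T).argmax(axis=1)
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                # Principal direction of the assigned normals (sign-invariant mean).
                _, _, vt = np.linalg.svd(members, full_matrices=False)
                centers[j] = vt[0]
    return centers
```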

  2. AR goggles make crime scene investigation a desk job

    OpenAIRE

    Aron, Jacob; Northfield, Dean

    2012-01-01

    Crime scene investigators could one day help solve murders without leaving the office. A pair of augmented reality glasses could allow local police to virtually tag objects in a crime scene, and build a clean record of the scene in 3D video before evidence is removed for processing. The system, being developed by Oytun Akman and colleagues at the Delft University of Technology in the Netherlands, consists of a head-mounted display receiving 3D video from a pair of attached cameras controll...

  3. Oculomotor capture during real-world scene viewing depends on cognitive load.

    Science.gov (United States)

    Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M

    2011-03-25

    It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object that suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  4. Gay and Lesbian Scene in Metelkova

    Directory of Open Access Journals (Sweden)

    Nataša Velikonja

    2013-09-01

    Full Text Available The article deals with the development of the gay and lesbian scene in ACC Metelkova, while specifying the preliminary aspects of establishing and building gay and lesbian activism associated with spatial issues. The struggle for space or occupying public space is vital for the gay and lesbian scene, as it provides not only the necessary socializing opportunities for gays and lesbians, but also does away with the historical hiding of homosexuality in the closet, in seclusion and silence. Because of their autonomy and long-term, continuous existence, homo-clubs at Metelkova contributed to the consolidation of the gay and lesbian scene in Slovenia and significantly improved the opportunities for cultural, social and political expression of gays and lesbians. Such a synthesis of the cultural, social and political, further intensified in Metelkova, has characterized the gay and lesbian community in Slovenia since the very outset of gay and lesbian activism in 1984. It is this long-term synthesis that keeps this community in Slovenia so vital and politically resilient.

  5. Integration of heterogeneous features for remote sensing scene classification

    Science.gov (United States)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features with different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
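
    As a hedged illustration of the fusion step, the sketch below computes one RBF kernel per feature type and combines them with fixed, uniform weights before training an SVM on the precomputed kernel. Uniform weighting is a deliberate simplification of multiple kernel learning, which would learn the per-kernel weights, and the random matrices merely stand in for the DS-SURF-LLC, mean-Std-LLC, and MS-CLBP descriptors, which are not reproduced here.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

# Stand-in features: three random matrices play the role of the three heterogeneous
# descriptor types, one row per image.
rng = np.random.default_rng(0)
n = 60
feats = [rng.standard_normal((n, d)) for d in (128, 64, 256)]
labels = rng.integers(0, 3, size=n)

# One base kernel per heterogeneous feature type.
kernels = [rbf_kernel(f) for f in feats]

# Uniform kernel combination: a simplification of MKL, which would instead learn
# the per-kernel weights jointly with the SVM.
weights = np.full(len(kernels), 1.0 / len(kernels))
K = sum(w * k for w, k in zip(weights, kernels))

clf = SVC(kernel="precomputed").fit(K, labels)
print("training accuracy:", clf.score(K, labels))
```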

  6. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.
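
    As a hedged sketch of the first step described above, one GMM per basic audio class, the snippet below trains class GMMs on synthetic feature vectors (standing in for MFCC-like frame features) and labels new frames by maximum log-likelihood. The feature dimensionality, component counts and class set are assumptions for illustration, not the configuration used in the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic frame features stand in for MFCC-like descriptors of audio frames.
rng = np.random.default_rng(0)
speech_frames = rng.normal(loc=0.0, scale=1.0, size=(500, 13))
music_frames = rng.normal(loc=3.0, scale=1.5, size=(500, 13))

# Step 1 of the approach in the paper: one GMM per basic audio class.
models = {
    "speech": GaussianMixture(n_components=4, random_state=0).fit(speech_frames),
    "music": GaussianMixture(n_components=4, random_state=0).fit(music_frames),
}

def classify_frames(frames):
    """Label each frame with the class whose GMM gives the highest log-likelihood."""
    names = list(models)
    scores = np.stack([models[n].score_samples(frames) for n in names], axis=1)
    return [names[i] for i in scores.argmax(axis=1)]

test = np.vstack([speech_frames[:3], music_frames[:3]])
print(classify_frames(test))  # expected: mostly ['speech', ..., 'music', ...]
```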

  7. Simulator scene display evaluation device

    Science.gov (United States)

    Haines, R. F. (Inventor)

    1986-01-01

    An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. A laser directs a beam at a double right prism, which is attached to a pivoting support on the base. The pivot point of the prism is located at the design eye point (DEP) of the simulator during the aligning and calibrating. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into a position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable about the pivot into and out of position behind the eyepiece.

  8. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) after the transition, by contrast, were not immediately directed toward the center, consistent with their having been programmed before the transition, supporting the parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  9. Synchronous contextual irregularities affect early scene processing: replication and extension.

    Science.gov (United States)

    Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y

    2014-04-01

    Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post-stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window even when simultaneous presentation of the scene and object prevent the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.

  10. Prior knowledge of category size impacts visual search.

    Science.gov (United States)

    Wu, Rachel; McGee, Brianna; Echiverri, Chelsea; Zinszer, Benjamin D

    2018-03-30

    Prior research has shown that category search can be similar to one-item search (as measured by the N2pc ERP marker of attentional selection) for highly familiar, smaller categories (e.g., letters and numbers) because the finite set of items in a category can be grouped into one unit to guide search. Other studies have shown that larger, more broadly defined categories (e.g., healthy food) also can elicit N2pc components during category search, but the amplitude of these components is typically attenuated. Two experiments investigated whether the perceived size of a familiar category impacts category and exemplar search. We presented participants with 16 familiar company logos: 8 from a smaller category (social media companies) and 8 from a larger category (entertainment/recreation manufacturing companies). The ERP results from Experiment 1 revealed that, in a two-item search array, search was more efficient for the smaller category of logos compared to the larger category. In a four-item search array (Experiment 2), where two of the four items were placeholders, search was largely similar between the category types, but there was more attentional capture by nontarget members from the same category as the target for smaller rather than larger categories. These results support a growing literature on how prior knowledge of categories affects attentional selection and capture during visual search. We discuss the implications of these findings in relation to assessing cognitive abilities across the lifespan, given that prior knowledge typically increases with age. © 2018 Society for Psychophysiological Research.

  11. On the contribution of binocular disparity to the long-term memory for natural scenes.

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    Full Text Available Binocular disparity is a fundamental dimension defining the input we receive from the visual world, along with luminance and chromaticity. In a memory task involving images of natural scenes we investigate whether binocular disparity enhances long-term visual memory. We found that forest images studied in the presence of disparity for relatively long times (7 s) were remembered better as compared to 2D presentation. This enhancement was not evident for other categories of pictures, such as images containing cars and houses, which are mostly identified by the presence of distinctive artifacts rather than by their spatial layout. Evidence from a further experiment indicates that observers do not retain a trace of stereo presentation in long-term memory.

  12. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory scene.
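
    The systems-theoretic stimulus reconstruction mentioned above amounts to a linear backward model mapping the neural recordings to the attended speech representation. The sketch below is a minimal, hedged stand-in: ridge regression reconstructing a simulated envelope from simulated multichannel data. The real analysis would operate on MEG source estimates, include time-lagged copies of each channel, and use cross-validated regularization.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Simulated data: a slowly varying "speech envelope" and noisy multichannel
# "neural" recordings that mix it linearly; stand-ins for the MEG data in the study.
rng = np.random.default_rng(0)
n_samples, n_channels = 2000, 32
envelope = np.convolve(rng.standard_normal(n_samples), np.ones(50) / 50, mode="same")
mixing = rng.standard_normal(n_channels)
neural = np.outer(envelope, mixing) + 0.5 * rng.standard_normal((n_samples, n_channels))

# Backward (stimulus-reconstruction) model: regress the envelope on the recordings.
train = slice(0, 1500)
test = slice(1500, None)
decoder = Ridge(alpha=1.0).fit(neural[train], envelope[train])
reconstructed = decoder.predict(neural[test])
r = np.corrcoef(reconstructed, envelope[test])[0, 1]
print(f"reconstruction accuracy (correlation): {r:.2f}")
```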

  13. Improved content aware scene retargeting for retinitis pigmentosa patients

    Directory of Open Access Journals (Sweden)

    Al-Atabany Walid I

    2010-09-01

    Full Text Available Abstract Background In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable to implementation on portable devices, and thus, has potential for augmented reality systems to provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm on shrinking the visual scene into the remaining field of view for those patients. Methods Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm which removes low importance information, maintaining the size of the significant features. Previous approaches in this field have included seam carving, which removes low importance seams from the scene, and shrinkability which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm, combining the best aspects of these two previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach. We then use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time without prior knowledge of future frames. Results We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content preservation perspective and a compression quality perspective. Our technique has also been evaluated and tested in a trial that included 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes up to 50% with minimal distortion. We also demonstrate efficacy in its use for those with simulated tunnel vision of 22 degrees of field of view or less. Conclusions Our approach allows us to perform content aware video retargeting.
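
    As a hedged illustration of the seam-based ingredient mentioned above, the sketch below computes a gradient-magnitude energy map and the dynamic-programming cumulative cost of vertical seams, the usual starting point for seam-carving-style importance maps. It is not the authors' combined shrinkability algorithm, and the synthetic image is only a placeholder scene.

```python
import numpy as np

def gradient_energy(gray):
    """Simple gradient-magnitude energy map, the usual starting point for
    seam-based importance maps."""
    gy, gx = np.gradient(gray.astype(float))
    return np.abs(gx) + np.abs(gy)

def vertical_seam_cost(energy):
    """Dynamic-programming cumulative cost of vertical seams (as in seam carving)."""
    cost = energy.copy()
    for row in range(1, cost.shape[0]):
        left = np.roll(cost[row - 1], 1)
        right = np.roll(cost[row - 1], -1)
        left[0] = np.inf
        right[-1] = np.inf
        cost[row] += np.minimum(np.minimum(left, cost[row - 1]), right)
    return cost

# Synthetic scene: a bright "object" on a dark background; low-energy columns
# (cheap seams) lie away from the object and would be shrunk first.
img = np.zeros((60, 80))
img[20:40, 30:50] = 1.0
cost = vertical_seam_cost(gradient_energy(img))
print("cheapest seam ends at column:", int(cost[-1].argmin()))
```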

  14. Recall versus familiarity when recall fails for words and scenes: The differential roles of the hippocampus, perirhinal cortex, and category-specific cortical regions☆

    OpenAIRE

    Ryals, Anthony J.; Cleary, Anne M.; Seger, Carol A.

    2012-01-01

    This fMRI study examined recall and familiarity for words and scenes using the novel recognition without cued recall (RWCR) paradigm. Subjects performed a cued recall task in which half of the test cues resembled studied items (and thus were familiar) and half did not. Subjects also judged the familiarity of the cue itself. RWCR is the finding that, among cues for which recall fails, subjects generally rate cues that resemble studied items as more familiar than cues that do not. For words, le...

  15. Separate and simultaneous adjustment of light qualities in a real scene

    NARCIS (Netherlands)

    Xia, L.; Pont, S.C.; Heynderickx, I.E.J.R.

    2017-01-01

    Humans are able to estimate light field properties in a scene in that they have expectations of the objects' appearance inside it. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted a real scene with regard to its lighting. But how well are observers

  16. Recognizing the Stranger: Recognition Scenes in the Gospel of John

    DEFF Research Database (Denmark)

    Larsen, Kasper Bro

    Recognizing the Stranger is the first monographic study of recognition scenes and motifs in the Gospel of John. The recognition type-scene (anagnōrisis) was a common feature in ancient drama and narrative, highly valued by Aristotle as a touching moment of truth, e.g., in Oedipus’ tragic self...... structures of the type-scene in order to show how Jesus’ true identity can be recognized behind the half-mask of his human appearance....

  17. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    Science.gov (United States)

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task using familiar scenes than novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes than novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors 0270-6474/14/3415534-14$15.00/0.
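
    As a hedged illustration of the coherence measure reported above, the sketch below computes spectral coherence between two synthetic LFP-like signals with scipy.signal.coherence and compares the gamma and theta bands. The signals, sampling rate, and band limits are assumptions for illustration, not the authors' recording or analysis parameters.

```python
import numpy as np
from scipy.signal import coherence

# Synthetic LFPs: two noisy signals sharing a common gamma-band (60 Hz) component,
# standing in for simultaneously recorded CA1 and DMS local field potentials.
fs = 1000.0
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
gamma = np.sin(2 * np.pi * 60 * t)
ca1 = gamma + rng.standard_normal(t.size)
dms = 0.8 * gamma + rng.standard_normal(t.size)

f, cxy = coherence(ca1, dms, fs=fs, nperseg=1024)
gamma_band = (f >= 30) & (f <= 90)
theta_band = (f >= 4) & (f <= 12)
print(f"mean gamma coherence: {cxy[gamma_band].mean():.2f}")
print(f"mean theta coherence: {cxy[theta_band].mean():.2f}")
```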

  18. SAR Raw Data Generation for Complex Airport Scenes

    Directory of Open Access Journals (Sweden)

    Jia Li

    2014-10-01

    Full Text Available The method of generating the SAR raw data of complex airport scenes is studied in this paper. A formulation of the SAR raw signal model of airport scenes is given. By generating the echoes from the background, aircraft, and buildings, respectively, the SAR raw data of the unified SAR imaging geometry is obtained from their vector addition. The multipath scattering and the shadowing between the background and different ground covers of standing airplanes and buildings are analyzed. Based on the scattering characteristics, coupling scattering models and SAR raw data models of different targets are given, respectively. A procedure is given to generate the SAR raw data of airport scenes. The SAR images from the simulated raw data demonstrate the validity of the proposed method.

  19. Effects of aging on neural connectivity underlying selective memory for emotional scenes.

    Science.gov (United States)

    Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A

    2013-02-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.

  20. Scene text recognition in mobile applications by character descriptor and structure configuration.

    Science.gov (United States)

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from the scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  1. Radio Wave Propagation Scene Partitioning for High-Speed Rails

    Directory of Open Access Journals (Sweden)

    Bo Ai

    2012-01-01

    Full Text Available Radio wave propagation scene partitioning is necessary for wireless channel modeling. As far as we know, there are no standards of scene partitioning for high-speed rail (HSR scenarios, and therefore we propose the radio wave propagation scene partitioning scheme for HSR scenarios in this paper. Based on our measurements along the Wuhan-Guangzhou HSR, Zhengzhou-Xian passenger-dedicated line, Shijiazhuang-Taiyuan passenger-dedicated line, and Beijing-Tianjin intercity line in China, whose operation speeds are above 300 km/h, and based on the investigations on Beijing South Railway Station, Zhengzhou Railway Station, Wuhan Railway Station, Changsha Railway Station, Xian North Railway Station, Shijiazhuang North Railway Station, Taiyuan Railway Station, and Tianjin Railway Station, we obtain an overview of HSR propagation channels and record many valuable measurement data for HSR scenarios. On the basis of these measurements and investigations, we partitioned the HSR scene into twelve scenarios. Further work on theoretical analysis based on radio wave propagation mechanisms, such as reflection and diffraction, may lead us to develop the standard of radio wave propagation scene partitioning for HSR. Our work can also be used as a basis for the wireless channel modeling and the selection of some key techniques for HSR systems.

  2. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  3. Eye tracking to evaluate evidence recognition in crime scene investigations.

    Science.gov (United States)

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total score.
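
    As a hedged illustration of the two comparison measures named above, the sketch below computes a 1-D Earth Mover's (Wasserstein) distance between two hypothetical dwell-time distributions over the same AOIs, and a basic Needleman-Wunsch global alignment score between two AOI visit sequences. The scoring parameters and data are illustrative assumptions, not the study's actual scoring scheme or recordings.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Dwell-time distributions (as a fraction of task time) over the same set of AOIs
# for two hypothetical investigators.
dwell_a = np.array([0.40, 0.30, 0.20, 0.10])
dwell_b = np.array([0.25, 0.25, 0.25, 0.25])
aoi_positions = np.arange(len(dwell_a))
emd = wasserstein_distance(aoi_positions, aoi_positions, dwell_a, dwell_b)

def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two AOI visit sequences (simplified scoring)."""
    n, m = len(seq_a), len(seq_b)
    score = np.zeros((n + 1, m + 1))
    score[:, 0] = gap * np.arange(n + 1)
    score[0, :] = gap * np.arange(m + 1)
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1, j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i, j] = max(diag, score[i - 1, j] + gap, score[i, j - 1] + gap)
    return score[n, m]

nw = needleman_wunsch("ABCD", "ABDC")
print(f"EMD between dwell distributions: {emd:.3f}")
print(f"Needleman-Wunsch alignment score: {nw:.0f}")
```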

  4. Emotional Scene Content Drives the Saccade Generation System Reflexively

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  5. SAMPEG: a scene-adaptive parallel MPEG-2 software encoder

    NARCIS (Netherlands)

    Farin, D.S.; Mache, N.; With, de P.H.N.; Girod, B.; Bouman, C.A.; Steinbach, E.G.

    2001-01-01

    This paper presents a fully software-based MPEG-2 encoder architecture, which uses scene-change detection to optimize the Group-of-Picture (GOP) structure for the actual video sequence. This feature enables easy, lossless edit cuts at scene-change positions and it also improves overall picture quality.
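
    The record does not specify the encoder's scene-change detector, so the sketch below shows one common, minimal stand-in: a thresholded histogram difference between consecutive frames, which could mark positions at which a new GOP is started. The threshold, frame sizes and synthetic frames are assumptions for illustration.

```python
import numpy as np

def histogram_difference(frame_a, frame_b, bins=32):
    """L1 distance between normalized intensity histograms of two frames."""
    ha, _ = np.histogram(frame_a, bins=bins, range=(0, 255), density=True)
    hb, _ = np.histogram(frame_b, bins=bins, range=(0, 255), density=True)
    return np.abs(ha - hb).sum()

def detect_scene_changes(frames, threshold=0.05):
    """Return frame indices where a new GOP could be started."""
    return [i for i in range(1, len(frames))
            if histogram_difference(frames[i - 1], frames[i]) > threshold]

# Synthetic sequence: a dark scene for 5 frames, then an abrupt cut to a bright scene.
rng = np.random.default_rng(0)
dark = [rng.integers(0, 60, (48, 64)) for _ in range(5)]
bright = [rng.integers(180, 255, (48, 64)) for _ in range(5)]
print(detect_scene_changes(dark + bright))  # expected: [5]
```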

  6. Review of On-Scene Management of Mass-Casualty Attacks

    Directory of Open Access Journals (Sweden)

    Annelie Holgersson

    2016-02-01

    Full Text Available Background: The scene of a mass-casualty attack (MCA) entails a crime scene, a hazardous space, and a great number of people needing medical assistance. Public transportation has been the target of such attacks and involves a high probability of generating mass casualties. The review aimed to investigate challenges for on-scene responses to MCAs and suggestions made to counter these challenges, with special attention given to attacks on public transportation and associated terminals. Methods: Articles were found through PubMed and Scopus, “relevant articles” as defined by the databases, and a manual search of references. Inclusion criteria were that the article referred to attack(s) and/or a public transportation-related incident and issues concerning formal on-scene response. An appraisal of the articles’ scientific quality was conducted based on an evidence hierarchy model developed for the study. Results: One hundred and five articles were reviewed. Challenges for command and coordination on scene included establishing leadership, inter-agency collaboration, multiple incident sites, and logistics. Safety issues entailed knowledge and use of personal protective equipment, risk awareness and expectations, cordons, dynamic risk assessment, defensive versus offensive approaches, and joining forces. Communication concerns were equipment shortfalls, dialoguing, and providing information. Assessment problems were scene layout and interpreting environmental indicators as well as understanding setting-driven needs for specialist skills and resources. Triage and treatment difficulties included differing triage systems, directing casualties, uncommon injuries, field hospitals, level of care, providing psychological and pediatric care. Transportation hardships included scene access, distance to hospitals, and distribution of casualties. Conclusion: Commonly encountered challenges during unintentional incidents were added to during MCAs

  7. Clandestine laboratory scene investigation and processing using portable GC/MS

    Science.gov (United States)

    Matejczyk, Raymond J.

    1997-02-01

    This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for the public and investigator safety and the environment are also important factors for rapid on-scene data generation. Here is described the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With the capability of GC/MS to separate and produce a 'chemical fingerprint' of compounds, it is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist the forensic investigators in collecting only pertinent evidence thereby reducing the amount of evidence to be transported, reducing chain of custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.

  8. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    Science.gov (United States)

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
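
    RMSSD, the time-domain index of parasympathetic activity used in the study, is straightforward to compute from successive R-R intervals; the minimal sketch below shows the calculation on a hypothetical R-R series (the values are illustrative, not study data).

```python
import numpy as np

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of R-R intervals (in ms)."""
    diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
    return np.sqrt(np.mean(diffs ** 2))

# Hypothetical R-R interval series (ms) recorded during recovery from a stressor.
rr = [812, 845, 790, 860, 825, 880, 815]
print(f"RMSSD = {rmssd(rr):.1f} ms")
```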

  9. Brain processing of emotional scenes in aging: effect of arousal and affective context.

    Directory of Open Access Journals (Sweden)

    Nicolas Gilles Mathieu

    Full Text Available Research on emotion has shown an increase, with age, in the prevalence of positive information relative to negative information. This effect is called the positivity effect. From the cerebral analysis of the Late Positive Potential (LPP), which is sensitive to attention, our study investigated to what extent the arousal level of negative scenes is processed differently between young and older adults and to what extent the arousal level of negative scenes, depending on its value, may contextually modulate the cerebral processing of positive (and neutral) scenes and favor the observation of a positivity effect with age. With this aim, two negative scene groups characterized by two distinct arousal levels (high and low) were displayed in two separate experimental blocks in which positive and neutral pictures were also included. The two blocks differed only by their negative pictures across participants, so as to create two negative global contexts for the processing of the positive and neutral pictures. The results show that the relative processing of different arousal levels of negative stimuli, reflected by the LPP, appears similar between the two age groups. However, lower activity for negative stimuli is observed in the older group for both tested arousal levels. The processing of positive information seems to be preserved with age and is also not contextually impacted by negative stimuli in either younger or older adults. For neutral stimuli, significantly reduced activity is observed for older adults in the contextual block of low-arousal negative stimuli. Globally, our study reveals that the positivity effect is mainly due to a modulation, with age, in the processing of negative stimuli, regardless of their arousal level. It also suggests that the processing of neutral stimuli may be modulated with age, depending on the negative context in which they are presented. These age-related effects could help explain the differences in emotional preference with age.

  10. Real-time maritime scene simulation for ladar sensors

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  11. The influence of color on emotional perception of natural scenes.

    Science.gov (United States)

    Codispoti, Maurizio; De Cesarei, Andrea; Ferrari, Vera

    2012-01-01

    Is color a critical factor when processing the emotional content of natural scenes? Under challenging perceptual conditions, such as when pictures are briefly presented, color might facilitate scene segmentation and/or function as a semantic cue via association with scene-relevant concepts (e.g., red and blood/injury). To clarify the influence of color on affective picture perception, we compared the late positive potentials (LPP) to color versus grayscale pictures, presented for very brief (24 ms) and longer (6 s) exposure durations. Results indicated that removing color information had no effect on the affective modulation of the LPP, regardless of exposure duration. These findings imply that the recognition of the emotional content of scenes, even when presented very briefly, does not critically rely on color information. Copyright © 2011 Society for Psychophysiological Research.

  12. Cybersickness in the presence of scene rotational movements along different axes.

    Science.gov (United States)

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  13. Estimating perception of scene layout properties from global image features.

    Science.gov (United States)

    Ross, Michael G; Oliva, Aude

    2010-01-08

    The relationship between image features and scene structure is central to the study of human visual perception and computer vision, but many of the specifics of real-world layout perception remain unknown. We do not know which image features are relevant to perceiving layout properties, or whether those features provide the same information for every type of image. Furthermore, we do not know the spatial resolutions required for perceiving different properties. This paper describes an experiment and a computational model that provide new insights into these issues. Humans perceive global spatial layout properties, such as dominant depth, openness, and perspective, from a single image. This work describes an algorithm that reliably predicts human layout judgments. This model's predictions are general, not specific to the observers it was trained on. Analysis reveals that the optimal spatial resolutions for determining layout vary with the content of the space and the property being estimated. Openness is best estimated at high resolution, depth is best estimated at medium resolution, and perspective is best estimated at low resolution. Given the reliability and simplicity of estimating the global layout of real-world environments, this model could help resolve perceptual ambiguities encountered by more detailed scene reconstruction schemas.

  14. Framework of passive millimeter-wave scene simulation based on material classification

    Science.gov (United States)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful implements in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, it needs to be tested in a SoftWare-In-the-Loop (SWIL) setup. PMMW scene simulation is a key component for implementing this simulator. However, no commercial off-the-shelf product is available to construct the PMMW scene simulation, and there have been only a few studies on this technology. We have studied the PMMW scene simulation method to develop the PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensor works. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the following PMMW phenomenology: atmospheric propagation and emission including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed with surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are simulated on the output scene. We have finished the design of the simulator framework and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After the detailed implementation, we plan to increase the reliability of the simulation through data collection.

  15. Attention switching during scene perception: how goals influence the time course of eye movements across advertisements.

    Science.gov (United States)

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-06-01

    Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a free-viewing condition, and a scene-learning goal (ad memorization), a scene-evaluation goal (ad appreciation), a target-learning goal (product learning), or a target-evaluation goal (product evaluation). The model reflects how attention switches between two states--local and global--expressed in saccades of shorter and longer amplitude on a spatial grid with 48 cells overlaid on the ads. During the 5- to 6-s duration of self-controlled exposure to ads in the magazine context, attention predominantly started in the local state and ended in the global state, and rapidly switched about 5 times between states. The duration of the local attention state was much longer than the duration of the global state. Goals affected the frequency of switching between attention states and the duration of the local, but not of the global, state. (c) 2008 APA, all rights reserved
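
    As a hedged illustration of the local/global switching idea, the sketch below fits a discrete-time, two-state Gaussian HMM to synthetic saccade amplitudes with hmmlearn. This is deliberately much simpler than the continuous-time hidden Markov model over a 48-cell grid used in the paper, and the amplitude distributions are assumptions for illustration.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

# Synthetic saccade amplitudes (degrees): bursts of short saccades ("local" state)
# interleaved with bursts of long saccades ("global" state).
rng = np.random.default_rng(0)
local = rng.normal(2.0, 0.5, size=(40, 1))
global_ = rng.normal(8.0, 1.5, size=(15, 1))
amplitudes = np.vstack([local[:20], global_[:7], local[20:], global_[7:]])

# Discrete-time two-state Gaussian HMM; the paper's model is continuous-time and
# defined over a spatial grid, so this is only an illustration of the
# local/global attention-state idea.
hmm = GaussianHMM(n_components=2, covariance_type="full", random_state=0)
hmm.fit(amplitudes)
states = hmm.predict(amplitudes)
print("state means (deg):", hmm.means_.ravel().round(1))
print("decoded state sequence:", states)
```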

  16. 1/f² Characteristics and isotropy in the Fourier power spectra of visual art, cartoons, comics, mangas, and different categories of photographs.

    Science.gov (United States)

    Koch, Michael; Denzler, Joachim; Redies, Christoph

    2010-08-19

    Art images and natural scenes have in common that their radially averaged (1D) Fourier spectral power falls according to a power law with increasing spatial frequency (1/f² characteristics), which implies that the power spectra have scale-invariant properties. In the present study, we show that other categories of man-made images, cartoons and graphic novels (comics and mangas), have similar properties. Furthermore, we extend our investigations to 2D power spectra. In order to determine whether the Fourier power spectra of man-made images differed from those of other categories of images (photographs of natural scenes, objects, faces and plants and scientific illustrations), we analyzed their 2D power spectra by principal component analysis. Results indicated that the first fifteen principal components allowed a partial separation of the different image categories. The differences between the image categories were studied in more detail by analyzing whether the mean power and the slope of the power gradients from low to high spatial frequencies varied across orientations in the power spectra. Mean power was generally higher in cardinal orientations both in real-world photographs and artworks, with no systematic difference between the two types of images. However, the slope of the power gradients showed a lower degree of mean variability across spectral orientations (i.e., more isotropy) in art images, cartoons and graphic novels than in photographs of comparable subject matters. Taken together, these results indicate that art images, cartoons and graphic novels possess relatively uniform 1/f² characteristics across all orientations. In conclusion, the man-made stimuli studied, which were presumably produced to evoke pleasant and/or enjoyable visual perception in human observers, form a subset of all images and share statistical properties in their Fourier power spectra. Whether these properties are necessary or sufficient to induce aesthetic perception remains to be determined.
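
    As a hedged illustration of the 1/f² property, the sketch below computes the radially averaged power spectrum of an image and fits its log-log slope; for natural-scene-like statistics the slope is close to -2. The synthetic 1/f image stands in for an artwork or photograph, and this is not the principal component or gradient analysis performed in the study.

```python
import numpy as np

def radial_power_spectrum(img):
    """Radially averaged Fourier power spectrum of a 2-D grayscale image."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2, x - w / 2).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    return radial[1: min(h, w) // 2]  # drop DC, keep frequencies below Nyquist

def loglog_slope(spectrum):
    freqs = np.arange(1, len(spectrum) + 1)
    slope, _ = np.polyfit(np.log(freqs), np.log(spectrum), 1)
    return slope

# Synthetic 1/f image: shape the spectrum of white noise so its power falls as 1/f^2.
rng = np.random.default_rng(0)
n = 256
fy = np.fft.fftfreq(n)[:, None]
fx = np.fft.fftfreq(n)[None, :]
radius = np.hypot(fy, fx)
radius[0, 0] = 1.0
spectrum_shape = 1.0 / radius  # amplitude ~ 1/f  ->  power ~ 1/f^2
img = np.real(np.fft.ifft2(np.fft.fft2(rng.standard_normal((n, n))) * spectrum_shape))

print(f"log-log slope of radial power: {loglog_slope(radial_power_spectrum(img)):.2f}")
```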

  17. Hydrological AnthropoScenes

    Science.gov (United States)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e., settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself being a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological change debate.

  18. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of the two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to measure comprehensively the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the numbers of holed-regions and holes differ between the two scenes.
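
    As a hedged, partial illustration, the sketch below uses Shapely to represent two holed-regions (each a polygon with an interior ring) and computes one simple component similarity, their Jaccard-style area overlap. The hierarchical model in the paper combines many more terms (hole matching, direction feature matrices, position graphs), which are not reproduced here; the coordinates are invented.

```python
from shapely.geometry import Polygon

# Two holed-regions ("a lake with an island" each), differing slightly in extent
# and in the position of the hole.
region_a = Polygon(
    [(0, 0), (10, 0), (10, 10), (0, 10)],
    holes=[[(3, 3), (5, 3), (5, 5), (3, 5)]],
)
region_b = Polygon(
    [(1, 0), (11, 0), (11, 10), (1, 10)],
    holes=[[(4, 4), (6, 4), (6, 6), (4, 6)]],
)

def area_overlap_similarity(a, b):
    """Jaccard-style overlap of two holed-regions: one simple component measure
    that a hierarchical scene-similarity model could combine with shape,
    direction and distribution terms."""
    return a.intersection(b).area / a.union(b).area

print(f"area overlap similarity: {area_overlap_similarity(region_a, region_b):.2f}")
```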

  19. Learning object-to-class kernels for scene classification.

    Science.gov (United States)

    Zhang, Lei; Zhen, Xiantong; Shao, Ling

    2014-08-01

    High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose the object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented, and with the O2C distances, we can represent the images using the object bank by lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Due to the explicit computation of O2C distances based on the object bank, the obtained representations can possess more semantic meanings. To combine the discriminant ability of the O2C distances to all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches can significantly improve the original object bank approach and achieve the state-of-the-art performance.
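
    The exact O2C distance variants are not reproduced in this record; as a hedged sketch of the general idea, the snippet below computes one plausible variant, the mean Euclidean distance from an image's object-bank response vector to each class's training vectors, yielding a low-dimensional "distance space" representation that could subsequently be kernelized. The random response matrices and class names are placeholders.

```python
import numpy as np
from sklearn.metrics.pairwise import euclidean_distances

# Stand-in object-bank responses: each image is a vector of (hypothetical) object
# detector responses; rows are images, columns are detectors.
rng = np.random.default_rng(0)
n_detectors = 200
class_images = {
    "bedroom": rng.normal(0.0, 1.0, size=(30, n_detectors)),
    "kitchen": rng.normal(0.5, 1.0, size=(30, n_detectors)),
    "office": rng.normal(-0.5, 1.0, size=(30, n_detectors)),
}

def o2c_distances(image_vec, class_images):
    """One simple object-to-class distance variant: mean Euclidean distance from the
    image's object-bank vector to each class's training vectors. The result is a
    low-dimensional 'distance space' representation (one value per class)."""
    return np.array([
        euclidean_distances(image_vec[None, :], vectors).mean()
        for vectors in class_images.values()
    ])

query = class_images["kitchen"][0]
print(dict(zip(class_images, o2c_distances(query, class_images).round(1))))
```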

  20. The role of memory for visual search in scenes.

    Science.gov (United States)

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  1. The time course of natural scene perception with reduced attention

    NARCIS (Netherlands)

    Groen, I.I.A.; Ghebreab, S.; Lamme, V.A.F.; Scholte, H.S.

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the

  2. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Owing to its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, were simulated. The resulting dynamic scenes are realistic and run in real time, at frame rates of up to 100 Hz. By saving all the scene grayscale data from the same viewpoint, an image sequence is obtained. The analysis shows that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis results.
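
    The band-integration step of such a sensor model can be sketched as follows: a high-resolution spectral radiance cube is integrated over an assumed spectral response for each simulated band. The Gaussian response, the function names, and the toy MWIR cube are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def band_image(radiance_cube, wavelengths, center, bandwidth):
    # Integrate a high-resolution spectral radiance cube (H x W x L) over one sensor
    # band, assuming (as an illustration) a Gaussian spectral response of given FWHM.
    sigma = bandwidth / 2.355                      # FWHM -> sigma
    response = np.exp(-0.5 * ((wavelengths - center) / sigma) ** 2)
    response /= response.sum()
    return np.tensordot(radiance_cube, response, axes=([2], [0]))

# Toy scene: 64 x 64 pixels sampled every 0.01 um over an illustrative 3-5 um MWIR range.
wl = np.arange(3.0, 5.0, 0.01)
cube = np.random.rand(64, 64, wl.size)
mwir_band = band_image(cube, wl, center=4.0, bandwidth=0.05)   # one 0.05 um band image
```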

  3. Feature-Based versus Category-Based Induction with Uncertain Categories

    Science.gov (United States)

    Griffiths, Oren; Hayes, Brett K.; Newell, Ben R.

    2012-01-01

    Previous research has suggested that when feature inferences have to be made about an instance whose category membership is uncertain, feature-based inductive reasoning is used to the exclusion of category-based induction. These results contrast with the observation that people can and do use category-based induction when category membership is…

  4. Review of infrared scene projector technology-1993

    Science.gov (United States)

    Driggers, Ronald G.; Barnard, Kenneth J.; Burroughs, E. E.; Deep, Raymond G.; Williams, Owen M.

    1994-07-01

    The importance of testing IR imagers and missile seekers with realistic IR scenes warrants a review of the current technologies used in dynamic infrared scene projection. These technologies include resistive arrays, deformable mirror arrays, mirror membrane devices, liquid crystal light valves, laser writers, laser diode arrays, and CRTs. Other methods include frustrated total internal reflection, thermoelectric devices, galvanic cells, Bly cells, and vanadium dioxide. A description of each technology is presented along with a discussion of their relative benefits and disadvantages. The current state of each methodology is also summarized. Finally, the methods are compared and contrasted in terms of their performance parameters.

  5. Use of AFIS for linking scenes of crime.

    Science.gov (United States)

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has pushed the entire forensic identification field toward new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, through forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically, as part of laboratory protocol, rather than at an examiner's discretion. This approach may link different crime scenes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  6. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...

  7. Gordon Craig's Scene Project: a history open to revision

    Directory of Open Access Journals (Sweden)

    Luiz Fernando

    2014-09-01

    Full Text Available The article proposes a review of Gordon Craig's Scene project, an invention patented in 1910 and developed until 1922. Craig himself kept an ambiguous position on whether it was an unfulfilled project. His son and biographer Edward Craig maintained that Craig's original aims were never achieved because of technical limitations, and most of the scholars who examined the matter followed this position. Starting from the actual screen models preserved in the Bibliothèque Nationale de France, Craig's original notebooks, and a short film from 1963, I argue that the patented project and the essay published in 1923 indeed represent the materialisation of the dreamed device of the thousand scenes in one scene.

  8. A view not to be missed: Salient scene content interferes with cognitive restoration

    NARCIS (Netherlands)

    van der Jagt, A.P.N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that

  9. Places in the Brain: Bridging Layout and Object Geometry in Scene-Selective Cortex.

    Science.gov (United States)

    Dillon, Moira R; Persichetti, Andrew S; Spelke, Elizabeth S; Dilks, Daniel D

    2017-06-13

    Diverse animal species primarily rely on sense (left-right) and egocentric distance (proximal-distal) when navigating the environment. Recent neuroimaging studies with human adults show that this information is represented in 2 scene-selective cortical regions-the occipital place area (OPA) and retrosplenial complex (RSC)-but not in a third scene-selective region-the parahippocampal place area (PPA). What geometric properties, then, does the PPA represent, and what is its role in scene processing? Here we hypothesize that the PPA represents relative length and angle, the geometric properties classically associated with object recognition, but only in the context of large extended surfaces that compose the layout of a scene. Using functional magnetic resonance imaging adaptation, we found that the PPA is indeed sensitive to relative length and angle changes in pictures of scenes, but not pictures of objects that reliably elicited responses to the same geometric changes in object-selective cortical regions. Moreover, we found that the OPA is also sensitive to such changes, while the RSC is tolerant to such changes. Thus, the geometric information typically associated with object recognition is also used during some aspects of scene processing. These findings provide evidence that scene-selective cortex differentially represents the geometric properties guiding navigation versus scene categorization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  10. Estimating cotton canopy ground cover from remotely sensed scene reflectance

    International Nuclear Information System (INIS)

    Maas, S.J.

    1998-01-01

    Many agricultural applications require spatially distributed information on growth-related crop characteristics that could be supplied through aircraft or satellite remote sensing. A study was conducted to develop and test a methodology for estimating plant canopy ground cover for cotton (Gossypium hirsutum L.) from scene reflectance. Previous studies indicated that a relatively simple relationship between ground cover and scene reflectance could be developed based on linear mixture modeling. Theoretical analysis indicated that the effects of shadows in the scene could be compensated for by averaging the results obtained using scene reflectance in the red and near-infrared wavelengths. The methodology was tested using field data collected over several years from cotton test plots in Texas and California. Results of the study appear to verify the utility of this approach. Since the methodology relies on information that can be obtained solely through remote sensing, it would be particularly useful in applications where other field information, such as plant size, row spacing, and row orientation, is unavailable
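
    A minimal sketch of the linear-mixture idea described above, assuming illustrative bare-soil and full-canopy endmember reflectances (not values from the study): ground cover is estimated separately from red and near-infrared scene reflectance, and the two estimates are averaged to compensate approximately for shadow effects.

```python
def ground_cover(scene_red, scene_nir,
                 soil_red=0.20, canopy_red=0.05,
                 soil_nir=0.25, canopy_nir=0.45):
    # Linear-mixture estimate of canopy ground cover (0-1) from scene reflectance.
    # Endmember reflectances here are illustrative placeholders, not measured values.
    gc_red = (scene_red - soil_red) / (canopy_red - soil_red)
    gc_nir = (scene_nir - soil_nir) / (canopy_nir - soil_nir)
    # Averaging the red and NIR estimates approximately compensates for shadows.
    gc = 0.5 * (gc_red + gc_nir)
    return min(max(gc, 0.0), 1.0)

print(ground_cover(scene_red=0.12, scene_nir=0.36))   # toy pixel -> ~0.54 cover
```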

  11. Abnormal brain activation and connectivity to standardized disorder-related visual scenes in social anxiety disorder.

    Science.gov (United States)

    Heitmann, Carina Yvonne; Feldker, Katharina; Neumeister, Paula; Zepp, Britta Maria; Peterburs, Jutta; Zwitserlood, Pienie; Straube, Thomas

    2016-04-01

    Our understanding of altered emotional processing in social anxiety disorder (SAD) is hampered by a heterogeneity of findings, which is probably due to the vastly different methods and materials used so far. This is why the present functional magnetic resonance imaging (fMRI) study investigated immediate disorder-related threat processing in 30 SAD patients and 30 healthy controls (HC) with a novel, standardized set of highly ecologically valid, disorder-related complex visual scenes. SAD patients rated disorder-related as compared with neutral scenes as more unpleasant, arousing and anxiety-inducing than HC. On the neural level, disorder-related as compared with neutral scenes evoked differential responses in SAD patients in a widespread emotion processing network including (para-)limbic structures (e.g. amygdala, insula, thalamus, globus pallidus) and cortical regions (e.g. dorsomedial prefrontal cortex (dmPFC), posterior cingulate cortex (PCC), and precuneus). Functional connectivity analysis yielded an altered interplay between PCC/precuneus and paralimbic (insula) as well as cortical regions (dmPFC, precuneus) in SAD patients, which emphasizes a central role for PCC/precuneus in disorder-related scene processing. Hyperconnectivity of globus pallidus with amygdala, anterior cingulate cortex (ACC) and medial prefrontal cortex (mPFC) additionally underlines the relevance of this region in socially anxious threat processing. Our findings stress the importance of specific disorder-related stimuli for the investigation of altered emotion processing in SAD. Disorder-related threat processing in SAD reveals anomalies at multiple stages of emotion processing which may be linked to increased anxiety and to dysfunctionally elevated levels of self-referential processing reported in previous studies. © 2016 Wiley Periodicals, Inc.

  12. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information in the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results, based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets), show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component in scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed by the improvement in overall scene classification accuracy. Moreover, the overall scene classification performance improved when the models obtained using the proposed method and conventional decolorization methods were combined.
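
    As a loose illustration of content-adaptive decolorization (not the authors' C2G-SSIM/SVD-guided algorithm), the sketch below derives channel weights from the dominant color direction of the image via SVD, instead of using fixed luminance weights; all names and the toy image are assumptions.

```python
import numpy as np

def adaptive_grayscale(rgb):
    # Content-adaptive grayscale sketch: weight the R, G, B channels by the dominant
    # colour direction found with SVD, instead of fixed luminance weights. This only
    # illustrates the general idea; it is not the paper's algorithm.
    pixels = rgb.reshape(-1, 3).astype(float)
    centered = pixels - pixels.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    weights = np.abs(vt[0])
    weights /= weights.sum()
    gray = pixels @ weights
    gray = (gray - gray.min()) / (np.ptp(gray) + 1e-12)   # rescale to [0, 1]
    return gray.reshape(rgb.shape[:2])

img = np.random.rand(32, 32, 3)      # placeholder image
gray = adaptive_grayscale(img)
```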

  13. Medial Temporal Lobe Contributions to Episodic Future Thinking: Scene Construction or Future Projection?

    Science.gov (United States)

    Palombo, D J; Hayes, S M; Peterson, K M; Keane, M M; Verfaellie, M

    2018-02-01

    Previous research has shown that the medial temporal lobes (MTL) are more strongly engaged when individuals think about the future than about the present, leading to the suggestion that future projection drives MTL engagement. However, future thinking tasks often involve scene processing, leaving open the alternative possibility that scene-construction demands, rather than future projection, are responsible for the MTL differences observed in prior work. This study explores this alternative account. Using functional magnetic resonance imaging, we directly contrasted MTL activity in 1) high scene-construction and low scene-construction imagination conditions matched in future thinking demands and 2) future-oriented and present-oriented imagination conditions matched in scene-construction demands. Consistent with the alternative account, the MTL was more active for the high versus low scene-construction condition. By contrast, MTL differences were not observed when comparing the future versus present conditions. Moreover, the magnitude of MTL activation was associated with the extent to which participants imagined a scene but was not associated with the extent to which participants thought about the future. These findings help disambiguate which component processes of imagination specifically involve the MTL. Published by Oxford University Press 2016.

  14. Eye movements and attention in reading, scene perception, and visual search.

    Science.gov (United States)

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  15. Acquaintance Rape: Applying Crime Scene Analysis to the Prediction of Sexual Recidivism.

    Science.gov (United States)

    Lehmann, Robert J B; Goodwill, Alasdair M; Hanson, R Karl; Dahle, Klaus-Peter

    2016-10-01

    The aim of the current study was to enhance the assessment and predictive accuracy of risk assessments for sexual offenders by utilizing detailed crime scene analysis (CSA). CSA was conducted on a sample of 247 male acquaintance rapists from Berlin (Germany) using a nonmetric, multidimensional scaling (MDS) Behavioral Thematic Analysis (BTA) approach. The age of the offenders at the time of the index offense ranged from 14 to 64 years (M = 32.3; SD = 11.4). The BTA procedure revealed three behavioral themes of hostility, criminality, and pseudo-intimacy, consistent with previous CSA research on stranger rape. The construct validity of the three themes was demonstrated through correlational analyses with known sexual offending measures and criminal histories. The themes of hostility and pseudo-intimacy were significant predictors of sexual recidivism. In addition, the pseudo-intimacy theme led to a significant increase in the incremental validity of the Static-99 actuarial risk assessment instrument for the prediction of sexual recidivism. The results indicate the potential utility and validity of crime scene behaviors in the applied risk assessment of sexual offenders. © The Author(s) 2015.
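
    A minimal sketch of the nonmetric MDS step on which such a Behavioral Thematic Analysis rests, assuming an illustrative Jaccard-style dissimilarity between crime-scene behaviours and toy data; thematic regions would then be read off the resulting two-dimensional configuration. None of this reproduces the study's actual variables.

```python
import numpy as np
from sklearn.manifold import MDS

# Toy binary crime-scene data: offenders x behaviours (1 = behaviour present).
rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(40, 12))

def jaccard_dissimilarity(X):
    # Dissimilarity between behaviours based on co-occurrence (an illustrative choice).
    n = X.shape[1]
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            inter = np.sum((X[:, i] == 1) & (X[:, j] == 1))
            union = np.sum((X[:, i] == 1) | (X[:, j] == 1))
            D[i, j] = (1.0 - inter / union) if union else 0.0
    return D

D = jaccard_dissimilarity(X)
mds = MDS(n_components=2, metric=False, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)   # behaviours in 2D; themes are read off as regions of the plot
```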

  16. Robotic Discovery of the Auditory Scene

    National Research Council Canada - National Science Library

    Martinson, E; Schultz, A

    2007-01-01

    .... Motivated by the large negative effect of ambient noise sources on robot audition, the long-term goal is to provide awareness of the auditory scene to a robot, so that it may more effectively act...

  17. Developmental Changes in Attention to Faces and Bodies in Static and Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Brenda M Stoesz

    2014-03-01

    Full Text Available Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviours of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process, and are especially prone to look away from faces when viewing complex social scenes – a strategy that could reduce the cognitive and the affective load imposed by having to divide one’s attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviours in typical and atypical development.

  18. Procedural-Based Category Learning in Patients with Parkinson's Disease: Impact of Category Number and Category Continuity

    Directory of Open Access Journals (Sweden)

    J. Vincent eFiloteo

    2014-02-01

    Full Text Available Previously we found that Parkinson's disease (PD) patients are impaired in procedural-based category learning when category membership is defined by a nonlinear relationship between stimulus dimensions, but these same patients are normal when the rule is defined by a linear relationship (Filoteo et al., 2005; Maddox & Filoteo, 2001). We suggested that PD patients' impairment was due to a deficit in recruiting 'striatal units' to represent complex nonlinear rules. In the present study, we further examined the nature of PD patients' procedural-based deficit in two experiments designed to examine the impact of (1) the number of categories and (2) category discontinuity on learning. Results indicated that PD patients were impaired only under discontinuous category conditions but were normal when the number of categories was increased from two to four. The lack of impairment in the four-category condition suggests normal integrity of striatal medium spiny cells involved in procedural-based category learning. In contrast, and consistent with our previous observation of a nonlinear deficit, the finding that PD patients were impaired in the discontinuous condition suggests that these patients are impaired when they have to associate perceptually distinct exemplars with the same category. Theoretically, this deficit might be related to dysfunctional communication among medium spiny neurons within the striatum, particularly given that these are cholinergic neurons and a cholinergic deficiency could underlie some of PD patients' cognitive impairment.

  19. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    Directory of Open Access Journals (Sweden)

    Mengyun Liu

    2017-12-01

    Full Text Available After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning, which has been proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web
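
    One way to picture the scene-constrained particle filter described above is sketched below: particles whose position lies in a scene other than the one recognized by the camera are strongly down-weighted before the WiFi/magnetic fingerprint likelihood update. The functions, penalty factor, and toy floor layout are assumptions, not the authors' implementation.

```python
import numpy as np

def update_particles(particles, weights, scene_label, scene_of, fingerprint_likelihood, z):
    # One scene-constrained weight update (illustrative, not the paper's exact algorithm).
    # particles: (N, 2) positions; scene_of(p) -> scene label at position p;
    # fingerprint_likelihood(p, z) -> likelihood of WiFi/magnetic observation z at p.
    for i, p in enumerate(particles):
        w = fingerprint_likelihood(p, z)
        if scene_of(p) != scene_label:      # camera-recognized scene acts as a constraint
            w *= 1e-3                        # heavily penalise particles in the wrong scene
        weights[i] *= w
    weights /= weights.sum()
    return weights

# Toy usage with two square "scenes" split at x = 5.
rng = np.random.default_rng(2)
particles = rng.uniform(0, 10, size=(500, 2))
weights = np.ones(500) / 500
scene_of = lambda p: "corridor" if p[0] < 5 else "lobby"
fingerprint_likelihood = lambda p, z: np.exp(-np.linalg.norm(p - z) ** 2 / 2.0)
weights = update_particles(particles, weights, "lobby", scene_of,
                           fingerprint_likelihood, z=np.array([7.0, 3.0]))
estimate = particles.T @ weights             # weighted mean position
```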

  20. Contested Categories

    DEFF Research Database (Denmark)

    Drawing on social science perspectives, Contested Categories presents a series of empirical studies that engage with the often shifting and day-to-day realities of life sciences categories. In doing so, it shows how such categories remain contested and dynamic, and that the boundaries they create...

  1. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes

    OpenAIRE

    Fernández-Martín, Andrés (UNIR); Gutiérrez-García, Aida; Capafons, Juan; Calvo, Manuel G

    2017-01-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of the adaptive relevance of scene affective content for male and female observers. Pairs of emotional and neutral images appeared peripherally, with perceptual stimulus differences controlled, while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and by the time until first fixation. Emo...

  2. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  3. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  4. Number of perceptually distinct surface colors in natural scenes.

    Science.gov (United States)

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.

  5. Characteristics of nontrauma scene flights for air medical transport.

    Science.gov (United States)

    Krebs, Margaret G; Fletcher, Erica N; Werman, Howard; McKenzie, Lara B

    2014-01-01

    Little is known about the use of air medical transport for patients with medical, rather than traumatic, emergencies. This study describes the practices of air transport programs, with respect to nontrauma scene responses, in several areas throughout the United States and Canada. A descriptive, retrospective study was conducted of all nontrauma scene flights from 2008 and 2009. Flight information and patient demographic data were collected from 5 air transport programs. Descriptive statistics were used to examine indications for transport, Glasgow Coma Scale scores, and loaded miles traveled. A total of 1,785 nontrauma scene flights were evaluated. The percentage of scene flights contributed by nontraumatic emergencies varied between programs, ranging from 0% to 44.3%. The most common indication for transport was cardiac: non-ST-segment elevation myocardial infarction (22.9%). Cardiac arrest was the indication for transport in 2.5% of flights. One air transport program reported a high percentage (49.4%) of neurologic (stroke) flights. The use of air transport for nontraumatic emergencies varied considerably between air transport programs and regions. More research is needed to evaluate which nontraumatic emergencies benefit from air transport. National guidelines regarding the use of air transport for nontraumatic emergencies are needed. Copyright © 2014 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  6. Narrative Collage of Image Collections by Scene Graph Recombination.

    Science.gov (United States)

    Fang, Fei; Yi, Miao; Feng, Hui; Hu, Shenghong; Xiao, Chunxia

    2017-10-04

    Narrative collage is an interesting image editing art that summarizes the main theme or storyline behind an image collection. We present a novel method to generate narrative images with plausible semantic scene structures. To achieve this goal, we introduce a layer graph and a scene graph to represent the relative depth order and the semantic relationships between image objects, respectively. We first cluster the input image collection to select representative images, and then extract a group of semantically salient objects from each representative image. Layer graphs and scene graphs are constructed and combined according to specific rules for reorganizing the extracted objects in every image. We design an energy model to appropriately locate every object on the final canvas. Experimental results show that our method produces competitive narrative collage results and works well on a wide range of image collections.

  7. Virtual environments for scene of crime reconstruction and analysis

    Science.gov (United States)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.

  8. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

    Contract FA8750-14-2-0072; program element 62788F. Only report-documentation fragments of the abstract are available: the listed figures describe a 3D processing pipeline for dynamic scene analysis, whose key modules appear to build on structure from motion and bundle adjustment, with fusion of depth masks of the scene.

  9. Children's Development of Analogical Reasoning: Insights from Scene Analogy Problems

    Science.gov (United States)

    Richland, Lindsey E.; Morrison, Robert G.; Holyoak, Keith J.

    2006-01-01

    We explored how relational complexity and featural distraction, as varied in scene analogy problems, affect children's analogical reasoning performance. Results with 3- and 4-year-olds, 6- and 7-year-olds, 9- to 11-year-olds, and 13- and 14-year-olds indicate that when children can identify the critical structural relations in a scene analogy…

  10. The Influence of Color on the Perception of Scene Gist

    Science.gov (United States)

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  11. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Directory of Open Access Journals (Sweden)

    Nordström Henrik

    2012-05-01

    Full Text Available Background: In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results: Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions: Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes.

  12. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Science.gov (United States)

    2012-01-01

    Background: In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results: Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions: Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397

  13. Heterogeneity in Perceptual Category Learning by High Functioning Children with Autism Spectrum Disorder

    Directory of Open Access Journals (Sweden)

    Eduardo eMercado

    2015-06-01

    Full Text Available Previous research suggests that high functioning children with Autism Spectrum Disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally-based theories account for atypical perceptual category learning shown by high functioning children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children’s performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets.

  14. Heterogeneity in perceptual category learning by high functioning children with autism spectrum disorder.

    Science.gov (United States)

    Mercado, Eduardo; Church, Barbara A; Coutinho, Mariana V C; Dovgopoly, Alexander; Lopata, Christopher J; Toomey, Jennifer A; Thomeer, Marcus L

    2015-01-01

    Previous research suggests that high functioning (HF) children with autism spectrum disorder (ASD) sometimes have problems learning categories, but often appear to perform normally in categorization tasks. The deficits that individuals with ASD show when learning categories have been attributed to executive dysfunction, general deficits in implicit learning, atypical cognitive strategies, or abnormal perceptual biases and abilities. Several of these psychological explanations for category learning deficits have been associated with neural abnormalities such as cortical underconnectivity. The present study evaluated how well existing neurally based theories account for atypical perceptual category learning shown by HF children with ASD across multiple category learning tasks involving novel, abstract shapes. Consistent with earlier results, children's performances revealed two distinct patterns of learning and generalization associated with ASD: one was indistinguishable from performance in typically developing children; the other revealed dramatic impairments. These two patterns were evident regardless of training regimen or stimulus set. Surprisingly, some children with ASD showed both patterns. Simulations of perceptual category learning could account for the two observed patterns in terms of differences in neural plasticity. However, no current psychological or neural theory adequately explains why a child with ASD might show such large fluctuations in category learning ability across training conditions or stimulus sets.

  15. An approach for brain-controlled prostheses based on Scene Graph Steady-State Visual Evoked Potentials.

    Science.gov (United States)

    Li, Rui; Zhang, Xiaodong; Li, Hanzhe; Zhang, Liming; Lu, Zhufeng; Chen, Jiangcheng

    2018-08-01

    Brain control technology can restore communication between the brain and a prosthesis, and choosing a Brain-Computer Interface (BCI) paradigm to evoke electroencephalogram (EEG) signals is an essential step in developing this technology. In this paper, the Scene Graph paradigm for controlling prostheses was proposed; this paradigm is based on Steady-State Visual Evoked Potentials (SSVEPs) elicited by a Scene Graph representing a subject's intention. A mathematical model was built to predict SSVEPs evoked by the proposed paradigm, and a sinusoidal stimulation method was used to present the Scene Graph stimulus to elicit SSVEPs from subjects. Then, a 2-degree-of-freedom (2-DOF) brain-controlled prosthesis system was constructed to validate the performance of the Scene Graph-SSVEP (SG-SSVEP)-based BCI. The classification of SG-SSVEPs was performed via the Canonical Correlation Analysis (CCA) approach. To assess the efficiency of the proposed BCI system, its performance was compared with that of a traditional SSVEP-BCI system. Experimental results from six subjects suggested that the proposed system effectively enhanced the SSVEP responses, decreased the degradation of SSVEP strength, and reduced visual fatigue in comparison with the traditional SSVEP-BCI system. The average signal-to-noise ratio (SNR) of SG-SSVEP was 6.31 ± 2.64 dB, versus 3.38 ± 0.78 dB for the traditional SSVEP. In addition, the proposed system achieved good performance in prosthesis control. The average accuracy was 94.58% ± 7.05%, and the corresponding information transfer rate (ITR) was 19.55 ± 3.07 bit/min. The experimental results revealed that the SG-SSVEP-based BCI system achieves good performance and improved stability relative to the conventional approach. Copyright © 2018 Elsevier B.V. All rights reserved.
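
    The CCA step reported above is, in its standard formulation, a correlation of the EEG segment with sinusoidal reference signals at each candidate stimulation frequency. The sketch below shows that generic detector; the function names, harmonic count, and toy data are illustrative assumptions, not the authors' implementation.

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def ssvep_cca_classify(eeg, fs, freqs, n_harmonics=2):
    # Classify an SSVEP trial by canonical correlation with sinusoidal references.
    # eeg: (n_samples, n_channels); freqs: candidate stimulation frequencies in Hz.
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        refs = np.column_stack([fn(2 * np.pi * f * h * t)
                                for h in range(1, n_harmonics + 1)
                                for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1)
        u, v = cca.fit_transform(eeg, refs)
        scores.append(abs(np.corrcoef(u[:, 0], v[:, 0])[0, 1]))
    return freqs[int(np.argmax(scores))], scores

# Toy trial: 8-channel EEG at 250 Hz with a weak 10 Hz component embedded in noise.
fs, dur = 250, 2.0
t = np.arange(int(fs * dur)) / fs
eeg = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + np.random.randn(t.size, 8)
detected, _ = ssvep_cca_classify(eeg, fs, freqs=[8.0, 10.0, 12.0, 15.0])
```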

  16. The Role of Corticostriatal Systems in Speech Category Learning.

    Science.gov (United States)

    Yi, Han-Gyol; Maddox, W Todd; Mumford, Jeanette A; Chandrasekaran, Bharath

    2016-04-01

    One of the most difficult category learning problems for humans is learning nonnative speech categories. While feedback-based category training can enhance speech learning, the mechanisms underlying these benefits are unclear. In this functional magnetic resonance imaging study, we investigated neural and computational mechanisms underlying feedback-dependent speech category learning in adults. Positive feedback activated a large corticostriatal network including the dorsolateral prefrontal cortex, inferior parietal lobule, middle temporal gyrus, caudate, putamen, and the ventral striatum. Successful learning was contingent upon the activity of domain-general category learning systems: the fast-learning reflective system, involving the dorsolateral prefrontal cortex that develops and tests explicit rules based on the feedback content, and the slow-learning reflexive system, involving the putamen in which the stimuli are implicitly associated with category responses based on the reward value in feedback. Computational modeling of response strategies revealed significant use of reflective strategies early in training and greater use of reflexive strategies later in training. Reflexive strategy use was associated with increased activation in the putamen. Our results demonstrate a critical role for the reflexive corticostriatal learning system as a function of response strategy and proficiency during speech category learning. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  17. OpenSceneGraph 3 Cookbook

    CERN Document Server

    Wang, Rui

    2012-01-01

    This is a cookbook full of recipes with practical examples enriched with code and the required screenshots for easy and quick comprehension. You should be familiar with the basic concepts of the OpenSceneGraph API and should be able to write simple programs. Some OpenGL and math knowledge will help a lot, too.

  18. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entails both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. And recently, studies have shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and also should allow scientists to precisely distinguish the influences of different higher-level influences.

  19. The Anthropo-scene: A guide for the perplexed.

    Science.gov (United States)

    Lorimer, Jamie

    2017-02-01

    The scientific proposal that the Earth has entered a new epoch as a result of human activities - the Anthropocene - has catalysed a flurry of intellectual activity. I introduce and review the rich, inchoate and multi-disciplinary diversity of this Anthropo-scene. I identify five ways in which the concept of the Anthropocene has been mobilized: scientific question, intellectual zeitgeist, ideological provocation, new ontologies and science fiction. This typology offers an analytical framework for parsing this diversity, for understanding the interactions between different ways of thinking in the Anthropo-scene, and thus for comprehending elements of its particular and peculiar sociabilities. Here I deploy this framework to situate Earth Systems Science within the Anthropo-scene, exploring both the status afforded science in discussions of this new epoch, and the various ways in which the other means of engaging with the concept come to shape the conduct, content and politics of this scientific enquiry. In conclusion the paper reflects on the potential of the Anthropocene for new modes of academic praxis.

  20. A Virtual Environments Editor for Driving Scenes

    Directory of Open Access Journals (Sweden)

    Ronald R. Mourant

    2003-12-01

    Full Text Available The goal of this project was to enable the rapid creation of three-dimensional virtual driving environments. We designed and implemented a high-level scene editor that allows a user to construct a driving environment by pasting icons that represent (1) road segments, (2) road signs, (3) trees, and (4) buildings. These icons represent two- and three-dimensional objects that have been predesigned. Icons can be placed in the scene at specific locations (x, y, and z coordinates). The editor also allows a user to "drive" a vehicle using a computer mouse for steering, accelerating, and braking. At any time during the process of building a virtual environment, a user may switch to "Run Mode" and inspect the three-dimensional scene by "driving" through it using the mouse. Adjustments and additions can be made to the virtual environment by going back to "Build Mode". Once a user is satisfied with the three-dimensional virtual environment, it can be saved in a file. The file can be used with Java3D software that enables traversal of three-dimensional environments. The process of building virtual environments from predesigned icons can be applied to many other application areas. It will enable novice computer users to rapidly construct and use three-dimensional virtual environments.
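
    A minimal sketch of the kind of data structure such an editor might save, assuming a hypothetical JSON layout (the original system used its own file format consumed by Java3D): each pasted icon stores its type and (x, y, z) placement. Field names and the extra heading attribute are illustrative assumptions.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class SceneIcon:
    kind: str              # "road_segment", "road_sign", "tree", or "building"
    x: float
    y: float
    z: float
    heading: float = 0.0   # orientation in degrees (an illustrative extra field)

# Build Mode: paste icons at specific (x, y, z) locations.
scene = [
    SceneIcon("road_segment", 0.0, 0.0, 0.0),
    SceneIcon("road_sign", 5.0, 0.0, 2.5, heading=90.0),
    SceneIcon("tree", -3.0, 0.0, 8.0),
]

# Serialize the environment so a 3D runtime could later traverse it in Run Mode.
saved = json.dumps([asdict(icon) for icon in scene], indent=2)
print(saved)
```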

  1. Where and when Do Objects Become Scenes?

    Directory of Open Access Journals (Sweden)

    Jiye G. Kim

    2011-05-01

    Full Text Available Scenes can be understood with extraordinary speed and facility, not merely as an inventory of individual objects but through the coding of the relations among them. These relations, which can be readily described by prepositions or gerunds (e.g., a hand holding a pen), allow the explicit representation of complex structures. Where in the brain are inter-object relations specified? In a series of fMRI experiments, we show that pairs of objects shown as interacting elicit greater activity in LOC than when the objects are depicted side-by-side (e.g., a hand beside a pen). Other visual areas, PPA, IPS, and DLPFC, did not show this sensitivity to scene relations, rendering it unlikely that the relations were computed in these regions. Using EEG and TMS, we further show that LOC's sensitivity to object interactions arises around 170 ms post stimulus onset and that disruption of normal LOC activity—but not IPS activity—is detrimental to the behavioral sensitivity to inter-object relations. Insofar as LOC is the earliest cortical region where shape is distinguished from texture, our results provide strong evidence that scene-like relations are achieved simultaneously with the perception of object shape and not inferred at some stage following object identification.

  2. Converging modalities ground abstract categories: the case of politics.

    Science.gov (United States)

    Farias, Ana Rita; Garrido, Margarida V; Semin, Gün R

    2013-01-01

    Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory) and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  3. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation in which each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction objectives simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  4. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of urban scenes increase the difficulty of building city models from oblique images, but urban scenes also contain many flat surfaces. One of our key contributions is a dense matching algorithm based on self-adaptive patches designed for urban scenes. The basic idea of match propagation with self-adaptive patches is to build patches centred on seed points that are already matched. The extent and shape of the patches adapt automatically to the objects in the urban scene: when the surface is flat, the patch grows larger; when the surface is very rough, the patch shrinks. The other contribution is that the mesh generated by graph cuts is a 2-manifold surface that satisfies the half-edge data structure. This is achieved by clustering and re-marking tetrahedra in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm that preserves and highlights the features of buildings.
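
    The self-adaptive patch idea can be caricatured as follows: the patch radius around a matched seed point grows when the local surface is flat (low spread of neighbouring normals) and shrinks when it is rough. The rule, thresholds, and toy normals below are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def adaptive_patch_radius(normals, r_min=2, r_max=10, flat_thresh=0.05):
    # Choose a patch radius around a seed point from local surface flatness:
    # low spread of neighbouring unit normals -> flat surface -> larger patch.
    spread = 1.0 - np.linalg.norm(normals.mean(axis=0))   # 0 when all normals agree
    if spread < flat_thresh:
        return r_max
    # Shrink the radius smoothly as the surface gets rougher.
    t = min(spread / (4 * flat_thresh), 1.0)
    return int(round(r_max - t * (r_max - r_min)))

# Toy neighbourhood of unit normals that all point roughly upward (a flat roof or facade).
normals = np.tile(np.array([0.0, 0.0, 1.0]), (25, 1)) + 0.01 * np.random.randn(25, 3)
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
print(adaptive_patch_radius(normals))   # large radius on this nearly flat patch
```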

  5. Automatic processing of semantic relations in fMRI: neural activation during semantic priming of taxonomic and thematic categories.

    Science.gov (United States)

    Sachs, Olga; Weis, Susanne; Zellagui, Nadia; Huber, Walter; Zvyagintsev, Mikhail; Mathiak, Klaus; Kircher, Tilo

    2008-07-07

    Most current models of knowledge organization are based on hierarchical or taxonomic categories (animals, tools). Another important organizational pattern is thematic categorization, i.e. categories held together by external relations, a unifying scene or event (car and garage). The goal of this study was to compare the neural correlates of these categories under automatic processing conditions that minimize strategic influences. We used fMRI to examine neural correlates of semantic priming for category members with a short stimulus onset asynchrony (SOA) of 200 ms as subjects performed a lexical decision task. Four experimental conditions were compared: thematically related words (car-garage); taxonomically related (car-bus); unrelated (car-spoon); non-word trials (car-derf). We found faster reaction times for related than for unrelated prime-target pairs for both thematic and taxonomic categories. However, the size of the thematic priming effect was greater than that of the taxonomic. The imaging data showed signal changes for the taxonomic priming effects in the right precuneus, postcentral gyrus, middle frontal and superior frontal gyri and thematic priming effects in the right middle frontal gyrus and anterior cingulate. The contrast of neural priming effects showed larger signal changes in the right precuneus associated with the taxonomic but not with thematic priming response. We suggest that the greater involvement of precuneus in the processing of taxonomic relations indicates their reduced salience in the knowledge structure compared to more prominent thematic relations.

  6. Effects of scene content and layout on the perceived light direction in 3D spaces.

    Science.gov (United States)

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments to investigate how the perception of the light direction is influenced by a scene's objects and their layout using real scenes. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, human perception of light should integrally consider materials, scene content, and layout.

  7. Overt attention in natural scenes: objects dominate features.

    Science.gov (United States)

    Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang

    2015-02-01

    Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models are again on par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection--and by inference object-based attentional guidance--in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd. All rights reserved.
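    A toy sketch of an object-based prediction map with a preferred viewing location (PVL): each annotated object contributes a Gaussian centred on the object's centre, with spread proportional to object size. This only illustrates the idea; the bounding-box format and the sigma fraction are assumptions, not the model evaluated in the study.

    ```python
    import numpy as np

    def pvl_fixation_map(shape, objects, sigma_frac=0.25):
        """Toy object-based fixation map: each object adds a Gaussian centred on
        its centre (the preferred viewing location) with spread proportional to
        its size. `objects` is a list of (row_min, col_min, row_max, col_max)."""
        h, w = shape
        rr, cc = np.mgrid[0:h, 0:w]
        fix_map = np.zeros(shape, dtype=float)
        for (r0, c0, r1, c1) in objects:
            centre_r, centre_c = (r0 + r1) / 2.0, (c0 + c1) / 2.0
            sigma = sigma_frac * max(r1 - r0, c1 - c0)
            fix_map += np.exp(-((rr - centre_r) ** 2 + (cc - centre_c) ** 2)
                              / (2 * sigma ** 2))
        total = fix_map.sum()
        return fix_map / total if total > 0 else fix_map   # probability map
    ```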

  8. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed
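    The ADF-pixel level can be illustrated with a minimal sketch (not the authors' exact feature set): absolute per-pixel differences between co-registered multi-angle images, optionally averaged within precomputed superpixels to suppress salt-and-pepper noise. The three-view setup, array names, and the averaging step are assumptions.

    ```python
    import numpy as np

    def adf_pixel(nadir, forward, backward, segments=None):
        """Pixel-level angular-difference features as absolute differences between
        co-registered multi-angle images, optionally smoothed by averaging within
        superpixels (`segments` is an integer label image of the same size)."""
        nadir, forward, backward = (np.asarray(a, dtype=float)
                                    for a in (nadir, forward, backward))
        feats = np.stack([np.abs(nadir - forward),
                          np.abs(nadir - backward),
                          np.abs(forward - backward)], axis=-1)
        if segments is not None:                     # superpixel refinement
            counts = np.bincount(segments.ravel())
            for k in range(feats.shape[-1]):
                sums = np.bincount(segments.ravel(), weights=feats[..., k].ravel())
                feats[..., k] = (sums / np.maximum(counts, 1))[segments]
        return feats
    ```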

  9. Gist in time: Scene semantics and structure enhance recall of searched objects.

    Science.gov (United States)

    Josephs, Emilie L; Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L-H

    2016-09-01

    Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization. Copyright © 2016 Elsevier B.V. All rights reserved.

  10. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    Science.gov (United States)

    1987-04-01

    Image Chunking: Defining Spatial Building Blocks for Scene Analysis. Technical Report 980, MIT Artificial Intelligence Laboratory, James V. Mahoney.

  11. Global Transsaccadic Change Blindness During Scene Perception

    National Research Council Canada - National Science Library

    Henderson, John

    2003-01-01

    .... The results from two experiments demonstrated a global transsaccadic change-blindness effect, suggesting that point-by-point visual representations are not functional across saccades during complex scene perception.

  12. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    Science.gov (United States)

    Crabb, David P; Smith, Nicholas D; Rauscher, Franziska G; Chisholm, Catharine M; Barbur, John L; Edgar, David F; Garway-Heath, David F

    2010-03-16

    Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patient's actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could provide useful information about the definition of the visual field component required for

  13. Ontology of a scene based on Java 3D architecture.

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2009-12-01

    Full Text Available This article approaches the class hierarchy of a scene built with the Java 3D architecture in order to develop a scene ontology from the essential semantic components needed for the semantic structuring of the Web3D. Java was selected because the language recommended by the W3C Consortium for the development of Web3D-oriented applications based on the X3D standard is Xj3D, whose schemas are composed on top of the Java3D architecture. The work first identifies the domain and scope of the ontology, defining the classes and subclasses that make up the Java3D architecture and the essential elements of a scene, such as its point of origin, its rotation and translation fields, the bounds of the scene, and the definition of shaders. It then defines the slots, declared in RDF, as a framework for describing the properties of the established classes, identifying the domain and range of each class, and develops the OWL ontology in SWOOP. Finally, instantiations of the ontology are performed, building an Iconosphere object from the defined class expressions.
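    A minimal sketch of how a few such classes and slots could be declared programmatically, assuming the rdflib Python library; the namespace, class names, and property below are illustrative and do not reproduce the article's ontology.

    ```python
    from rdflib import Graph, Namespace, RDF, RDFS, OWL

    EX = Namespace("http://example.org/java3d-scene#")   # illustrative namespace
    g = Graph()
    g.bind("ex", EX)

    # Classes mirroring a Java3D-style scene hierarchy (hypothetical names)
    for cls in ("Scene", "TransformGroup", "Shape3D", "Appearance", "Shader"):
        g.add((EX[cls], RDF.type, OWL.Class))
    g.add((EX.TransformGroup, RDFS.subClassOf, EX.Scene))

    # A slot (property) relating a scene to the shapes it contains
    g.add((EX.hasShape, RDF.type, OWL.ObjectProperty))
    g.add((EX.hasShape, RDFS.domain, EX.Scene))
    g.add((EX.hasShape, RDFS.range, EX.Shape3D))

    print(g.serialize(format="turtle"))
    ```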

  14. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    Science.gov (United States)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fire is a hazardous event that can lead to disaster and massive destruction, and the management and disposal of building fires has long attracted research interest. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decision-making, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, this paper first analyses the application requirements and modelling objectives of a building fire scene. Then, the four core elements of the model (the building space environment, the fire event, the indoor Fire Extinguishing System (FES) and the indoor crowd) are implemented, and the relationships between these elements are discussed. Finally, following the theory and framework of VGE, the building fire scene system is designed across the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  15. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    Science.gov (United States)

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  16. Eye guidance during real-world scene search: The role color plays in central and peripheral vision.

    Science.gov (United States)

    Nuthmann, Antje; Malcolm, George L

    2016-01-01

    The visual system utilizes environmental features to direct gaze efficiently when locating objects. While previous research has isolated various features' contributions to gaze guidance, these studies generally used sparse displays and did not investigate how features facilitated search as a function of their location on the visual field. The current study investigated how features across the visual field--particularly color--facilitate gaze guidance during real-world search. A gaze-contingent window followed participants' eye movements, restricting color information to specified regions. Scene images were presented in full color, with color in the periphery and gray in central vision or gray in the periphery and color in central vision, or in grayscale. Color conditions were crossed with a search cue manipulation, with the target cued either with a word label or an exact picture. Search times increased as color information in the scene decreased. A gaze-data based decomposition of search time revealed color-mediated effects on specific subprocesses of search. Color in peripheral vision facilitated target localization, whereas color in central vision facilitated target verification. Picture cues facilitated search, with the effects of cue specificity and scene color combining additively. When available, the visual system utilizes the environment's color information to facilitate different real-world visual search behaviors based on the location within the visual field.

  17. Converging modalities ground abstract categories: the case of politics.

    Directory of Open Access Journals (Sweden)

    Ana Rita Farias

    Full Text Available Three studies are reported examining the grounding of abstract concepts across two modalities (visual and auditory and their symbolic representation. A comparison of the outcomes across these studies reveals that the symbolic representation of political concepts and their visual and auditory modalities is convergent. In other words, the spatial relationships between specific instances of the political categories are highly overlapping across the symbolic, visual and auditory modalities. These findings suggest that abstract categories display redundancy across modal and amodal representations, and are multimodal.

  18. Distributional learning aids linguistic category formation in school-age children.

    Science.gov (United States)

    Hall, Jessica; Owen VAN Horne, Amanda; Farmer, Thomas

    2018-05-01

    The goal of this study was to determine if typically developing children could form grammatical categories from distributional information alone. Twenty-seven children aged six to nine listened to an artificial grammar which contained strategic gaps in its distribution. At test, we compared how children rated novel sentences that fit the grammar to sentences that were ungrammatical. Sentences could be distinguished only through the formation of categories of words with shared distributional properties. Children's ratings revealed that they could discriminate grammatical and ungrammatical sentences. These data lend support to the hypothesis that distributional learning is a potential mechanism for learning grammatical categories in a first language.

  19. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    Full Text Available In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process, and a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images, and achieved more accurate classification results than the compared methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
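    A minimal sketch, under stated assumptions, of the kind of multiscale processing involved: a 2-D wavelet decomposition (using PyWavelets) yields per-level detail energies as crude attention-like features, and an inverse-distance rule turns distances to class prototypes into fuzzy memberships. This is an illustration only, not the FC-VAF algorithm.

    ```python
    import numpy as np
    import pywt

    def multiscale_energy_features(image, wavelet="db2", levels=3):
        """Energy of the detail coefficients at each decomposition level, a crude
        stand-in for multiscale visual-attention features."""
        coeffs = pywt.wavedec2(np.asarray(image, dtype=float), wavelet, level=levels)
        return np.array([sum(float(np.mean(c ** 2)) for c in detail)
                         for detail in coeffs[1:]])       # (cH, cV, cD) per level

    def fuzzy_memberships(features, class_prototypes):
        """Inverse-distance fuzzy memberships to per-class prototype feature
        vectors (one prototype per row of `class_prototypes`)."""
        d = np.linalg.norm(class_prototypes - features, axis=1) + 1e-9
        m = 1.0 / d
        return m / m.sum()
    ```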

  20. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    Science.gov (United States)

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  1. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    Science.gov (United States)

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  2. Technicolor/INRIA team at the MediaEval 2013 Violent Scenes Detection Task

    OpenAIRE

    Penet , Cédric; Demarty , Claire-Hélène; Gravier , Guillaume; Gros , Patrick

    2013-01-01

    International audience; This paper presents the work done at Technicolor and INRIA regarding the MediaEval 2013 Violent Scenes Detection task, which aims at detecting violent scenes in movies. We participated in both the objective and the subjective subtasks.

  3. Multiple vehicle routing and dispatching to an emergency scene

    OpenAIRE

    M S Daskin; A Haghani

    1984-01-01

    A model of the distribution of arrival time at the scene of an emergency for the first of many vehicles is developed for the case in which travel times on the links of the network are normally distributed and the path travel times of different vehicles are correlated. The model suggests that the probability that the first vehicle arrives at the scene within a given time may be increased by reducing the path time correlations, even if doing so necessitates increasing the mean path travel time ...
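    The effect described above can be reproduced with a small Monte-Carlo sketch (illustrative numbers, not the paper's analytical model): when the path travel times are jointly normal, lowering the correlations raises the probability that the first of several vehicles arrives within a deadline, even with identical means and variances.

    ```python
    import numpy as np

    def prob_first_arrival_within(t, means, cov, n_samples=100_000, seed=0):
        """Monte-Carlo estimate of P(min_i T_i <= t) for jointly normal path
        travel times T with the given mean vector and covariance matrix."""
        rng = np.random.default_rng(seed)
        samples = rng.multivariate_normal(means, cov, size=n_samples)
        return float(np.mean(samples.min(axis=1) <= t))

    means = np.array([10.0, 10.0, 10.0])                    # minutes, illustrative
    high_corr = 9.0 * np.array([[1.0, 0.9, 0.9],
                                [0.9, 1.0, 0.9],
                                [0.9, 0.9, 1.0]])           # std 3, correlation 0.9
    low_corr = 9.0 * np.array([[1.0, 0.1, 0.1],
                               [0.1, 1.0, 0.1],
                               [0.1, 0.1, 1.0]])            # std 3, correlation 0.1
    print(prob_first_arrival_within(7.0, means, high_corr))  # smaller probability
    print(prob_first_arrival_within(7.0, means, low_corr))   # larger probability
    ```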

  4. Orbital prefrontal cortex is required for object-in-place scene memory but not performance of a strategy implementation task.

    Science.gov (United States)

    Baxter, Mark G; Gaffan, David; Kyriazis, Diana A; Mitchell, Anna S

    2007-10-17

    The orbital prefrontal cortex is thought to be involved in behavioral flexibility in primates, and human neuroimaging studies have identified orbital prefrontal activation during episodic memory encoding. The goal of the present study was to ascertain whether deficits in strategy implementation and episodic memory that occur after ablation of the entire prefrontal cortex can be ascribed to damage to the orbital prefrontal cortex. Rhesus monkeys were preoperatively trained on two behavioral tasks, the performance of both of which is severely impaired by the disconnection of frontal cortex from inferotemporal cortex. In the strategy implementation task, monkeys were required to learn about two categories of objects, each associated with a different strategy that had to be performed to obtain food reward. The different strategies had to be applied flexibly to optimize the rate of reward delivery. In the scene memory task, monkeys learned 20 new object-in-place discrimination problems in each session. Monkeys were tested on both tasks before and after bilateral ablation of orbital prefrontal cortex. These lesions impaired new scene learning but had no effect on strategy implementation. This finding supports a role for the orbital prefrontal cortex in memory but places limits on the involvement of orbital prefrontal cortex in the representation and implementation of behavioral goals and strategies.

  5. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    Science.gov (United States)

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  6. Negotiating place and gendered violence in Canada's largest open drug scene.

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug

  7. Comparing two K-category assignments by a K-category correlation coefficient

    DEFF Research Database (Denmark)

    Gorodkin, Jan

    2004-01-01

    Predicted assignments of biological sequences are often evaluated by Matthews correlation coefficient. However, Matthews correlation coefficient applies only to cases where the assignments belong to two categories, and cases with more than two categories are often artificially forced into two...... categories by considering what belongs and what does not belong to one of the categories, leading to the loss of information. Here, an extended correlation coefficient that applies to K-categories is proposed, and this measure is shown to be highly applicable for evaluating prediction of RNA secondary...
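    A direct implementation from a confusion matrix is short; the sketch below follows the confusion-matrix form commonly used for the multiclass generalisation of Matthews' coefficient, which reduces to the usual two-category coefficient when K = 2.

    ```python
    import numpy as np

    def k_category_correlation(conf_matrix):
        """K-category correlation coefficient computed from a K x K confusion
        matrix with true categories on the rows and predictions on the columns."""
        C = np.asarray(conf_matrix, dtype=float)
        s = C.sum()                  # total number of assignments
        c = np.trace(C)              # correctly assigned items
        t = C.sum(axis=1)            # how often each category truly occurred
        p = C.sum(axis=0)            # how often each category was predicted
        denom = np.sqrt((s ** 2 - p @ p) * (s ** 2 - t @ t))
        return (c * s - t @ p) / denom if denom > 0 else 0.0

    # For K = 2 this matches the standard Matthews correlation coefficient
    print(k_category_correlation([[45, 5], [10, 40]]))   # ~0.70
    ```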

  8. Real-time scene and signature generation for ladar and imaging sensors

    Science.gov (United States)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
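    As a simple illustration of the kind of computation a 3D heat diffusion solver performs (not VIRSuite's solver), the sketch below advances the heat equation by one explicit finite-difference step on a regular voxel grid with fixed boundary temperatures; the grid spacing, time step, and boundary handling are assumptions.

    ```python
    import numpy as np

    def heat_step_3d(T, alpha, dt, dx):
        """One explicit finite-difference step of dT/dt = alpha * laplacian(T)
        on a regular 3-D grid, keeping the boundary faces at their current
        temperatures. Stable for dt <= dx**2 / (6 * alpha)."""
        lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
               np.roll(T, 1, 1) + np.roll(T, -1, 1) +
               np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx ** 2
        T_new = T + alpha * dt * lap
        # Re-impose the boundary faces (np.roll wraps around, so reset them)
        T_new[0, :, :], T_new[-1, :, :] = T[0, :, :], T[-1, :, :]
        T_new[:, 0, :], T_new[:, -1, :] = T[:, 0, :], T[:, -1, :]
        T_new[:, :, 0], T_new[:, :, -1] = T[:, :, 0], T[:, :, -1]
        return T_new
    ```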

  9. Three-dimensional scene encryption and display based on computer-generated holograms.

    Science.gov (United States)

    Kong, Dezhao; Cao, Liangcai; Jin, Guofan; Javidi, Bahram

    2016-10-10

    An optical encryption and display method for a three-dimensional (3D) scene is proposed based on computer-generated holograms (CGHs) using a single phase-only spatial light modulator. The 3D scene is encoded as one complex Fourier CGH. The Fourier CGH is then decomposed into two phase-only CGHs with random distributions by the vector stochastic decomposition algorithm. Two CGHs are interleaved as one final phase-only CGH for optical encryption and reconstruction. The proposed method can support high-level nonlinear optical 3D scene security and complex amplitude modulation of the optical field. The exclusive phase key offers strong resistances of decryption attacks. Experimental results demonstrate the validity of the novel method.
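    The paper's vector stochastic decomposition is not reproduced here, but the possibility of splitting a complex Fourier CGH into two phase-only CGHs can be illustrated with a minimal deterministic sketch: any complex value of amplitude at most 2 equals the sum of two unit-amplitude phasors.

    ```python
    import numpy as np

    def two_phase_decomposition(field):
        """Split a complex field into two phase-only holograms whose sum
        reproduces it: A*exp(i*phi) = exp(i*(phi + d)) + exp(i*(phi - d)),
        with d = arccos(A / 2). The field is rescaled so its peak amplitude
        is 2 (assumes a non-zero field)."""
        field = np.asarray(field, dtype=complex)
        field = field / np.abs(field).max() * 2.0
        amp, phi = np.abs(field), np.angle(field)
        d = np.arccos(np.clip(amp / 2.0, 0.0, 1.0))
        phase1, phase2 = phi + d, phi - d            # the two phase-only CGHs
        assert np.allclose(np.exp(1j * phase1) + np.exp(1j * phase2), field)
        return phase1, phase2
    ```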

  10. Anticipatory scene representation in preschool children's recall and recognition memory.

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-09-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.

  11. From Theatre Improvisation To Video Scenes

    DEFF Research Database (Denmark)

    Larsen, Henry; Hvidt, Niels Christian; Friis, Preben

    2018-01-01

    At Sygehus Lillebaelt, a Danish hospital, there has been a focus for several years on patient communi- cation. This paper reflects on a course focusing on engaging with the patient’s existential themes in particular the negotiations around the creation of video scenes. In the initial workshops, w...

  12. Scene independent real-time indirect illumination

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    A novel method for real-time simulation of indirect illumination is presented in this paper. The method, which we call Direct Radiance Mapping (DRM), is based on basal radiance calculations and does not impose any restrictions on scene geometry or dynamics. This makes the method tractable for rea...

  13. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  14. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  15. Scene recognition based on integrating active learning with dictionary learning

    Science.gov (United States)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
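    A generic sketch of a sampling criterion that, like the one described, mixes uncertainty with representativeness (here: entropy of the predicted class probabilities plus mean cosine similarity to the rest of the unlabeled pool). The weighting and the similarity measure are assumptions, not the IALDL criterion itself.

    ```python
    import numpy as np

    def select_informative_samples(probs, features, n_select=10, weight=0.5):
        """Rank unlabeled samples by a weighted sum of uncertainty (entropy of
        `probs`, one row of class probabilities per sample) and representativeness
        (mean cosine similarity to the pool), and return the top indices."""
        eps = 1e-12
        uncertainty = -(probs * np.log(probs + eps)).sum(axis=1)
        normed = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
        sim = normed @ normed.T
        representativeness = (sim.sum(axis=1) - 1.0) / (len(features) - 1)

        def rescale(x):                       # map both terms to [0, 1]
            return (x - x.min()) / (x.max() - x.min() + eps)

        score = weight * rescale(uncertainty) + (1 - weight) * rescale(representativeness)
        return np.argsort(score)[::-1][:n_select]
    ```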

  16. Developing Scene Understanding Neural Software for Realistic Autonomous Outdoor Missions

    Science.gov (United States)

    2017-09-01

    The report describes an effort to use computer neural networks to augment human neural intelligence and improve scene understanding for realistic autonomous outdoor missions (Krizhevsky et al. 2012; Zhou et al.), including an implementation of the open-source Python-based AlexNet CNN on a computer with a single graphics processing unit (GPU).

  17. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  18. Categories from scratch

    NARCIS (Netherlands)

    Poss, R.

    2014-01-01

    The concept of category from mathematics happens to be useful to computer programmers in many ways. Unfortunately, all "good" explanations of categories so far have been designed by mathematicians, or at least theoreticians with a strong background in mathematics, and this makes categories

  19. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  20. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem; Lahoud, Jean; Asmar, Daniel; Ghanem, Bernard

    2015-01-01

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding

  1. Higher-order scene statistics of breast images

    Science.gov (United States)

    Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.

    2009-02-01

    Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
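    A minimal sketch of the kind of measurement described: excess kurtosis of Gabor filter responses at several centre frequencies, assuming scikit-image and SciPy. The filter parameters and frequency units are illustrative, not the authors' pipeline.

    ```python
    import numpy as np
    from scipy.stats import kurtosis
    from skimage.filters import gabor

    def bandpass_kurtosis(image, frequencies, theta=0.0):
        """Excess kurtosis of the real Gabor responses at each centre frequency
        (cycles/pixel), a simple proxy for higher-order scene statistics."""
        values = []
        for f in frequencies:
            real, _imag = gabor(np.asarray(image, dtype=float), frequency=f, theta=theta)
            values.append(kurtosis(real.ravel()))
        return np.array(values)
    ```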

  2. Study on general design of dual-DMD based infrared two-band scene simulation system

    Science.gov (United States)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    A mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation system is a kind of testing equipment used for infrared two-band imaging seekers. It must not only cover the working wavebands but also meet the essential requirement that the infrared radiation characteristics correspond to the real scene. Previous single digital micromirror device (DMD) based infrared scene simulation systems do not take the large difference between target and background radiation into account and cannot modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately reproduce the thermal scene model built by the host computer, and it is of limited practical use. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed. An equation for the signal-to-noise ratio of the infrared detector in the seeker is also derived, guiding the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, opto-mechanical structure design, and image registration. By analyzing and comparing previous designs, we discuss the arrangement of the optical engine framework in the system. In line with the working principle and overall design, we summarize the key techniques of the system.

  3. Cross-cultural differences in item and background memory: examining the influence of emotional intensity and scene congruency.

    Science.gov (United States)

    Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H

    2018-07-01

    After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.

  4. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
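    The coupling measure can be illustrated generically (this is not the MEG corticovocal-coherence pipeline itself): magnitude-squared coherence between a cortical signal and the speech temporal envelope, summarised in the slow (~0.5 Hz) and syllabic (4-8 Hz) bands discussed above. The sampling rate, window length, and band edges are assumptions.

    ```python
    import numpy as np
    from scipy.signal import coherence

    def envelope_coherence_bands(cortical, speech_envelope, fs, nperseg=8192):
        """Magnitude-squared coherence between a cortical signal and a speech
        envelope, averaged within illustrative slow and syllabic bands."""
        f, cxy = coherence(cortical, speech_envelope, fs=fs, nperseg=nperseg)

        def band_mean(lo, hi):
            mask = (f >= lo) & (f <= hi)
            return float(cxy[mask].mean()) if mask.any() else float("nan")

        return {"slow_~0.5Hz": band_mean(0.3, 0.7), "syllabic_4-8Hz": band_mean(4.0, 8.0)}
    ```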

  5. Wall grid structure for interior scene synthesis

    KAUST Repository

    Xu, Wenzhuo; Wang, Bin; Yan, Dongming

    2015-01-01

    We present a system for automatically synthesizing a diverse set of semantically valid, and well-arranged 3D interior scenes for a given empty room shape. Unlike existing work on layout synthesis, that typically knows potentially needed 3D models

  6. Popular music scenes and aging bodies.

    Science.gov (United States)

    Bennett, Andy

    2018-06-01

    During the last two decades there has been increasing interest in the phenomenon of the aging popular music audience (Bennett & Hodkinson, 2012). Although the specter of the aging fan is by no means new, the notion of, for example, the aging rocker or the aging punk has attracted significant sociological attention, not least of all because of what this says about the shifting socio-cultural significance of rock and punk and similar genres - which at the time of their emergence were inextricably tied to youth and vociferously marketed as "youth musics". As such, initial interpretations of aging music fans tended to paint a somewhat negative picture, suggesting a sense in which such fans were cultural misfits (Ross, 1994). In more recent times, however, work informed by cultural aging perspectives has begun to consider how so-called "youth cultural" identities may in fact provide the basis of more stable and evolving identities over the life course (Bennett, 2013). Starting from this position, the purpose of this article is to critically examine how aging members of popular music scenes might be recast as a salient example of the more pluralistic fashion in which aging is anticipated, managed and articulated in contemporary social settings. The article then branches out to consider two ways that aging members of music scenes continue their scene involvement. The first focuses on evolving a series of discourses that legitimately position them as aging bodies in cultural spaces that also continue to be inhabited by significant numbers of people in their teens, twenties and thirties. The second sees aging fans taking advantage of new opportunities for consuming live music including winery concerts and dinner and show events. Copyright © 2018 Elsevier Inc. All rights reserved.

  7. Eye Movements when Looking at Unusual/Weird Scenes: Are There Cultural Differences?

    Science.gov (United States)

    Rayner, Keith; Castelhano, Monica S.; Yang, Jinmian

    2009-01-01

    Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how…

  8. The elephant in the room: inconsistency in scene viewing and representation

    OpenAIRE

    Spotorno, Sara; Tatler, Benjamin W.

    2017-01-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1–2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene’s depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativene...

  9. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    Science.gov (United States)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
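    A minimal sketch of the per-pixel depth test that z-depth-aware compositing rests on (not the ZeDI plug-in itself): given two rendered layers with colour and z-depth buffers, the pixel from whichever layer lies nearer the camera wins. The array shapes and names are assumptions.

    ```python
    import numpy as np

    def zdepth_composite(color_a, z_a, color_b, z_b):
        """Per-pixel composite of two layers using their z-depth buffers: smaller
        z (nearer the camera) wins. color_* are HxWx3 arrays, z_* are HxW."""
        nearer_a = (z_a <= z_b)[..., None]      # broadcast over the colour axis
        color = np.where(nearer_a, color_a, color_b)
        z = np.minimum(z_a, z_b)
        return color, z
    ```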

  10. Special effects used in creating 3D animated scenes-part 1

    Science.gov (United States)

    Avramescu, A. M.

    2015-11-01

    At present, with the help of computers, we can create special effects that look so real that we almost do not perceive them as being different. These special effects are hard to differentiate from the real elements on the screen. With the increasingly accessible 3D field, which has more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of big global corporations, for special effects and even for virtual actors in movies. Technology, as part of the movie art, is considered a prerequisite, but cinematography is the first art that had to wait for the correct intersection of technological development, innovation and human vision in order to attain full achievement. Increasingly often, most industries use three-dimensional (3D) sequences: rendered graphics, commercials and special effects in movies are all designed in 3D. The key to attaining realistic visual effects is to successfully combine various distinct elements--characters, objects, images and video scenes--so that all these elements form a whole that works in perfect harmony. This article presents a game design of the present day. Considering the advanced technology and futuristic vision of designers, nowadays we have different and multifarious game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene. These effects are essential for transmitting the emotional state of the scene. Creating special effects is a work of finesse in order to achieve high quality scenes. Special effects can be used to draw the attention of the onlooker to an object in a scene. According to the study conducted, the best-selling game of the year 2010 was Call of Duty: Modern Warfare 2. In this way, the article aims for the presented scene to be similar to many locations from this type of game, more

  11. Non-uniform crosstalk reduction for dynamic scenes

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.; Fröhlich, B.

    2007-01-01

    Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focussed on the preprocessing of static scenes which are viewed from a fixed viewpoint. However, in virtual environments

  12. Multimodal computational attention for scene understanding and robotics

    CERN Document Server

    Schauerte, Boris

    2016-01-01

    This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms and discusses various applications. In the first part new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration of a robot. In the second part the influence of top-down cues for attention modeling is investigated.

  13. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

    Science.gov (United States)

    Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-03-07

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.

  14. Anticipatory Scene Representation in Preschool Children's Recall and Recognition Memory

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-01-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an…

  15. Attribute conjunctions and the part configuration advantage in object category learning.

    Science.gov (United States)

    Saiki, J; Hummel, J E

    1996-07-01

    Five experiments demonstrated that in object category learning people are particularly sensitive to conjunctions of part shapes and relative locations. Participants learned categories defined by a part's shape and color (part-color conjunctions) or by a part's shape and its location relative to another part (part-location conjunctions). The statistical properties of the categories were identical across these conditions, as were the salience of color and relative location. Participants were better at classifying objects defined by part-location conjunctions than objects defined by part-color conjunctions. Subsequent experiments revealed that this effect was not due to the specific color manipulation or the role of location per se. These results suggest that the shape bias in object categorization is at least partly due to sensitivity to part-location conjunctions and suggest a new processing constraint on category learning.

  16. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    Directory of Open Access Journals (Sweden)

    David P Crabb

    Full Text Available BACKGROUND: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or establish if patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). METHODOLOGY/PRINCIPAL FINDINGS: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Computer software was purpose written to pre-process the data, co-register it to the film clips and to quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics to controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. CONCLUSIONS/SIGNIFICANCE: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could

  17. The Category of Definiteness Indefiniteness through the Prism of Functional Approach to Language Studies

    Directory of Open Access Journals (Sweden)

    Labetova Victoria

    2016-12-01

    Full Text Available Background: Studying language phenomena with an emphasis on their functional component is topical for modern linguistics. This approach encourages thinking about various (semantic, morphological, syntactic) forms through the prism of their functional-semantic load. The analysis of the category of definiteness/indefiniteness from this point of view makes it possible to define the actual theoretical categorial model and the system of its expressive means. Purpose: The aim of the article is to define the essence and the bounds of the category of definiteness/indefiniteness in the Ukrainian language and to reveal its expressive potential. Results: The category of definiteness/indefiniteness is a universal category that is based on the symbiosis of the psychic, cognitive and lingual spheres. Its functional potential is accumulated in the cognitive and communicative fields, so it is not restricted to the spheres of definiteness or indefiniteness alone. Not only do things or objects appear definite or indefinite in the process of communication or its interpretation, but their characteristics or properties may also acquire these senses. The category of definiteness/indefiniteness correlates with other functional-semantic categories, which leads to its complicated and ramified field structure with diffuse and peripheral zones. Discussion: Studying the widened structure of the functional-semantic field of definiteness/indefiniteness and organizing all possible (regular, typical, occasional, etc.) means of its expression is a promising approach. It is important to study the category of definiteness/indefiniteness in different types of discourse to reveal the logic of its functioning.

  18. Object Attention Patches for Text Detection and Recognition in Scene Images using SIFT

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; De Marsico, Maria; Figueiredo, Mário; Fred, Ana

    2015-01-01

    Natural urban scene images contain many problems for character recognition such as luminance noise, varying font styles or cluttered backgrounds. Detecting and recognizing text in a natural scene is a difficult problem. Several techniques have been proposed to overcome these problems. These are,

  19. Oxytocin increases amygdala reactivity to threatening scenes in females.

    Science.gov (United States)

    Lischke, Alexander; Gamer, Matthias; Berger, Christoph; Grossmann, Annette; Hauenstein, Karlheinz; Heinrichs, Markus; Herpertz, Sabine C; Domes, Gregor

    2012-09-01

    The neuropeptide oxytocin (OT) is well known for its profound effects on social behavior, which appear to be mediated by an OT-dependent modulation of amygdala activity in the context of social stimuli. In humans, OT decreases amygdala reactivity to threatening faces in males, but enhances amygdala reactivity to similar faces in females, suggesting sex-specific differences in OT-dependent threat-processing. To further explore whether OT generally enhances amygdala-dependent threat-processing in females, we used functional magnetic resonance imaging (fMRI) in a randomized within-subject crossover design to measure amygdala activity in response to threatening and non-threatening scenes in 14 females following intranasal administration of OT or placebo. Participants' eye movements were recorded to investigate whether an OT-dependent modulation of amygdala activity is accompanied by enhanced exploration of salient scene features. Although OT had no effect on participants' gazing behavior, it increased amygdala reactivity to scenes depicting social and non-social threat. In females, OT may, thus, enhance the detection of threatening stimuli in the environment, potentially by interacting with gonadal steroids, such as progesterone and estrogen. Copyright © 2012 Elsevier Ltd. All rights reserved.

  20. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    David Vázquez

    2017-01-01

    Full Text Available Colorectal cancer (CRC) is the third cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search for polyps and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) aiming to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset and taking advantage of advances in semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  1. Where's Waldo? How perceptual, cognitive, and emotional brain processes cooperate during learning to categorize and find desired objects in a cluttered scene

    Directory of Open Access Journals (Sweden)

    Stephen eGrossberg

    2014-06-01

    Full Text Available The Where’s Waldo problem concerns how individuals can rapidly learn to search a scene to detect, attend, recognize, and look at a valued target object in it. This article develops the ARTSCAN Search neural model to clarify how brain mechanisms across the What and Where cortical streams are coordinated to solve the Where's Waldo problem. The What stream learns positionally-invariant object representations, whereas the Where stream controls positionally-selective spatial and action representations. The model overcomes deficiencies of these computationally complementary properties through What and Where stream interactions. Where stream processes of spatial attention and predictive eye movement control modulate What stream processes whereby multiple view- and positionally-specific object categories are learned and associatively linked to view- and positionally-invariant object categories through bottom-up and attentive top-down interactions. Gain fields control the coordinate transformations that enable spatial attention and predictive eye movements to carry out this role. What stream cognitive-emotional learning processes enable the focusing of motivated attention upon the invariant object categories of desired objects. What stream cognitive names or motivational drives can prime a view- and positionally-invariant object category of a desired target object. A volitional signal can convert these primes into top-down activations that can, in turn, prime What stream view- and positionally-specific categories. When it also receives bottom-up activation from a target, such a positionally-specific category can cause an attentional shift in the Where stream to the positional representation of the target, and an eye movement can then be elicited to foveate it. These processes describe interactions among brain regions that include visual cortex, parietal cortex, inferotemporal cortex, prefrontal cortex, amygdala, basal ganglia, and superior colliculus.

  2. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    Science.gov (United States)

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  3. Virtual Relighting of a Virtualized Scene by Estimating Surface Reflectance Properties

    OpenAIRE

    福富, 弘敦; 町田, 貴史; 横矢, 直和

    2011-01-01

    In mixed reality that merges real and virtual worlds, it is necessary to interactively manipulate the illumination conditions in a virtualized space. In general, specular reflections in a scene make it difficult to interactively manipulate the illumination conditions. Our goal is to provide an opportunity to simulate the original scene, including diffuse and specular reflections, with novel viewpoints and illumination conditions. Thus, we propose a new method for estimating diffuse and specula...

  4. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods in order to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...

  5. Can height categories replace weight categories in striking martial arts competitions? A pilot study.

    Science.gov (United States)

    Dubnov-Raz, Gal; Mashiach-Arazi, Yael; Nouriel, Ariella; Raz, Raanan; Constantini, Naama W

    2015-09-29

    In most combat sports and martial arts, athletes compete within weight categories. Disordered eating behaviors and intentional pre-competition rapid weight loss are commonly seen in this population, attributed to weight categorization. We examined if height categories can be used as an alternative to weight categories for competition, in order to protect the health of athletes. Height and weight of 169 child and adolescent competitive karate athletes were measured. Participants were divided into eleven hypothetical weight categories of 5 kg increments, and eleven hypothetical height categories of 5 cm increments. We calculated the coefficient of variation of height and weight by each division method. We also calculated how many participants fit into corresponding categories of both height and weight, and how many would shift a category if divided by height. There was a high correlation between height and weight (r = 0.91, p<0.001). The mean range of heights seen within current weight categories was reduced by 83% when participants were divided by height. When allocating athletes by height categories, 74% of athletes would shift up or down one weight category at most, compared with the current categorization method. We conclude that dividing young karate athletes by height categories significantly reduced the range of heights of competitors within the category. Such categorization would not cause athletes to compete against much heavier opponents in most cases. Using height categories as a means to reduce eating disorders in combat sports should be further examined.
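
    For readers unfamiliar with the statistic used above, the coefficient of variation is simply the standard deviation divided by the mean. A minimal sketch of computing the mean within-category coefficient of variation for either division method might look as follows; the bin edges and variable names are illustrative and not the study's actual analysis code.

    ```python
    import numpy as np

    def coefficient_of_variation(values):
        """CV = standard deviation / mean (a unitless measure of spread)."""
        values = np.asarray(values, dtype=float)
        return values.std(ddof=1) / values.mean()

    def mean_within_category_cv(measurement, category_edges):
        """Mean CV of `measurement` inside the bins defined by `category_edges`."""
        measurement = np.asarray(measurement, dtype=float)
        bins = np.digitize(measurement, category_edges)
        cvs = [coefficient_of_variation(measurement[bins == b])
               for b in np.unique(bins) if (bins == b).sum() > 1]
        return float(np.mean(cvs))

    # e.g. heights in cm grouped into hypothetical 5 cm categories:
    # mean_within_category_cv(heights, np.arange(130, 200, 5))
    ```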

  6. The cost of selective attention in category learning: Developmental differences between adults and infants

    Science.gov (United States)

    Best, Catherine A.; Yim, Hyungwook; Sloutsky, Vladimir M.

    2013-01-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6–8 months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. PMID:23773914

  7. CSIR optronic scene simulator finds real application in self-protection mechanisms of the South African Air Force

    CSIR Research Space (South Africa)

    Willers, CJ

    2010-09-01

    Full Text Available The Optronic Scene Simulator (OSSIM) is a second-generation scene simulator that creates synthetic images of arbitrary complex scenes in the visual and infrared (IR) bands, covering the 0.2 to 20 μm spectral region. These images are radiometrically...

  8. Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.

    Science.gov (United States)

    Kraft, James M; Maloney, Shannon I; Brainard, David H

    2002-01-01

    Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.

  9. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and the complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and the binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
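
    The pipeline described above (learn filters without supervision, convolve, binarize, hash the binary responses to integer codes, then histogram) can be sketched compactly. The version below is only a schematic reading of the abstract: the choice of PCA filters, the sign-based binarization and the bit-string hashing are illustrative assumptions, not necessarily the exact operations used by FBC.

    ```python
    import numpy as np

    def learn_filters(patches, n_filters):
        """Illustrative unsupervised 'filter learning': PCA on random patches.

        patches : (n_patches, patch_size * patch_size) array of flattened patches
        """
        patches = patches - patches.mean(axis=0, keepdims=True)
        _, _, vt = np.linalg.svd(patches, full_matrices=False)
        return vt[:n_filters]                      # each row is one flattened filter

    def fbc_descriptor(image, filters, patch_size):
        """Binarize filter responses, hash each location's bit vector to an integer
        code, and histogram the codes into a global scene descriptor."""
        h, w = image.shape
        n_filters = len(filters)                   # filters: (n_filters, patch_size**2)
        codes = []
        for i in range(h - patch_size + 1):
            for j in range(w - patch_size + 1):
                window = image[i:i + patch_size, j:j + patch_size].ravel()
                bits = (filters @ window > 0).astype(int)       # binarized responses
                codes.append(int("".join(map(str, bits)), 2))   # hash bits -> integer
        hist, _ = np.histogram(codes, bins=2 ** n_filters, range=(0, 2 ** n_filters))
        return hist / hist.sum()
    ```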

  10. Effect of Smoking Scenes in Films on Immediate Smoking

    Science.gov (United States)

    Shmueli, Dikla; Prochaska, Judith J.; Glantz, Stanton A.

    2010-01-01

    Background The National Cancer Institute has concluded that exposure to smoking in movies causes adolescent smoking and there are similar results for young adults. Purpose This study investigated whether exposure of young adult smokers to images of smoking in films stimulated smoking behavior. Methods 100 cigarette smokers aged 18–25 years were randomly assigned to watch a movie montage composed with or without smoking scenes and paraphernalia, followed by a 10-minute recess. The outcome was whether or not participants smoked during the recess. Data were collected and analyzed in 2008 and 2009. Results Smokers who watched the smoking scenes were more likely to smoke during the break (OR 3.06, 95% CI=1.01, 9.29). In addition to this acute effect of exposure, smokers who had seen more smoking in movies before the day of the experiment were also more likely to smoke during the break (OR 6.73; 1.00–45.25, comparing the top to bottom percentiles of exposure). Level of nicotine dependence (OR 1.71; 1.27–2.32 per point on the FTND scale), “contemplation” (OR 9.07; 1.71–47.99) and “precontemplation” (OR 7.30; 1.39–38.36) stages of change, and impulsivity (OR 1.21; 1.03–1.43) were also associated with smoking during the break. Participants who watched the montage with smoking scenes and those with a higher level of nicotine dependence were also more likely to have smoked within 30 minutes after the study. Conclusions There is a direct link between viewing smoking scenes and immediate subsequent smoking behavior. This finding suggests that individuals attempting to limit or quit smoking should be advised to refrain from or reduce their exposure to movies that contain smoking. PMID:20307802

  11. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  12. From groups to categorial algebra introduction to protomodular and mal’tsev categories

    CERN Document Server

    Bourn, Dominique

    2017-01-01

    This book gives a thorough and entirely self-contained, in-depth introduction to a specific approach to group theory, in a large sense of that word. The focus lies on the relationships which a group may have with other groups, via “universal properties”, a view of that group “from the outside”. This method of categorical algebra is actually not limited to the study of groups alone, but applies equally well to other similar categories of algebraic objects. By introducing protomodular categories and Mal’tsev categories, which form a larger class, the book shows how the structural properties of the category Gp of groups emerge from four very basic observations about the algebraic literal calculus and how, studied for themselves at the conceptual categorical level, they lead to the main striking features of the category Gp of groups. Hardly any previous knowledge of category theory is assumed, and just a little experience with standard algebraic structures such as groups and monoids. Examples and exercises...

  13. The effects of alcohol intoxication on attention and memory for visual scenes.

    Science.gov (United States)

    Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C

    2013-01-01

    This study tests the claim that alcohol intoxication narrows the focus of visual attention onto the more salient features of a visual scene. A group of alcohol-intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.

  14. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six degrees of freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects since they do not have any information about scene structure and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other hand, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system accuracy and robustness compared with the classical SLAM systems.
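
    One way to read the "constraints on 3-D points and poses" mentioned above is as an extra point-to-plane term added to the usual reprojection error in the map optimization. The sketch below is a schematic interpretation with illustrative names; it is not the authors' implementation.

    ```python
    import numpy as np

    def reprojection_residual(K, R, t, X, uv):
        """Pixel residual of a 3-D point X observed at pixel uv (pinhole model)."""
        x_cam = R @ X + t
        proj = K @ x_cam
        return proj[:2] / proj[2] - uv

    def plane_residual(plane, X):
        """Signed distance of X to the plane n.X + d = 0 (plane = [nx, ny, nz, d],
        unit normal). Penalizing it softly ties mapped points to their plane."""
        return plane[:3] @ X + plane[3]

    def total_cost(K, R, t, points, pixels, planes, w_plane=1.0):
        """Combined cost: squared reprojection errors plus weighted squared
        point-to-plane distances (one supporting plane per point, illustrative)."""
        cost = 0.0
        for X, uv, plane in zip(points, pixels, planes):
            cost += np.sum(reprojection_residual(K, R, t, X, uv) ** 2)
            cost += w_plane * plane_residual(plane, X) ** 2
        return cost
    ```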

  15. Relationship between Childhood Meal Scenes at Home Remembered by University Students and their Current Personality

    OpenAIRE

    恩村, 咲希; Onmura, Saki

    2013-01-01

    This study examines the relationship between childhood meal scenes at home that are remembered by university students and their current personality. The meal scenes are analyzed in terms of companions, conversation content, conversation frequency, atmosphere, and consideration of meals. The scale of the conversation content in childhood meal scenes was prepared on the basis of the results of a preliminary survey. The result showed that a relationship was found between personality traits and c...

  16. The Survival Processing Effect with Intentional Learning of Ad Hoc Categories

    Directory of Open Access Journals (Sweden)

    Anastasiya Savchenko

    2014-04-01

    Full Text Available Previous studies have shown that memory is adapted to remember information when it is processed in a survival context. This study investigates how procedural changes in Marinho's (2012) study might have led to her failure to replicate the survival mnemonic advantage. In two between-subjects design experiments, participants were instructed to learn words from ad hoc categories and to rate their relevance to a survival or a control scenario. No survival advantage was obtained in either experiment. The Adjusted Ratio of Clustering (ARC) scores revealed that including the category labels made the participants rely more on the category structure of the list. Various procedural aspects of the conducted experiments are discussed as possible reasons underlying the absence of the survival effect.

  17. Integration of an open interface PC scene generator using COTS DVI converter hardware

    Science.gov (United States)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.

  18. History of Reading Struggles Linked to Enhanced Learning in Low Spatial Frequency Scenes

    Science.gov (United States)

    Schneps, Matthew H.; Brockmole, James R.; Sonnert, Gerhard; Pomplun, Marc

    2012-01-01

    People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school. PMID:22558210

  19. History of reading struggles linked to enhanced learning in low spatial frequency scenes.

    Directory of Open Access Journals (Sweden)

    Matthew H Schneps

    Full Text Available People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.

  20. The Hip-Hop club scene: Gender, grinding and sex.

    Science.gov (United States)

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  1. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), High-spatial resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  2. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection

  3. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in a petri dish with various thicknesses, mixed samples with different colors and materials of fabrics, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt. The relative band depth method, in contrast, is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
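
    The first detection method above amounts to a normalized-difference index over two analyst-chosen spectral features, index = (depth - peak) / (depth + peak). A minimal sketch, with hypothetical band windows standing in for the paper's actual feature choices, is:

    ```python
    import numpy as np

    def normalized_feature_index(reflectance, wavelengths, peak_band, depth_band):
        """Index = (depth - peak) / (depth + peak), where 'peak' and 'depth' are
        mean reflectances in two chosen wavelength windows (band limits here are
        placeholders, not the paper's exact choices)."""
        reflectance = np.asarray(reflectance, dtype=float)
        wavelengths = np.asarray(wavelengths, dtype=float)

        def band_mean(lo, hi):
            mask = (wavelengths >= lo) & (wavelengths <= hi)
            return reflectance[mask].mean()

        peak = band_mean(*peak_band)
        depth = band_mean(*depth_band)
        return (depth - peak) / (depth + peak)

    # e.g. normalized_feature_index(r, wl, peak_band=(540, 580), depth_band=(650, 690))
    ```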

  4. A Bayesian Model of Category-Specific Emotional Brain Responses

    Science.gov (United States)

    Wager, Tor D.; Kang, Jian; Johnson, Timothy D.; Nichols, Thomas E.; Satpute, Ajay B.; Barrett, Lisa Feldman

    2015-01-01

    Understanding emotion is critical for a science of healthy and disordered brain function, but the neurophysiological basis of emotional experience is still poorly understood. We analyzed human brain activity patterns from 148 studies of emotion categories (2159 total participants) using a novel hierarchical Bayesian model. The model allowed us to classify which of five categories—fear, anger, disgust, sadness, or happiness—is engaged by a study with 66% accuracy (43-86% across categories). Analyses of the activity patterns encoded in the model revealed that each emotion category is associated with unique, prototypical patterns of activity across multiple brain systems including the cortex, thalamus, amygdala, and other structures. The results indicate that emotion categories are not contained within any one region or system, but are represented as configurations across multiple brain networks. The model provides a precise summary of the prototypical patterns for each emotion category, and demonstrates that a sufficient characterization of emotion categories relies on (a) differential patterns of involvement in neocortical systems that differ between humans and other species, and (b) distinctive patterns of cortical-subcortical interactions. Thus, these findings are incompatible with several contemporary theories of emotion, including those that emphasize emotion-dedicated brain systems and those that propose emotion is localized primarily in subcortical activity. They are consistent with componential and constructionist views, which propose that emotions are differentiated by a combination of perceptual, mnemonic, prospective, and motivational elements. Such brain-based models of emotion provide a foundation for new translational and clinical approaches. PMID:25853490

  5. Behavioral evidence for differences in social and non-social category learning

    Directory of Open Access Journals (Sweden)

    Lucile eGamond

    2012-08-01

    Full Text Available When meeting someone for the very first time, one spontaneously categorizes the seen person on the basis of his/her appearance. Categorization is based on the association between some physical features and category labels that can be social (character trait…) or non-social (tall, thin). Surprisingly little is known about how such associations are formed, particularly in the social domain. Here, we aimed at testing whether social and non-social category learning may be dissociated. We presented subjects with a large number of faces that had to be rated according to social or non-social labels, and induced an association between a facial feature (inter-eye distance) and the category labels using two different procedures. In a first experiment, we used a feedback procedure to reinforce the association; behavioral measures revealed an association between the physical feature manipulated and abstract non-social categories, while no evidence for an association with social labels could be found. In a second experiment, we used passive exposure to the association between physical features and labels; we obtained behavioral evidence for learning of both social and non-social categories. These results support the view of the specificity of social category learning; they suggest that social categories are best acquired through unsupervised procedures that can be considered as a simplified proxy for group transmission.

  6. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

    OpenAIRE

    Li, Yuhong; Zhang, Xiaofan; Chen, Deming

    2018-01-01

    We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace po...
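
    The dilated (atrous) back-end kernels mentioned above enlarge the receptive field without adding parameters or pooling. A naive single-channel sketch of a 2-D dilated convolution, written in plain NumPy purely for illustration (CSRNet itself is a full deep network, not this function), is:

    ```python
    import numpy as np

    def dilated_conv2d(image, kernel, dilation=2):
        """Naive single-channel 2-D dilated convolution (zero padding, stride 1).

        A k x k kernel with dilation r covers a (k + (k - 1) * (r - 1)) square
        receptive field while keeping only k * k weights."""
        kh, kw = kernel.shape
        eff_h = kh + (kh - 1) * (dilation - 1)    # effective kernel extent (rows)
        eff_w = kw + (kw - 1) * (dilation - 1)    # effective kernel extent (cols)
        pad_h, pad_w = eff_h // 2, eff_w // 2
        padded = np.pad(image, ((pad_h, pad_h), (pad_w, pad_w)))
        out = np.zeros(image.shape, dtype=float)
        for i in range(image.shape[0]):
            for j in range(image.shape[1]):
                window = padded[i:i + eff_h:dilation, j:j + eff_w:dilation]
                out[i, j] = np.sum(window * kernel)
        return out
    ```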

  7. Behind the scenes at the LHC inauguration

    CERN Document Server

    2008-01-01

    On 21 October the LHC inauguration ceremony will take place and people from all over CERN have been busy preparing. With delegations from 38 countries attending, including ministers and heads of state, the Bulletin has gone behind the scenes to see what it takes to put together an event of this scale.

  8. Desirable and undesirable future thoughts call for different scene construction processes.

    Science.gov (United States)

    de Vito, S; Neroni, M A; Gamboz, N; Della Sala, S; Brandimonte, M A

    2015-01-01

    Despite the growing interest in the ability to foresee (episodic future thinking), it is still unclear how healthy people construct possible future scenarios. We suggest that different future thoughts require different processes of scene construction. Thirty-five participants were asked to imagine desirable and less desirable future events. Imagining desirable events increased the ease of scene construction, the frequency of life scripts, the number of internal details, and the clarity of sensory and spatio-temporal information. The initial description of general personal knowledge lasted longer in undesirable than in desirable anticipations. Finally, participants were more prone to explicitly indicate autobiographical memory as the main source of their simulations of undesirable episodes, whereas they related the simulations of desirable events equally to autobiographical events or semantic knowledge. These findings show that desirable and undesirable scenarios call for different mechanisms of scene construction. The present study emphasizes that future thinking cannot be considered a monolithic entity.

  9. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Science.gov (United States)

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.

  10. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    Full Text Available In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
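
    The two-dimensional direct linear transformation referred to in both records estimates the homography between image and ground-plane coordinates, which the paper then refines by minimizing the reprojection error of all calibration points. A standard, simplified sketch of the two ingredients (not the authors' improved formulation) is:

    ```python
    import numpy as np

    def dlt_homography(src, dst):
        """Standard 2-D DLT: homography H mapping src points to dst points,
        both given as (N, 2) arrays with N >= 4, solved by SVD."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        return vt[-1].reshape(3, 3)

    def mean_reprojection_error(H, src, dst):
        """Mean distance between H-mapped src points and the dst points; this is
        the quantity a nonlinear optimizer would minimize over all calibration
        points of the composite object."""
        src_h = np.hstack([np.asarray(src, float), np.ones((len(src), 1))])
        mapped = src_h @ H.T
        mapped = mapped[:, :2] / mapped[:, 2:3]
        return float(np.mean(np.linalg.norm(mapped - np.asarray(dst, float), axis=1)))
    ```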

  11. Number 13 / Part I. Music. 3. Mad Scenes: A Warning against Overwhelming Passions

    Directory of Open Access Journals (Sweden)

    Marisi Rossella

    2017-03-01

    Full Text Available This study focuses on mad scenes in poetry and musical theatre, stressing that, according to Aristotle's theory of catharsis and the Affektenlehre, they had a pedagogical role for the audience. Some mad scenes by J.S. Bach, Handel and Mozart are briefly analyzed, highlighting their most relevant textual and musical characteristics.

  12. Incidental Learning of Sound Categories is Impaired in Developmental Dyslexia

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L.

    2015-01-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. PMID:26409017

  13. Improving Remote Sensing Scene Classification by Integrating Global-Context and Local-Object Features

    Directory of Open Access Journals (Sweden)

    Dan Zeng

    2018-05-01

    Full Text Available Recently, many researchers have been dedicated to using convolutional neural networks (CNNs) to extract global-context features (GCFs) for remote-sensing scene classification. Commonly, accurate classification of scenes requires knowledge about both the global context and local objects. However, unlike the natural images in which the objects cover most of the image, objects in remote-sensing images are generally small and decentralized. Thus, it is hard for vanilla CNNs to focus on both global context and small local objects. To address this issue, this paper proposes a novel end-to-end CNN by integrating the GCFs and local-object-level features (LOFs). The proposed network includes two branches, the local object branch (LOB) and the global semantic branch (GSB), which are used to generate the LOFs and GCFs, respectively. Then, the concatenation of features extracted from the two branches allows our method to be more discriminative in scene classification. Three challenging benchmark remote-sensing datasets were extensively experimented on; the proposed approach outperformed the existing scene classification methods and achieved state-of-the-art results for all three datasets.
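
    The fusion step described above concatenates the global-context and local-object feature vectors before classification. A toy sketch of such a head, with made-up feature dimensions and an untrained random classifier used purely to show the shapes involved, is:

    ```python
    import numpy as np

    def fuse_and_classify(gcf, lof, W, b):
        """Concatenate global-context (gcf) and local-object (lof) feature vectors
        and apply a linear classifier followed by a softmax."""
        fused = np.concatenate([gcf, lof])        # (d_g + d_l,)
        logits = W @ fused + b                    # (n_classes,)
        exp = np.exp(logits - logits.max())       # numerically stable softmax
        return exp / exp.sum()

    # Toy shapes only: 512-d global features, 256-d local features, 21 scene classes
    rng = np.random.default_rng(0)
    probs = fuse_and_classify(rng.standard_normal(512), rng.standard_normal(256),
                              rng.standard_normal((21, 768)), np.zeros(21))
    ```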

  14. The cost of selective attention in category learning: developmental differences between adults and infants.

    Science.gov (United States)

    Best, Catherine A; Yim, Hyungwook; Sloutsky, Vladimir M

    2013-10-01

    Selective attention plays an important role in category learning. However, immaturities of top-down attentional control during infancy coupled with successful category learning suggest that early category learning is achieved without attending selectively. Research presented here examines this possibility by focusing on category learning in infants (6-8months old) and adults. Participants were trained on a novel visual category. Halfway through the experiment, unbeknownst to participants, the to-be-learned category switched to another category, where previously relevant features became irrelevant and previously irrelevant features became relevant. If participants attend selectively to the relevant features of the first category, they should incur a cost of selective attention immediately after the unknown category switch. Results revealed that adults demonstrated a cost, as evidenced by a decrease in accuracy and response time on test trials as well as a decrease in visual attention to newly relevant features. In contrast, infants did not demonstrate a similar cost of selective attention as adults despite evidence of learning both to-be-learned categories. Findings are discussed as supporting multiple systems of category learning and as suggesting that learning mechanisms engaged by adults may be different from those engaged by infants. Copyright © 2013 Elsevier Inc. All rights reserved.

  15. The impact of category structure and training methodology on learning and generalizing within-category representations.

    Science.gov (United States)

    Ell, Shawn W; Smith, David B; Peralta, Gabriela; Hélie, Sébastien

    2017-08-01

    When interacting with categories, representations focused on within-category relationships are often learned, but the conditions promoting within-category representations and their generalizability are unclear. We report the results of three experiments investigating the impact of category structure and training methodology on the learning and generalization of within-category representations (i.e., correlational structure). Participants were trained on either rule-based or information-integration structures using classification (Is the stimulus a member of Category A or Category B?), concept (e.g., Is the stimulus a member of Category A, Yes or No?), or inference (infer the missing component of the stimulus from a given category) and then tested on either an inference task (Experiments 1 and 2) or a classification task (Experiment 3). For the information-integration structure, within-category representations were consistently learned, could be generalized to novel stimuli, and could be generalized to support inference at test. For the rule-based structure, extended inference training resulted in generalization to novel stimuli (Experiment 2) and inference training resulted in generalization to classification (Experiment 3). These data help to clarify the conditions under which within-category representations can be learned. Moreover, these results make an important contribution in highlighting the impact of category structure and training methodology on the generalization of categorical knowledge.

  16. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    Science.gov (United States)

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while they explored everyday scenes which contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    Science.gov (United States)

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  18. Tachistoscopic illumination and masking of real scenes.

    Science.gov (United States)

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.

  19. Standoff alpha radiation detection for hot cell imaging and crime scene investigation

    Science.gov (United States)

    Kerst, Thomas; Sand, Johan; Ihantola, Sakari; Peräjärvi, Kari; Nicholl, Adrian; Hrnecek, Erich; Toivonen, Harri; Toivonen, Juha

    2018-02-01

    This paper presents the remote detection of alpha contamination in a nuclear facility. Alpha-active material in a shielded nuclear radiation containment chamber has been localized by optical means. Furthermore, sources of radiation danger have been identified in a staged crime scene setting. For this purpose, an electron-multiplying charge-coupled device camera was used to capture photons generated by alpha-induced air scintillation (radioluminescence). The detected radioluminescence was superimposed with a regular photograph to reveal the origin of the light and thereby the alpha radioactive material. The experimental results show that standoff detection of alpha contamination is a viable tool in radiation threat detection. Furthermore, the radioluminescence spectrum in the air is spectrally analyzed. Possibilities of camera-based alpha threat detection under various background lighting conditions are discussed.
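
    The superposition step described above (overlaying the radioluminescence image on a regular photograph) can be sketched in a few lines. The code below is a hypothetical NumPy example that assumes the two images are already co-registered and of equal size; it is not the instrument software used in the study.

```python
# Sketch: blend a normalized radioluminescence (alpha-scintillation) image onto a photograph.
import numpy as np

def overlay_alpha_map(photo_rgb, radioluminescence, alpha=0.6):
    """photo_rgb: (H, W, 3) floats in [0, 1]; radioluminescence: (H, W) camera counts."""
    norm = radioluminescence / (radioluminescence.max() + 1e-12)
    heat = np.stack([norm, np.zeros_like(norm), 1.0 - norm], axis=-1)  # blue-to-red ramp
    mask = (norm > 0.1)[..., None]  # blend only where the scintillation signal is appreciable
    return np.where(mask, (1 - alpha) * photo_rgb + alpha * heat, photo_rgb)
```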

  20. Range and intensity vision for rock-scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, SG

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  1. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Science.gov (United States)

    Levy-Gigi, Einat; Kéri, Szabolcs

    2012-01-01

    Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  2. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD.

    Directory of Open Access Journals (Sweden)

    Einat Levy-Gigi

    Full Text Available Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  3. Using 3D range cameras for crime scene documentation and legal medicine

    Science.gov (United States)

    Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco

    2009-01-01

    Crime scene documentation and legal medicine analysis are part of a very complex process which is aimed at identifying the offender starting from the collection of the evidence at the scene. This part of the investigation is very critical, since the crime scene is extremely volatile, and once it is removed, it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid to perform the documentation step, since (i) the measurement is contactless and (ii) the process required to edit and model the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in the experiments.

  4. Making a scene: exploring the dimensions of place through Dutch popular music, 1960-2010

    NARCIS (Netherlands)

    Brandellero, A.; Pfeffer, K.

    2015-01-01

    This paper applies a multi-layered conceptualisation of place to the analysis of particular music scenes in the Netherlands, 1960-2010. We focus on: the clustering of music-related activities in locations; the delineation of spatially tied music scenes, based on a shared identity, reproduced over

  5. More than words: Adults learn probabilities over categories and relationships between them.

    Science.gov (United States)

    Hudson Kam, Carla L

    2009-04-01

    This study examines whether human learners can acquire statistics over abstract categories and their relationships to each other. Adult learners were exposed to miniature artificial languages containing variation in the ordering of the Subject, Object, and Verb constituents. Different orders (e.g. SOV, VSO) occurred in the input with different frequencies, but the occurrence of one order versus another was not predictable. Importantly, the language was constructed such that participants could only match the overall input probabilities if they were tracking statistics over abstract categories, not over individual words. At test, participants reproduced the probabilities present in the input with a high degree of accuracy. Closer examination revealed that learners were matching the probabilities associated with individual verbs rather than the category as a whole. However, individual nouns had no impact on word orders produced. Thus, participants learned the probabilities of a particular ordering of the abstract grammatical categories Subject and Object associated with each verb. Results suggest that statistical learning mechanisms are capable of tracking relationships between abstract linguistic categories in addition to individual items.

  6. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education.

    Science.gov (United States)

    Castaldelli-Maia, João Mauricio; Oliveira, Hercílio Pereira; Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2012-01-01

    Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). Qualitative study at two universities in the state of São Paulo. Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests, and scenes graded above 8 were defined as "quality scenes". Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  7. Encoding tasks dissociate the effects of divided attention on category-cued recall and category-exemplar generation.

    Science.gov (United States)

    Parker, Andrew; Dagnall, Neil; Munley, Gary

    2012-01-01

    The combined effects of encoding tasks and divided attention upon category-exemplar generation and category-cued recall were examined. Participants were presented with pairs of words each comprising a category name and potential example of that category. They were then asked to indicate either (i) their liking for both of the words or (ii) if the exemplar was a member of the category. It was found that divided attention reduced performance on the category-cued recall task under both encoding conditions. However, performance on the category-exemplar generation task remained invariant across the attention manipulation following the category judgment task. This provides further evidence that the processes underlying performance on conceptual explicit and implicit memory tasks can be dissociated, and that the intentional formation of category-exemplar associations attenuates the effects of divided attention on category-exemplar generation.

  8. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the task is difficult with HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, the convolutional neural network is introduced to learn and characterize the local features at different scales. Then, learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is achieved through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
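
    To make the visual-word step concrete, here is a small, hypothetical sketch in Python: local patch features are quantized into visual words with k-means, and a simple horizontal co-occurrence correlogram is built over the patch grid. The adaptive vector quantization and multiscale deep features of the published model are deliberately omitted.

```python
# Sketch: visual words via k-means, followed by a toy word co-occurrence correlogram.
import numpy as np
from sklearn.cluster import KMeans

def word_correlogram(patch_feats, grid_shape, n_words=32, max_dist=2):
    """patch_feats: (n_patches, d) local features laid out row-major on grid_shape."""
    words = KMeans(n_clusters=n_words, n_init=10).fit_predict(patch_feats)
    grid = words.reshape(grid_shape)
    corr = np.zeros((max_dist, n_words, n_words))
    h, w = grid_shape
    for d in range(1, max_dist + 1):
        for i in range(h):
            for j in range(w - d):  # horizontal pairs at distance d (vertical omitted)
                corr[d - 1, grid[i, j], grid[i, j + d]] += 1
    return corr / (corr.sum(axis=(1, 2), keepdims=True) + 1e-12)

# Usage: descriptor = word_correlogram(np.random.rand(64, 128), (8, 8)).ravel()
```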

  9. Computing color categories

    NARCIS (Netherlands)

    Yendrikhovskij, S.N.; Rogowitz, B.E.; Pappas, T.N.

    2000-01-01

    This paper is an attempt to develop a coherent framework for understanding, modeling, and computing color categories. The main assumption is that the structure of color category systems originates from the statistical structure of the perceived color environment. This environment can be modeled as

  10. Incidental learning of sound categories is impaired in developmental dyslexia.

    Science.gov (United States)

    Gabay, Yafit; Holt, Lori L

    2015-12-01

    Developmental dyslexia is commonly thought to arise from specific phonological impairments. However, recent evidence is consistent with the possibility that phonological impairments arise as symptoms of an underlying dysfunction of procedural learning. The nature of the link between impaired procedural learning and phonological dysfunction is unresolved. Motivated by the observation that speech processing involves the acquisition of procedural category knowledge, the present study investigates the possibility that procedural learning impairment may affect phonological processing by interfering with the typical course of phonetic category learning. The present study tests this hypothesis while controlling for linguistic experience and possible speech-specific deficits by comparing auditory category learning across artificial, nonlinguistic sounds among dyslexic adults and matched controls in a specialized first-person shooter videogame that has been shown to engage procedural learning. Nonspeech auditory category learning was assessed online via within-game measures and also with a post-training task involving overt categorization of familiar and novel sound exemplars. Each measure reveals that dyslexic participants do not acquire procedural category knowledge as effectively as age- and cognitive-ability matched controls. This difference cannot be explained by differences in perceptual acuity for the sounds. Moreover, poor nonspeech category learning is associated with slower phonological processing. Whereas phonological processing impairments have been emphasized as the cause of dyslexia, the current results suggest that impaired auditory category learning, general in nature and not specific to speech signals, could contribute to phonological deficits in dyslexia with subsequent negative effects on language acquisition and reading. Implications for the neuro-cognitive mechanisms of developmental dyslexia are discussed. Copyright © 2015 Elsevier Ltd. All rights

  11. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
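
    The "model-predicted saliency at fixation locations" analysis mentioned above can be sketched as follows. This is a generic illustration in NumPy, not the authors' exact procedure: saliency values sampled at fixated pixels are compared with values at randomly drawn control locations.

```python
# Sketch: compare saliency at fixated pixels against randomly sampled control pixels.
import numpy as np

def saliency_at_fixations(saliency_map, fixations, n_controls=1000, seed=0):
    """saliency_map: (H, W) array; fixations: (n, 2) integer (row, col) coordinates."""
    rng = np.random.default_rng(seed)
    h, w = saliency_map.shape
    fix_vals = saliency_map[fixations[:, 0], fixations[:, 1]]
    ctrl_vals = saliency_map[rng.integers(0, h, n_controls), rng.integers(0, w, n_controls)]
    # A larger fixation-minus-control difference indicates closer reliance on low-level saliency.
    return fix_vals.mean(), ctrl_vals.mean()
```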

  12. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence to estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
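
    A simplified version of this scene-based flat-field idea can be sketched with integer-pixel phase correlation; the published method uses sub-pixel registration and more careful normalization, so the code below is only an illustrative approximation with assumed inputs.

```python
# Sketch: estimate a flat-field image from displaced frames of the same scene.
import numpy as np

def phase_correlation_shift(a, b):
    """Return the (row, col) shift to apply to b (via np.roll) so it aligns with a."""
    F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    return tuple(int(i) if i <= s // 2 else int(i - s) for i, s in zip(idx, corr.shape))

def extract_flat_field(frames):
    """frames: list of 2-D arrays imaging the same scene with small displacements."""
    ref = frames[0]
    shifts = [phase_correlation_shift(ref, f) for f in frames]
    # Align every frame to the reference and average to estimate the static scene.
    scene = np.mean([np.roll(f, s, axis=(0, 1)) for f, s in zip(frames, shifts)], axis=0)
    # Shift the scene estimate back under each frame and divide it out, leaving
    # per-frame flat-field estimates in detector coordinates; then average them.
    flats = [f / np.clip(np.roll(scene, (-s[0], -s[1]), axis=(0, 1)), 1e-6, None)
             for f, s in zip(frames, shifts)]
    flat = np.mean(flats, axis=0)
    return flat / flat.mean()  # normalized flat-field image
```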

  13. Category I structures program

    International Nuclear Information System (INIS)

    Endebrock, E.G.; Dove, R.C.

    1981-01-01

    The objective of the Category I Structure Program is to supply experimental and analytical information needed to assess the structural capacity of Category I structures (excluding the reactor containment building). Because the shear wall is a principal element of a Category I structure, and because relatively little experimental information is available on shear walls, it was selected as the test element for the experimental program. The large load capacities of shear walls in Category I structures dictate that the experimental tests be conducted on small-size shear wall structures that incorporate the general construction details and characteristics of as-built shear walls.

  14. The perception of naturalness correlates with low-level visual features of environmental scenes.

    Directory of Open Access Journals (Sweden)

    Marc G Berman

    Full Text Available Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
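
    The analysis pipeline described above (low-level image statistics predicting perceived naturalness) can be approximated with a short sketch. The feature definitions below are illustrative stand-ins, not the authors' exact measures or classifier, and the usage comment assumes image arrays and binary naturalness labels are available.

```python
# Sketch: simple low-level features and a classifier for perceived naturalness.
import numpy as np
from sklearn.linear_model import LogisticRegression

def lowlevel_features(rgb):
    """rgb: (H, W, 3) float array in [0, 1]."""
    gray = rgb.mean(axis=2)
    gy, gx = np.gradient(gray)
    edge_density = np.mean(np.hypot(gx, gy) > 0.1)  # density of contrast changes
    mx, mn = rgb.max(axis=2), rgb.min(axis=2)
    saturation = np.mean((mx - mn) / (mx + 1e-6))   # average color saturation
    hue = np.arctan2(np.sqrt(3) * (rgb[..., 1] - rgb[..., 2]),
                     2 * rgb[..., 0] - rgb[..., 1] - rgb[..., 2])
    p = np.histogram(hue, bins=16)[0].astype(float)
    p /= p.sum()
    hue_diversity = -np.sum(p * np.log(p + 1e-12))  # entropy of the hue histogram
    return np.array([edge_density, saturation, hue_diversity])

# Hypothetical usage, assuming `images` (list of arrays) and `is_natural` (0/1 labels):
# X = np.stack([lowlevel_features(im) for im in images])
# clf = LogisticRegression().fit(X, is_natural)
```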

  15. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    Science.gov (United States)

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including everyday objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually matched triads of scenes in which one critical object was replaced; the critical objects varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings from a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.
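
    The "linear-effect model" reported above can be illustrated with a generic regression sketch. The code uses synthetic data and statsmodels; the predictor names and the simple OLS form are assumptions for illustration, not the authors' exact (likely mixed-effects) specification.

```python
# Sketch: test whether affective-motivational predictors explain fixations
# over and above visual characteristics of the critical object.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "salience": rng.random(n), "size": rng.random(n), "eccentricity": rng.random(n),
    "arousal": rng.random(n), "valence": rng.random(n), "motivation": rng.random(n),
})
# Synthetic outcome: fixation count driven by salience and motivation plus noise.
df["n_fixations"] = 2 + 1.5 * df["salience"] + 0.8 * df["motivation"] + rng.normal(0, 0.5, n)

model = smf.ols("n_fixations ~ salience + size + eccentricity + arousal + valence + motivation",
                data=df).fit()
print(model.params)  # coefficients for each predictor
```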

  16. Effect of Viewing Smoking Scenes in Motion Pictures on Subsequent Smoking Desire in Audiences in South Korea.

    Science.gov (United States)

    Sohn, Minsung; Jung, Minsoo

    2017-07-17

    In the modern era of heightened awareness of public health, smoking scenes in movies remain relatively free from public monitoring. The effect of smoking scenes in movies on the promotion of viewers' smoking desire remains unknown. The study aimed to explore whether exposure of adolescent smokers to images of smoking in films could stimulate smoking behavior. Data were derived from a national Web-based sample survey of 748 Korean high-school students. Participants aged 16-18 years were randomly assigned to watch three short video clips with or without smoking scenes. After adjusting covariates using propensity score matching, paired sample t test and logistic regression analyses compared the difference in smoking desire before and after exposure of participants to smoking scenes. For male adolescents, cigarette craving was significantly higher in those who watched movies with smoking scenes than in the control group who did not view smoking scenes (t307.96 = 2.066). These findings suggest that monitoring smoking scenes in films and assigning a smoking-related screening grade to films is warranted. ©Minsung Sohn, Minsoo Jung. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 17.07.2017.
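
    The analysis steps named in this record (propensity score matching followed by a paired comparison) can be sketched generically. The code below is a hypothetical illustration with assumed variable names, not the study's actual analysis script.

```python
# Sketch: propensity-score matching of exposed vs. unexposed participants,
# followed by a paired t-test on matched change scores.
import numpy as np
from scipy import stats
from sklearn.linear_model import LogisticRegression

def match_and_test(covariates, exposed, desire_pre, desire_post):
    """covariates: (n, k); exposed: binary (n,); desire_pre/post: (n,) ratings."""
    ps = LogisticRegression(max_iter=1000).fit(covariates, exposed).predict_proba(covariates)[:, 1]
    treated = np.where(exposed == 1)[0]
    controls = np.where(exposed == 0)[0]
    # 1:1 nearest-neighbour matching on the propensity score (with replacement).
    nearest = np.argmin(np.abs(ps[controls][None, :] - ps[treated][:, None]), axis=1)
    matched_controls = controls[nearest]
    change_treated = desire_post[treated] - desire_pre[treated]
    change_control = desire_post[matched_controls] - desire_pre[matched_controls]
    return stats.ttest_rel(change_treated, change_control)
```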

  17. Position-Invariant Robust Features for Long-Term Recognition of Dynamic Outdoor Scenes

    Science.gov (United States)

    Kawewong, Aram; Tangruamsub, Sirinart; Hasegawa, Osamu

    A novel Position-Invariant Robust Feature, designated as PIRF, is presented to address the problem of highly dynamic scene recognition. The PIRF is obtained by identifying existing local features (i.e., SIFT) that have a wide baseline visibility within a place (one place contains more than one sequential image). These wide-baseline visible features are then represented as a single PIRF, which is computed as an average of all descriptors associated with the PIRF. Particularly, PIRFs are robust against highly dynamic changes in a scene: a single PIRF can be matched correctly against many features from many dynamic images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching an individual PIRF to a set of features from test images, with subsequent majority voting to identify a place with the highest matched PIRF. The PIRF system is trained and tested on 2000+ outdoor omnidirectional images and on COLD datasets. Despite its simplicity, PIRF offers a markedly better rate of recognition for dynamic outdoor scenes (ca. 90%) than the use of other features. Additionally, a robot navigation system based on PIRF (PIRF-Nav) can outperform other incremental topological mapping methods in terms of time (70% less) and memory. The number of PIRFs can be reduced further to reduce the time while retaining high accuracy, which makes it suitable for long-term recognition and localization.
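
    The averaging idea behind PIRF can be sketched with OpenCV's SIFT implementation: keep only features that are matched across several consecutive images of a place and represent each such track by its mean descriptor. The matching strategy and thresholds below are simplified assumptions, not the published algorithm.

```python
# Sketch: build PIRF-like descriptors by averaging SIFT descriptors tracked
# across consecutive grayscale images of the same place.
import cv2
import numpy as np

def pirf_descriptors(images, min_track_len=3, ratio=0.75):
    """images: list of grayscale uint8 images of one place, in temporal order."""
    sift = cv2.SIFT_create()
    bf = cv2.BFMatcher(cv2.NORM_L2)
    frames = [sift.detectAndCompute(im, None)[1] for im in images]  # descriptors only
    tracks = [[d] for d in frames[0]] if frames[0] is not None else []
    for desc in frames[1:]:
        if not tracks or desc is None:
            continue
        query = np.float32([t[-1] for t in tracks])
        matches = bf.knnMatch(query, np.float32(desc), k=2)
        for track, m in zip(tracks, matches):
            if len(m) == 2 and m[0].distance < ratio * m[1].distance:
                track.append(desc[m[0].trainIdx])
    # A PIRF is the mean descriptor of a track visible across enough consecutive frames.
    return np.array([np.mean(t, axis=0) for t in tracks if len(t) >= min_track_len])
```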

  18. The probability of object-scene co-occurrence influences object identification processes.

    Science.gov (United States)

    Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B

    2017-07-01

    Contextual information allows the human brain to make predictions about the identity of objects that might be seen, and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect, suggesting that the brain stores information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition, as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (rely more on top-down effects) or null (rely more on bottom-up influences). The posterior positivity could index error monitoring aimed to ensure that no false information is integrated into mental representations of the world.

  19. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  20. Stakeholders analysis on criteria for protected areas management categories in Peninsular Malaysia

    Science.gov (United States)

    Hashim, Z.; Abdullah, S. A.; Nor, S. Md.

    2017-10-01

    The establishment of protected areas has always been associated with a strategy to conserve biodiversity. Well-managed protected areas not only protect ecosystems and threatened species but also provide benefits to the public. This requires sound management practices through the application of protected area management categories, which can be seen as tools for the planning, establishment and administration of protected areas as well as for regulating the activities within them. However, in Peninsular Malaysia the protected area management categories have been implemented on an ad-hoc basis, without recognising the importance of criteria grounded in local values. Thus, an investigation was undertaken to establish the criteria used in applying the protected area management categories in Peninsular Malaysia. The outcomes revealed the significance of social, environmental and economic criteria in establishing the protected area management categories in Peninsular Malaysia.

  1. The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

    Science.gov (United States)

    Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel

    2013-01-01

    This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…

  2. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under quality variations (e.g., various scales and noise levels). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification in different scales and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs for producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of RS scenes by directly propagating the label information to fully-connected layers. A joint loss is used to minimize the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. The experiments conducted on three different-scale RS scene datasets show that the DSFATN method has achieved excellent performance and great robustness in different scales and noise conditions. It obtains classification accuracy of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.
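
    The joint loss mentioned above (a softmax classification loss plus an anti-noise constraint) can be written compactly. The sketch below, in PyTorch, treats the anti-noise term as a feature-consistency penalty between clean and noisy views; the weighting and exact form are assumptions, not the published formulation.

```python
# Sketch: classification loss plus an anti-noise feature-consistency term.
import torch
import torch.nn.functional as F

def joint_loss(logits_clean, logits_noisy, feats_clean, feats_noisy, labels, lam=0.1):
    cls = F.cross_entropy(logits_clean, labels) + F.cross_entropy(logits_noisy, labels)
    anti_noise = F.mse_loss(feats_noisy, feats_clean.detach())  # consistency constraint
    return cls + lam * anti_noise
```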

  3. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education

    Directory of Open Access Journals (Sweden)

    João Mauricio Castaldelli-Maia

    Full Text Available CONTEXT AND OBJECTIVES: Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). DESIGN AND SETTING: Qualitative study at two universities in the state of São Paulo. METHODS: Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests, and scenes graded above 8 were defined as "quality scenes". RESULTS: Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. CONCLUSIONS: We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  4. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR) equipped on a near-space platform is more suitable for sustained large-scene imaging compared with its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP) is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  5. Symbiont modulates expression of specific gene categories in Angomonas deanei

    Directory of Open Access Journals (Sweden)

    Luciana Loureiro Penha

    Full Text Available Trypanosomatids are parasites that cause disease in humans, animals, and plants. Most are non-pathogenic and some harbor a symbiotic bacterium. Endosymbiosis is part of the evolutionary process of vital cell functions such as respiration and photosynthesis. Angomonas deanei is an example of a symbiont-containing trypanosomatid. In this paper, we sought to investigate how symbionts influence host cells by characterising and comparing the transcriptomes of the symbiont-containing A. deanei (wild type) and the symbiont-free aposymbiotic strains. The comparison revealed that the presence of the symbiont modulates several differentially expressed genes. Empirical analysis of differential gene expression showed that 216 of the 7625 modulated genes were significantly changed. Finally, gene set enrichment analysis revealed that the largest categories of genes that were downregulated in the absence of the symbiont were those involved in oxidation-reduction process, ATP hydrolysis coupled proton transport and glycolysis. In contrast, among the upregulated gene categories were those involved in proteolysis, microtubule-based movement, and cellular metabolic process. Our results provide valuable information for dissecting the mechanism of endosymbiosis in A. deanei.

  6. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  7. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and that the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  8. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  9. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    Science.gov (United States)

    Falomir, Zoe; Kluth, Thomas

    2018-05-01

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these qualitative descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey test was carried out with human participants who openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed, and taking this into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.
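
    Emitting qualitative spatial relations as Prolog facts, as described above, can be sketched in a few lines. The relation definitions below (left_of, in_front_of, near) are simplified stand-ins chosen for illustration, not the descriptor set used by QSn3D.

```python
# Sketch: derive toy qualitative spatial relations from 3-D object positions
# and print them as Prolog facts for further reasoning.
from dataclasses import dataclass

@dataclass
class SceneObject:
    name: str
    x: float  # metres to the right of the camera
    y: float  # metres in front of the camera
    z: float  # metres above the floor

def qualitative_relations(objects, near_thresh=1.0):
    facts = []
    for a in objects:
        for b in objects:
            if a is b:
                continue
            if a.x < b.x:
                facts.append(f"left_of({a.name}, {b.name}).")
            if a.y < b.y:
                facts.append(f"in_front_of({a.name}, {b.name}).")
            dist = ((a.x - b.x) ** 2 + (a.y - b.y) ** 2 + (a.z - b.z) ** 2) ** 0.5
            if dist < near_thresh:
                facts.append(f"near({a.name}, {b.name}).")
    return facts

print("\n".join(qualitative_relations([SceneObject("chair", 0.2, 1.5, 0.0),
                                       SceneObject("table", 0.9, 1.6, 0.0)])))
```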

  10. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes.

    Science.gov (United States)

    Pasqualotto, Achille; Finucane, Ciara M; Newell, Fiona N

    2013-09-01

    We investigated the effects of indirect, ambient visual information on haptic spatial memory. Using touch only, participants first learned an array of objects arranged in a scene and were subsequently tested on their recognition of that scene which was always hidden from view. During haptic scene exploration, participants could either see the surrounding room or were blindfolded. We found a benefit in haptic memory performance only when ambient visual information was available in the early stages of the task but not when participants were initially blindfolded. Specifically, when ambient visual information was available a benefit on performance was found in a subsequent block of trials during which the participant was blindfolded (Experiment 1), and persisted over a delay of one week (Experiment 2). However, we found that the benefit for ambient visual information did not transfer to a novel environment (Experiment 3). In Experiment 4 we further investigated the nature of the visual information that improved haptic memory and found that geometric information about a surrounding (virtual) room rather than isolated object landmarks, facilitated haptic scene memory. Our results suggest that vision improves haptic memory for scenes by providing an environment-centred, allocentric reference frame for representing object location through touch. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. Semantic category interference in overt picture naming: sharpening current density localization by PCA.

    Science.gov (United States)

    Maess, Burkhard; Friederici, Angela D; Damian, Markus; Meyer, Antje S; Levelt, Willem J M

    2002-04-01

    The study investigated the neuronal basis of the retrieval of words from the mental lexicon. The semantic category interference effect was used to locate lexical retrieval processes in time and space. This effect reflects the finding that, for overt naming, volunteers are slower when naming pictures out of a sequence of items from the same semantic category than from different categories. Participants named pictures blockwise either in the context of same- or mixed-category items while the brain response was registered using magnetoencephalography (MEG). Fifteen out of 20 participants showed longer response latencies in the same-category compared to the mixed-category condition. Event-related MEG signals for the participants demonstrating the interference effect were submitted to a current source density (CSD) analysis. As a new approach, a principal component analysis was applied to decompose the grand average CSD distribution into spatial subcomponents (factors). The spatial factor indicating left temporal activity revealed significantly different activation for the same-category compared to the mixed-category condition in the time window between 150 and 225 msec post picture onset. These findings indicate a major involvement of the left temporal cortex in the semantic interference effect. As this effect has been shown to take place at the level of lexical selection, the data suggest that the left temporal cortex supports processes of lexical retrieval during production.
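
    The spatial decomposition step described above (PCA applied to the grand-average current source density distribution) can be illustrated with a generic sketch. The array layout and factor count are assumptions about a typical MEG analysis, not the authors' exact pipeline.

```python
# Sketch: decompose grand-average CSD maps into spatial factors with PCA.
import numpy as np
from sklearn.decomposition import PCA

def spatial_factors(csd, n_factors=5):
    """csd: (n_conditions, n_times, n_sensors) grand-average CSD values."""
    n_cond, n_times, n_sensors = csd.shape
    pca = PCA(n_components=n_factors)
    # Spatial PCA: observations are (condition, time) samples, variables are sensors.
    scores = pca.fit_transform(csd.reshape(-1, n_sensors))
    return pca.components_, scores.reshape(n_cond, n_times, n_factors)

# Hypothetical usage: compare conditions on the factor scores within 150-225 ms,
# e.g. scores[same_cat, t_window, k] vs. scores[mixed_cat, t_window, k] for factor k.
# factors, scores = spatial_factors(csd)
```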

  12. Panoramic Search: The Interaction of Memory and Vision in Search through a Familiar Scene

    Science.gov (United States)

    Oliva, Aude; Wolfe, Jeremy M.; Arsenio, Helga C.

    2004-01-01

    How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or…

  13. Combination of Morphological Operations with Structure based Partitioning and grouping for Text String detection from Natural Scenes

    OpenAIRE

    Vyankatesh V. Rampurkar; Gyankamal J. Chhajed

    2014-01-01

    Text information in natural scene images serves as an important clue for many image-based applications such as scene understanding, content-based image retrieval, assistive direction-finding and automatic geocoding. Nowadays, different approaches such as contour-based, image binarization and enhancement-based, gradient-based and colour reduction-based techniques can be used for text detection in natural scenes. In this paper the combination of morphological operations with structure based part...

  14. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

    In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods learning features from single frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to ...

  15. How context information and target information guide the eyes from the first epoch of search in real-world scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2014-02-11

    This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

  16. Blocking in Category Learning

    OpenAIRE

    Bott, Lewis; Hoffman, Aaron B.; Murphy, Gregory L.

    2007-01-01

    Many theories of category learning assume that learning is driven by a need to minimize classification error. When there is no classification error, therefore, learning of individual features should be negligible. We tested this hypothesis by conducting three category learning experiments adapted from an associative learning blocking paradigm. Contrary to an error-driven account of learning, participants learned a wide range of information when they learned about categories, and blocking effe...

  17. The lifesaving potential of specialized on-scene medical support for urban tactical operations.

    Science.gov (United States)

    Metzger, Jeffery C; Eastman, Alexander L; Benitez, Fernando L; Pepe, Paul E

    2009-01-01

    Since the 1980s, the specialized field of tactical medicine has evolved with growing support from numerous law-enforcement and medical organizations. On-scene backup from tactical emergency medical support (TEMS) providers has not only permitted more immediate advanced medical aid to injured officers, victims, bystanders, and suspects, but also allows for rapid after-incident medical screening or minor treatments that can obviate an unnecessary transport to an emergency department. The purpose of this report is to document one very explicit benefit of TEMS deployment, namely, a situation in which a police officer's life was saved by the routine on-scene presence of specialized TEMS physicians. In this specific case, a police officer was shot in the anterior neck during a law-enforcement operation and became moribund with massive hemorrhage and compromised airway. Two TEMS physicians, who had been integrated into the tactical law-enforcement team, were on scene, controlled the hemorrhage, and provided a surgical airway. By the time of arrival at the hospital, the patient had begun purposeful movements and, within 12 hours, was alert and oriented. Considering the rapid decline in the patient's condition, it was later deemed by quality assurance reviewers that the on-scene presence of these TEMS providers was lifesaving.

  18. The effects of scene characteristics, resolution, and compression on the ability to recognize objects in video

    Science.gov (United States)

    Dumke, Joel; Ford, Carolyn G.; Stange, Irena W.

    2011-03-01

    Public safety practitioners increasingly use video for object recognition tasks. These end users need guidance regarding how to identify the level of video quality necessary for their application. The quality of video used in public safety applications must be evaluated in terms of its usability for specific tasks performed by the end user. The Public Safety Communication Research (PSCR) project performed a subjective test as one of the first in a series to explore visual intelligibility in video: a user's ability to recognize an object in a video stream given various conditions. The test sought to measure the effects on visual intelligibility of three scene parameters (target size, scene motion, scene lighting), several compression rates, and two resolutions (VGA (640x480) and CIF (352x288)). Seven similarly sized objects were used as targets in nine sets of near-identical source scenes, where each set was created using a different combination of the parameters under study. Viewers were asked to identify the objects via multiple choice questions. Objective measurements were performed on each of the scenes, and the ability of the measurement to predict visual intelligibility was studied.

  19. Language categories in Russian morphology

    OpenAIRE

    زهرایی زهرایی

    2009-01-01

    When studying Russian morphology, one can distinguish two categories. These categories are “grammatical” and “lexico-grammatical”. Grammatical categories can be specified through a series of grammatical features of words. Considering different criteria, Russian grammarians and linguists divide grammatical categories of their language into different types. In determining lexico-grammatical types, in addition to a series of grammatical features, they also consider a series of lexico-semantic fe...

  20. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    Science.gov (United States)

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  1. Concurrent Dynamics of Category Learning and Metacognitive Judgments

    Directory of Open Access Journals (Sweden)

    Valnea Žauhar

    2016-09-01

    In two experiments, we examined the correspondence between the dynamics of metacognitive judgments and classification accuracy when participants were asked to learn category structures of different levels of complexity, i.e., to learn tasks of types I, II and III according to Shepard, Hovland, and Jenkins (1961). The stimuli were simple geometrical figures varying in the following three dimensions: color, shape, and size. In Experiment 1, we found moderate positive correlations between confidence and accuracy in task type II and weaker correlations in task types I and III. Moreover, trend analysis of the backward learning curves revealed a non-linear trend in accuracy for all three task types, but the same trend was observed in confidence only for task types I and II, not for task type III. In Experiment 2, we found that feeling-of-warmth judgments (FOWs) showed moderate positive correlations with accuracy in all task types. Trend analysis revealed a similar non-linear component in accuracy and metacognitive judgments in task types II and III but not in task type I. Our results suggest that FOWs are a more sensitive measure of the progress of learning than confidence because FOWs capture global knowledge about the category structure, while confidence judgments are given at the level of an individual exemplar.

  2. Organizational Categories as Viewing Categories

    OpenAIRE

    Mik-Meyer, Nanna

    2005-01-01

    This paper explores how two Danish rehabilitation organizations' textual guidelines for the assessment of clients' personality traits influence the actual evaluation of clients. The analysis will show how staff members produce institutional identities corresponding to organizational categories, which very often have little or no relevance for the clients evaluated. The goal of the article is to demonstrate how the institutional complex that frames the work of the organizations produces the client ...

  3. Assembling a game development scene? Uncovering Finland’s largest demo party

    Directory of Open Access Journals (Sweden)

    Heikki Tyni

    2014-03-01

    The study takes a look at Assembly, a large-scale LAN and demo party founded in 1992 and organized annually in Helsinki, Finland. Assembly is used as a case study to explore the relationship between computer hobbyism – including gaming, demoscene and other related activities – and professional game development. Drawing from expert interviews, a visitor query and news coverage, we ask what functions Assembly has served for the scene in general, and for the formation and fostering of the Finnish game industry in particular. The conceptual contribution of the paper is constructed around the interrelated concepts of scene, technicity and gaming capital.

  4. Image policy, subjectivation and argument scenes

    Directory of Open Access Journals (Sweden)

    Ângela Cristina Salgueiro Marques

    2014-12-01

    This paper aims to discuss, with a focus on Jacques Rancière, how an image policy can be noticed in the creative production of scenes of dissent from which the political agent emerges, appears and constitutes himself in a process of subjectivation. The political and critical power of the image is linked to acts of survival: operations and attempts that make it possible to resist the captures, silences and excesses committed by media discourses, by social institutions and by the State.

  5. John Lennon, autograph hound: The fan-musician community in Hamburg's early rock-and-roll scene, 1960–65

    Directory of Open Access Journals (Sweden)

    Julia Sneeringer

    2011-03-01

    This article explores the Beat music scene in Hamburg, West Germany, in the early 1960s. This scene became famous for its role in incubating the Beatles, who played over 250 nights there in 1960–62, but this article focuses on the prominent role of fans in this scene. Here fans were welcomed by bands and club owners as cocreators of a scene that offered respite from the prevailing conformism of West Germany during the Economic Miracle. This scene, born at the confluence of commercial and subcultural impulses, was also instrumental in transforming rock and roll from a working-class niche product to a cross-class lingua franca for youth. It was also a key element in West Germany's broader processes of democratization during the 1960s, opening up social space in which the meanings of authority, respectability, and democracy itself could be questioned and reworked.

  6. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and runs in real time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, archives more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.
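
    The rule-based parsing step is not spelled out in the abstract; the following is a minimal sketch, assuming a hypothetical list of CAD line entities with layer names and endpoint coordinates, of what a first-pass filter over layer names and simple geometric rules might look like. The recovery step and the real system's rules are not reproduced here.

        import math

        # Hypothetical CAD entity: (layer_name, (x1, y1), (x2, y2))
        entities = [
            ("WALL",   (0.0, 0.0), (5.0, 0.0)),
            ("WALL",   (0.0, 0.2), (5.0, 0.2)),
            ("DOOR",   (2.0, 0.0), (2.9, 0.0)),
            ("WINDOW", (1.0, 5.0), (2.0, 5.0)),
        ]

        def length(seg):
            (x1, y1), (x2, y2) = seg
            return math.hypot(x2 - x1, y2 - y1)

        def rule_based_filter(entities, min_wall_length=1.0):
            """Toy rule-based parser: route entities to component classes by
            layer name and simple geometric rules (e.g., a minimum wall length).
            Anything that matches no rule is left for a later recovery step."""
            components = {"walls": [], "doors": [], "windows": [], "ambiguous": []}
            for layer, p1, p2 in entities:
                seg = (p1, p2)
                if layer.upper() == "WALL" and length(seg) >= min_wall_length:
                    components["walls"].append(seg)
                elif layer.upper() == "DOOR":
                    components["doors"].append(seg)
                elif layer.upper() == "WINDOW":
                    components["windows"].append(seg)
                else:
                    components["ambiguous"].append(seg)
            return components

        print(rule_based_filter(entities))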

  7. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

    Full Text Available Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs perform a highly accurate, but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information by images with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain a full scene coverage and therefore, typically, multiple observations are collected from different locations. This can be achieved by either placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated by using information extracted from the captured data and then, a limited field of view may lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple and coupled sensor devices offers the advantages of an extended field of view which results in a stabilized pose estimation, an improved registration of the recorded point clouds and an improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potentials of such multi-view range imaging systems is presented which consists of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring in low altitudes and it is suitable for getting dynamic observations which might arise from moving cars or from moving pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of captured point cloud data is presented which is essential for a high quality of all subsequent tasks. The approach involves using

  8. Memory-guided attention during active viewing of edited dynamic scenes.

    Science.gov (United States)

    Valuch, Christian; König, Peter; Ansorge, Ulrich

    2017-01-01

    Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.

  9. Planarity constrained multi-view depth map reconstruction for urban scenes

    Science.gov (United States)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.
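
    As a rough illustration of the plane-hypothesis idea described above, the sketch below samples random plane candidates for one image segment and keeps the one with the lowest aggregated matching cost. The cost function, segment pixels and sampling ranges are invented placeholders; the actual PMVD pipeline uses an extended PatchMatch, adaptive-manifold filtering and belief propagation, none of which are reproduced here.

        import numpy as np

        rng = np.random.default_rng(0)

        def plane_depth(plane, u, v):
            """Depth induced at pixel (u, v) by the plane z = a*u + b*v + c."""
            a, b, c = plane
            return a * u + b * v + c

        def matching_cost(depth):
            """Placeholder photoconsistency cost; a real system would warp the
            neighbouring views with this depth and compare image patches."""
            return abs(depth - 10.0)  # pretend the true depth is around 10

        def best_plane_for_segment(pixels, n_candidates=50):
            """PatchMatch-style sampling: draw random plane hypotheses and keep
            the one with the lowest aggregated cost over the segment's pixels."""
            best, best_cost = None, np.inf
            for _ in range(n_candidates):
                plane = rng.uniform(-0.1, 0.1, size=2).tolist() + [rng.uniform(5, 15)]
                cost = sum(matching_cost(plane_depth(plane, u, v)) for u, v in pixels)
                if cost < best_cost:
                    best, best_cost = plane, cost
            return best, best_cost

        segment_pixels = [(u, v) for u in range(10, 14) for v in range(20, 24)]
        plane, cost = best_plane_for_segment(segment_pixels)
        print("best plane", np.round(plane, 3), "aggregated cost", round(cost, 3))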

  10. Subject categories and scope descriptions

    International Nuclear Information System (INIS)

    2002-01-01

    This document is one in a series of publications known as the ETDE/INIS Joint Reference Series. It defines the subject categories and provides the scope descriptions to be used for categorization of the nuclear literature for the preparation of INIS and ETDE input by national and regional centres. Together with the other volumes of the INIS Reference Series, it defines the rules, standards and practices and provides the authorities to be used in the International Nuclear Information System and ETDE. A complete list of the volumes published in the INIS Reference Series may be found on the inside front cover of this publication. This INIS/ETDE Reference Series document is intended to serve two purposes: to define the subject scope of the International Nuclear Information System (INIS) and the Energy Technology Data Exchange (ETDE), and to define the subject classification scheme of INIS and ETDE. It is thus the guide for the inputting centres in determining which items of literature should be reported, and in determining where the full bibliographic entry and abstract of each item should be included in the INIS or ETDE database. Each category is identified by a category code consisting of three alphanumeric characters. A scope description is given for each subject category. The scope of INIS is the sum of the scopes of all the categories. For most categories, cross references are provided to other categories where appropriate. Cross references should be of assistance in finding the appropriate category; in fact, by indicating topics that are excluded from the category in question, the cross references help to clarify and define the scope of the category to which they are appended. A Subject Index is included as an aid to subject classifiers, but it is only an aid and not a means for subject classification. It facilitates the use of this document, but is no substitute for the description of the scope of the subject categories.

  11. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    Science.gov (United States)

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  12. Color categories only affect post-perceptual processes when same- and different-category colors are equally discriminable.

    Science.gov (United States)

    He, Xun; Witzel, Christoph; Forder, Lewis; Clifford, Alexandra; Franklin, Anna

    2014-04-01

    Prior claims that color categories affect color perception are confounded by inequalities in the color space used to equate same- and different-category colors. Here, we equate same- and different-category colors in the number of just-noticeable differences, and measure event-related potentials (ERPs) to these colors on a visual oddball task to establish if color categories affect perceptual or post-perceptual stages of processing. Category effects were found from 200 ms after color presentation, only in ERP components that reflect post-perceptual processes (e.g., N2, P3). The findings suggest that color categories affect post-perceptual processing, but do not affect the perceptual representation of color.

  13. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development, in turn, has prompted efforts to enhance mechanisms for registering virtual objects in real world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely into a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
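
    The pose-estimation step described above can be pictured with a standard perspective-n-point solver. The sketch below uses OpenCV's solvePnP as a stand-in for the paper's tracking method; the layout corner coordinates, their detected image positions and the camera intrinsics are all invented for the example, and the opencv-python package is assumed to be available.

        import numpy as np
        import cv2

        # Hypothetical layout corner coordinates in the room frame (metres) ...
        layout_points_3d = np.array([
            [0.0, 0.0, 0.0], [4.0, 0.0, 0.0], [4.0, 3.0, 0.0],
            [0.0, 3.0, 0.0], [0.0, 0.0, 2.5], [4.0, 0.0, 2.5],
        ], dtype=np.float64)

        # ... and their detected positions in the video frame (pixels).
        layout_points_2d = np.array([
            [102.0, 480.0], [538.0, 470.0], [610.0, 250.0],
            [60.0, 255.0], [110.0, 300.0], [530.0, 295.0],
        ], dtype=np.float64)

        # Assumed pinhole intrinsics with no lens distortion.
        K = np.array([[800.0, 0.0, 320.0],
                      [0.0, 800.0, 240.0],
                      [0.0, 0.0, 1.0]])
        dist = np.zeros(5)

        # Estimate the camera pose from the 2D-3D correspondences.
        ok, rvec, tvec = cv2.solvePnP(layout_points_3d, layout_points_2d, K, dist)

        # Place a virtual object at a known room coordinate and project it
        # into the frame using the estimated pose.
        virtual_anchor = np.array([[2.0, 1.5, 0.0]])
        pix, _ = cv2.projectPoints(virtual_anchor, rvec, tvec, K, dist)
        print("pose estimated:", ok, "virtual object drawn at pixel", pix.ravel())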

  14. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and for understanding the human-autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determining the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
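
    As a toy illustration of the two-stage idea (first label objects, then infer the scene from the objects present), the sketch below uses nearest-prototype matching over invented feature vectors and hand-written scene definitions; the actual system relies on hierarchical expectation maximization and dynamic logic rather than anything this simple.

        import numpy as np

        # Stage 1: invented object prototypes in a toy 3-D feature space.
        object_prototypes = {
            "car":        np.array([1.0, 0.2, 0.1]),
            "pedestrian": np.array([0.2, 1.0, 0.3]),
            "building":   np.array([0.1, 0.3, 1.0]),
        }

        # Stage 2: invented scene definitions in terms of the objects present.
        scene_definitions = {
            "parking_lot": {"car", "building"},
            "crosswalk":   {"car", "pedestrian"},
            "plaza":       {"pedestrian", "building"},
        }

        def recognize_object(feature):
            """Assign a feature vector to the nearest object prototype."""
            return min(object_prototypes,
                       key=lambda name: np.linalg.norm(feature - object_prototypes[name]))

        def recognize_scene(features):
            """Hierarchical step: label objects first, then pick the scene whose
            definition overlaps most with the detected object set."""
            detected = {recognize_object(f) for f in features}
            scene = max(scene_definitions,
                        key=lambda s: len(scene_definitions[s] & detected))
            return scene, detected

        observations = [np.array([0.9, 0.3, 0.2]), np.array([0.3, 0.9, 0.2])]
        print(recognize_scene(observations))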

  15. Act, scene, agency: The drama of medical imaging

    International Nuclear Information System (INIS)

    Murphy, Frederick

    2009-01-01

    This paper investigates the use of a novel research paradigm in order to describe practice and behaviour within the context of magnetic resonance imaging (MRI) departments. Using a thematic analysis of patient and radiographer transcripts, a social model of the interactions that can occur is constructed from a theatrical perspective. A review of the social scientific literature was undertaken to identify the main concepts associated with this paradigm. Results: Radiographers and patients fitted into the roles and categories, as described by the original philosophers. Behaviour and ritual were seen to be markedly different in the presence of the patient as opposed to being in the control room. The deliberate 'acting out' of roles was also revealed in order to maintain self-identity and professional image. Patients provided an insightful account of their experiences and demonstrated some sophisticated coping strategies during the scanning procedure. Conclusion: The use of this alternative qualitative method revealed some very interesting, complex rituals and behaviour patterns amongst the sample of radiographers and their patients

  16. Context modulates attention to social scenes in toddlers with autism

    Science.gov (United States)

    Chawarska, Katarzyna; Macari, Suzanne; Shic, Frederick

    2013-01-01

    Background In typical development, the unfolding of social and communicative skills hinges upon the ability to allocate and sustain attention towards people, a skill present moments after birth. Deficits in social attention have been well documented in autism, though the underlying mechanisms are poorly understood. Methods In order to parse the factors that are responsible for limited social attention in toddlers with autism, we manipulated the context in which a person appeared in their visual field with regard to the presence of salient social (child-directed speech and eye contact) and nonsocial (distractor toys) cues for attention. Participants included 13- to 25-month-old toddlers with autism (AUT; n=54), developmental delay (DD; n=22), and typical development (TD; n=48). Their visual responses were recorded with an eye-tracker. Results In conditions devoid of eye contact and speech, the distribution of attention between key features of the social scene in toddlers with autism was comparable to that in DD and TD controls. However, when explicit dyadic cues were introduced, toddlers with autism showed decreased attention to the entire scene and, when they looked at the scene, they spent less time looking at the speaker’s face and monitoring her lip movements than the control groups. In toddlers with autism, decreased time spent exploring the entire scene was associated with increased symptom severity and lower nonverbal functioning; atypical language profiles were associated with decreased monitoring of the speaker’s face and her mouth. Conclusions While in certain contexts toddlers with autism attend to people and objects in a typical manner, they show decreased attentional response to dyadic cues for attention. Given that mechanisms supporting responsivity to dyadic cues are present shortly after birth and are highly consequential for development of social cognition and communication, these findings have important implications for the understanding of the

  17. Far-infrared pedestrian detection for advanced driver assistance systems using scene context

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong; Wu, Qingyao

    2016-04-01

    Pedestrian detection is one of the most critical but challenging components in advanced driver assistance systems. Far-infrared (FIR) images are well-suited for pedestrian detection even in a dark environment. However, most current detection approaches focus only on pedestrian patterns themselves, so robust and real-time detection cannot be well achieved. We propose a fast FIR pedestrian detection approach, called MAP-HOGLBP-T, to explicitly exploit the scene context for the driver assistance system. In MAP-HOGLBP-T, three algorithms are developed to exploit the scene contextual information from roads, vehicles, and background objects of high homogeneity, and we employ a Bayesian approach to build a classifier that respects the scene contextual information. We also develop a multiframe approval scheme to enhance the detection performance based on the spatiotemporal continuity of pedestrians. Our empirical study on real-world datasets has demonstrated the efficiency and effectiveness of the proposed method. The performance is shown to be better than that of state-of-the-art low-level feature-based approaches.
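
    The multiframe approval scheme is not specified in detail in the abstract; the sketch below shows one plausible reading, in which a detection is approved only if it reappears (by IoU overlap) in at least k of the last n frames. The window size, k and IoU threshold are placeholders, not values from MAP-HOGLBP-T.

        from collections import deque

        def iou(a, b):
            """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
            ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
            ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
            inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
            area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
            return inter / float(area(a) + area(b) - inter)

        class MultiFrameApproval:
            """Approve a detection only if it is matched in k of the last n frames."""
            def __init__(self, n=5, k=3, iou_thresh=0.5):
                self.history = deque(maxlen=n)
                self.k, self.iou_thresh = k, iou_thresh

            def update(self, detections):
                self.history.append(detections)
                approved = []
                for box in detections:
                    hits = sum(any(iou(box, prev) >= self.iou_thresh for prev in frame)
                               for frame in self.history)
                    if hits >= self.k:
                        approved.append(box)
                return approved

        tracker = MultiFrameApproval()
        for frame_dets in [[(10, 10, 50, 90)], [(12, 11, 52, 92)], [(11, 9, 51, 91)]]:
            print(tracker.update(frame_dets))  # approved only once seen in 3 frames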

  18. Prevalence of Metabolic Syndrome and Its Components among Japanese Workers by Clustered Business Category.

    Science.gov (United States)

    Hidaka, Tomoo; Hayakawa, Takehito; Kakamu, Takeyasu; Kumagai, Tomohiro; Hiruta, Yuhei; Hata, Junko; Tsuji, Masayoshi; Fukushima, Tetsuhito

    2016-01-01

    The present study was a cross-sectional study conducted to reveal the prevalence of metabolic syndrome and its components and describe the features of such prevalence among Japanese workers by clustered business category using big data. The data of approximately 120,000 workers were obtained from a national representative insurance organization, and the study analyzed the health checkup and questionnaire results according to the field of business of each subject. Abnormalities found during the checkups such as excessive waist circumference, hypertension or glucose intolerance, and metabolic syndrome, were recorded. All subjects were classified by business field into 18 categories based on The North American Industry Classification System. Based on the criteria of the Japanese Committee for the Diagnostic Criteria of Metabolic Syndrome, the standardized prevalence ratio (SPR) of metabolic syndrome and its components by business category was calculated, and the 95% confidence interval of the SPR was computed. Hierarchical cluster analysis was then performed based on the SPR of metabolic syndrome components, and the 18 business categories were classified into three clusters for both males and females. The following business categories were at significantly high risk of metabolic syndrome: among males, Construction, Transportation, Professional Services, and Cooperative Association; and among females, Health Care and Cooperative Association. The results of the cluster analysis indicated one cluster for each gender with a higher prevalence of metabolic syndrome components; among males, a cluster consisting of Manufacturing, Transportation, Finance, and Cooperative Association, and among females, a cluster consisting of Mining, Transportation, Finance, Accommodation, and Cooperative Association. These findings reveal that, when providing health guidance and support regarding metabolic syndrome, consideration must be given to its components and the variety of its
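
    For readers unfamiliar with the statistics mentioned above, the sketch below computes a standardized prevalence ratio (observed/expected cases) with an approximate log-normal 95% confidence interval, then clusters business categories on invented component SPRs using Ward hierarchical clustering. The numbers and the CI formula are illustrative assumptions, not the study's data or its exact method; numpy and scipy are assumed to be available.

        import numpy as np
        from scipy.cluster.hierarchy import linkage, fcluster

        def spr_with_ci(observed, expected, z=1.96):
            """Standardized prevalence ratio with an approximate 95% CI
            (log-normal approximation; the study may use a different method)."""
            spr = observed / expected
            se_log = 1.0 / np.sqrt(observed)
            return spr, spr * np.exp(-z * se_log), spr * np.exp(z * se_log)

        print(spr_with_ci(observed=130, expected=100))

        # Invented SPRs of three metabolic-syndrome components for five categories.
        categories = ["Construction", "Transportation", "Finance", "Health Care", "Mining"]
        sprs = np.array([
            [1.20, 1.10, 1.15],
            [1.25, 1.20, 1.18],
            [1.05, 1.22, 1.10],
            [0.90, 0.95, 1.00],
            [1.00, 1.18, 1.05],
        ])

        # Ward hierarchical clustering of the categories into three clusters.
        labels = fcluster(linkage(sprs, method="ward"), t=3, criterion="maxclust")
        print(dict(zip(categories, labels)))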

  19. Prevalence of Metabolic Syndrome and Its Components among Japanese Workers by Clustered Business Category.

    Directory of Open Access Journals (Sweden)

    Tomoo Hidaka

    The present study was a cross-sectional study conducted to reveal the prevalence of metabolic syndrome and its components and describe the features of such prevalence among Japanese workers by clustered business category using big data. The data of approximately 120,000 workers were obtained from a national representative insurance organization, and the study analyzed the health checkup and questionnaire results according to the field of business of each subject. Abnormalities found during the checkups such as excessive waist circumference, hypertension or glucose intolerance, and metabolic syndrome, were recorded. All subjects were classified by business field into 18 categories based on The North American Industry Classification System. Based on the criteria of the Japanese Committee for the Diagnostic Criteria of Metabolic Syndrome, the standardized prevalence ratio (SPR) of metabolic syndrome and its components by business category was calculated, and the 95% confidence interval of the SPR was computed. Hierarchical cluster analysis was then performed based on the SPR of metabolic syndrome components, and the 18 business categories were classified into three clusters for both males and females. The following business categories were at significantly high risk of metabolic syndrome: among males, Construction, Transportation, Professional Services, and Cooperative Association; and among females, Health Care and Cooperative Association. The results of the cluster analysis indicated one cluster for each gender with a higher prevalence of metabolic syndrome components; among males, a cluster consisting of Manufacturing, Transportation, Finance, and Cooperative Association, and among females, a cluster consisting of Mining, Transportation, Finance, Accommodation, and Cooperative Association. These findings reveal that, when providing health guidance and support regarding metabolic syndrome, consideration must be given to its components and the

  20. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    Science.gov (United States)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.

  1. Eye Movement Control in Scene Viewing and Reading: Evidence from the Stimulus Onset Delay Paradigm

    Science.gov (United States)

    Luke, Steven G.; Nuthmann, Antje; Henderson, John M.

    2013-01-01

    The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…

  2. The Role of Forensic Botany in Solving a Case: Scientific Evidence on the Falsification of a Crime Scene.

    Science.gov (United States)

    Aquila, Isabella; Gratteri, Santo; Sacco, Matteo A; Ricci, Pietrantonio

    2018-05-01

    Forensic botany can provide useful information for pathologists, particularly on crime scene investigation. We report the case of a man who arrived at the hospital and died shortly afterward. The body showed widespread electrical lesions. The statements of his brother and wife about the incident aroused a large amount of suspicion in the investigators. A crime scene investigation was carried out, along with a botanical morphological survey on small vegetations found on the corpse. An autopsy was also performed. Botanical analysis showed some samples of Xanthium spinosum, thus leading to the discovery of the falsification of the crime scene, although the location of the true crime scene initially remained a mystery. The botanical analysis, along with circumstantial data and autopsy findings, led to the discovery of the real crime scene and became crucial as part of the legal evidence regarding the falsity of the statements made to investigators. © 2017 American Academy of Forensic Sciences.

  3. Triangulated categories (AM-148)

    CERN Document Server

    Neeman, Amnon

    2014-01-01

    The first two chapters of this book offer a modern, self-contained exposition of the elementary theory of triangulated categories and their quotients. The simple, elegant presentation of these known results makes these chapters eminently suitable as a text for graduate students. The remainder of the book is devoted to new research, providing, among other material, some remarkable improvements on Brown's classical representability theorem. In addition, the author introduces a class of triangulated categories, the "well generated triangulated categories", and studies their properties. This

  4. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    Science.gov (United States)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modern technology, digital information management and virtual reality simulation have become research hotspots. A virtual campus 3D model can not only express real-world objects in a natural, realistic and vivid way, but can also extend the campus beyond its real dimensions of time and space, combining the school environment with information. This paper mainly uses 3ds Max to create three-dimensional models of campus buildings, special land use and other campus features. Dynamic interactive functionality is then realized by programming the object models from 3ds Max in VRML. The research focuses on virtual campus scene modeling technology and VRML scene design, together with a variety of real-time processing and optimization strategies used in the scene design process, which preserve texture map image quality while improving the speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the resulting virtual campus scene is summarized.

  5. The ART of CSI: An augmented reality tool (ART) to annotate crime scenes in forensic investigation

    NARCIS (Netherlands)

    Streefkerk, J.W.; Houben, M.; Amerongen, P. van; Haar, F. ter; Dijk, J.

    2013-01-01

    Forensic professionals have to collect evidence at crime scenes quickly and without contamination. A handheld Augmented Reality (AR) annotation tool allows these users to virtually tag evidence traces at crime scenes and to review, share and export evidence lists. In a user walkthrough with this

  6. Consider the category: The effect of spacing depends on individual learning histories.

    Science.gov (United States)

    Slone, Lauren K; Sandhofer, Catherine M

    2017-07-01

    The spacing effect refers to increased retention following learning instances that are spaced out in time compared with massed together in time. By one account, the advantages of spaced learning should be independent of task particulars and previous learning experiences given that spacing effects have been demonstrated in a variety of tasks across the lifespan. However, by another account, spaced learning should be affected by previous learning because past learning affects the memory and attention processes that form the crux of the spacing effect. The current study investigated whether individuals' learning histories affect the role of spacing in category learning. We examined the effect of spacing on 24 2- to 3.5-year-old children's learning of categories organized by properties to which children's previous learning experiences have biased them to attend (i.e., shape) and properties to which children are less biased to attend (i.e., texture and color). Spaced presentations led to significantly better learning of shape categories, but not of texture or color categories, compared with massed presentations. In addition, generalized estimating equations analyses revealed positive relations between the size of children's "shape-side" productive vocabularies and their shape category learning and between the size of children's "against-the-system" productive vocabularies and their texture category learning. These results suggest that children's attention to and memory for novel object categories are strongly related to their individual word-learning histories. Moreover, children's learned attentional biases affected the types of categories for which spacing facilitated learning. These findings highlight the importance of considering how learners' previous experiences may influence future learning. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Parietal cortex integrates contextual and saliency signals during the encoding of natural scenes in working memory.

    Science.gov (United States)

    Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano

    2015-12-01

    The brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out-of-context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity both in dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention for memory encoding, highlighting a dissociation between dorsal and ventral parietal regions. © 2015 Wiley Periodicals, Inc.

  8. The composition of category conjunctions.

    Science.gov (United States)

    Hutter, Russell R C; Crisp, Richard J

    2005-05-01

    In three experiments, the authors investigated the impression formation process resulting from the perception of familiar or unfamiliar social category combinations. In Experiment 1, participants were asked to generate attributes associated with either a familiar or unfamiliar social category conjunction. Compared to familiar combinations, the authors found that when the conjunction was unfamiliar, participants formed their impression less from the individual constituent categories and relatively more from novel emergent attributes. In Experiment 2, the authors replicated this effect using alternative experimental materials. In Experiment 3, the effect generalized to additional (orthogonally combined) gender and occupation categories. The implications of these findings for understanding the processes involved in the conjunction of social categories, and the formation of new stereotypes, are discussed.

  9. Interactive Procedural Modelling of Coherent Waterfall Scenes

    OpenAIRE

    Emilien , Arnaud; Poulin , Pierre; Cani , Marie-Paule; Vimont , Ulysse

    2015-01-01

    Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of...

  10. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  11. Exploiting current-generation graphics hardware for synthetic-scene generation

    Science.gov (United States)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores on current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. To take advantage of this potential requires algorithm implementation that is structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. Included in this paper will be various language tradeoffs between conventional shader programming, Compute Unified Device Architecture (CUDA) and Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.

  12. Acute stress influences the discrimination of complex scenes and complex faces in young healthy men.

    Science.gov (United States)

    Paul, M; Lech, R K; Scheil, J; Dierolf, A M; Suchan, B; Wolf, O T

    2016-04-01

    The stress-induced release of glucocorticoids has been demonstrated to influence hippocampal functions via the modulation of specific receptors. At the behavioral level stress is known to influence hippocampus dependent long-term memory. In recent years, studies have consistently associated the hippocampus with the non-mnemonic perception of scenes, while adjacent regions in the medial temporal lobe were associated with the perception of objects, and faces. So far it is not known whether and how stress influences non-mnemonic perceptual processes. In a behavioral study, fifty male participants were subjected either to the stressful socially evaluated cold-pressor test or to a non-stressful control procedure, before they completed a visual discrimination task, comprising scenes and faces. The complexity of the face and scene stimuli was manipulated in easy and difficult conditions. A significant three way interaction between stress, stimulus type and complexity was found. Stressed participants tended to commit more errors in the complex scenes condition. For complex faces a descriptive tendency in the opposite direction (fewer errors under stress) was observed. As a result the difference between the number of errors for scenes and errors for faces was significantly larger in the stress group. These results indicate that, beyond the effects of stress on long-term memory, stress influences the discrimination of spatial information, especially when the perception is characterized by a high complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. The Effects of Linguistic Labels Related to Abstract Scenes on Memory

    Directory of Open Access Journals (Sweden)

    Kentaro Inomata

    2011-10-01

    Boundary extension is false memory for scene content beyond the actual boundary of a picture. Gagnier (2011) suggested that a linguistic label has no effect on the magnitude of boundary extension. Although she controlled the timing of the presentation and the information of the linguistic label, the information in the stimulus was not changed. In the present study, the depiction of the main object was controlled in order to change the contextual information of a scene. In the experiment, 68 participants were shown 12 pictures. The stimuli consisted of pictures that either depicted or did not depict the main object, and half of them were presented with a linguistic description. Participants rated the object-less pictures as closer than the original pictures when the former were presented with linguistic labels. However, when they were presented without linguistic labels, boundary extension did not occur. There was no effect of labels on the pictures that depicted the main objects. On the basis of these results, the linguistic label enhances the representation of an abstract scene such as a homogeneous field or a wall. This finding suggests that boundary extension may be affected not only by visual information but also by other sensory information mediated by linguistic representation.

  14. Cerebral responses to across- and within-category change of vowel durations measured by near-infrared spectroscopy

    Science.gov (United States)

    Minagawa-Kawai, Yasuyo; Mori, Koichi; Furuya, Izumi; Hayashi, Ryoko; Sato, Yutaka

    2002-05-01

    The present study examined cerebral responses to phoneme categories, using near-infrared spectroscopy (NIRS) to measure the concentration and oxygenation of hemoglobin accompanying local brain activity. The targeted phonemes were Japanese long and short vowel categories, realized only by durational differences. Results of the NIRS measurements and a behavioral test revealed that NIRS could capture phoneme-specific information. The left side of the auditory area showed large hemodynamic changes only for contrasting stimuli between which a phonemic boundary was estimated (across-category condition), but not for stimuli differing by an equal duration but belonging to the same phoneme category (within-category condition). Left dominance in phoneme processing was also confirmed for the across-category stimuli. These findings indicate that the Japanese vowel contrast based only on duration is dealt with in the same language-dominant hemisphere as the other phonemic categories studied with MEG and PET, and that the cortical activities related to its processing can be detected with NIRS. [Work supported by the Japan Society for the Promotion of Science (No. 8484) and a grant from the Ministry of Health and Welfare of Japan.]

  15. How categories come to matter

    DEFF Research Database (Denmark)

    Leahu, Lucian; Cohn, Marisa; March, Wendy

    2013-01-01

    In a study of users' interactions with Siri, the iPhone personal assistant application, we noticed the emergence of overlaps and blurrings between explanatory categories such as "human" and "machine". We found that users work to purify these categories, thus resolving the tensions related to the ... initial data analysis, due to our own forms of latent purification, and outline the particular analytic techniques that helped lead to this discovery. We thus provide an illustrative case of how categories come to matter in HCI research and design.

  16. 14 CFR 23.3 - Airplane categories.

    Science.gov (United States)

    2010-01-01

    ... 14 Aeronautics and Space 1 2010-01-01 2010-01-01 false Airplane categories. 23.3 Section 23.3... STANDARDS: NORMAL, UTILITY, ACROBATIC, AND COMMUTER CATEGORY AIRPLANES General § 23.3 Airplane categories. (a) The normal category is limited to airplanes that have a seating configuration, excluding pilot...

  17. Setting the scene

    International Nuclear Information System (INIS)

    Curran, S.

    1977-01-01

    The reasons for the special meeting on the breeder reactor are outlined with some reference to the special Scottish interest in the topic. Approximately 30% of the electrical energy generated in Scotland is nuclear and the special developments at Dounreay make policy decisions on the future of the commercial breeder reactor urgent. The participants review the major questions arising in arriving at such decisions. In effect, an attempt is made to respond to the wish of the Secretary of State for Energy to have informed debate. To set the scene, the importance of energy availability for the strength of the national economy is stressed and the reasons for an increasing energy demand are put forward. Examination of alternative sources of energy shows that none is definitely capable of filling the foreseen energy gap. This implies an integrated thermal/breeder reactor programme as the way to close the anticipated gap. The problems of disposal of radioactive waste and the safeguards in the handling of plutonium are outlined. Longer-term benefits, including the consumption of plutonium and naturally occurring radioactive materials, are examined. (author)

  18. Atypical category processing and hemispheric asymmetries in high-functioning children with autism: revealed through high-density EEG mapping.

    Science.gov (United States)

    Fiebelkorn, Ian C; Foxe, John J; McCourt, Mark E; Dumas, Kristina N; Molholm, Sophie

    2013-05-01

    Behavioral evidence for an impaired ability to group objects based on similar physical or semantic properties in autism spectrum disorders (ASD) has been mixed. Here, we recorded brain activity from high-functioning children with ASD as they completed a visual-target detection task. We then assessed the extent to which object-based selective attention automatically generalized from targets to non-target exemplars from the same well-known object class (e.g., dogs). Our results provide clear electrophysiological evidence that children with ASD (N=17, aged 8-13 years) process the similarity between targets (e.g., a specific dog) and same-category non-targets (SCNT) (e.g., another dog) to a lesser extent than do their typically developing (TD) peers (N=21). A closer examination of the data revealed striking hemispheric asymmetries that were specific to the ASD group. These findings align with mounting evidence in the autism literature of anatomic underconnectivity between the cerebral hemispheres. Years of research in individuals with TD have demonstrated that the left hemisphere (LH) is specialized toward processing local (or featural) stimulus properties and the right hemisphere (RH) toward processing global (or configural) stimulus properties. We therefore propose a model where a lack of communication between the hemispheres in ASD, combined with typical hemispheric specialization, is a root cause for impaired categorization and the oft-observed bias to process local over global stimulus properties. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. Bundles of C*-categories and duality

    OpenAIRE

    Vasselli, Ezio

    2005-01-01

    We introduce the notions of multiplier C*-category and continuous bundle of C*-categories, as the categorical analogues of the corresponding C*-algebraic notions. Every symmetric tensor C*-category with conjugates is a continuous bundle of C*-categories, with base space the spectrum of the C*-algebra associated with the identity object. We classify tensor C*-categories with fibre the dual of a compact Lie group in terms of suitable principal bundles. This also provides a classification for ce...

  20. A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

    Directory of Open Access Journals (Sweden)

    Xu-Feng Xing

    2018-01-01

    LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data is incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules aiming at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. Then, a knowledge base is proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Next, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.
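
    The reasoning step can be pictured as forward chaining over facts extracted from the point cloud. The toy sketch below encodes two invented rules (a vertical planar segment adjacent to the ground is a wall; a horizontal segment above a wall is a roof); the paper's actual knowledge base is an ontology with formal semantic rules, which this Python stand-in only gestures at.

        # Facts extracted from the point cloud: (subject, predicate, object).
        facts = {
            ("surface_1", "is_a", "PlanarSegment"),
            ("surface_1", "orientation", "vertical"),
            ("surface_1", "adjacent_to", "ground"),
            ("surface_2", "is_a", "PlanarSegment"),
            ("surface_2", "orientation", "horizontal"),
            ("surface_2", "above", "surface_1"),
        }

        def rule_wall(facts):
            """IF a planar segment is vertical and adjacent to the ground THEN it is a wall."""
            new = set()
            for s, p, o in facts:
                if p == "is_a" and o == "PlanarSegment":
                    if (s, "orientation", "vertical") in facts and (s, "adjacent_to", "ground") in facts:
                        new.add((s, "is_a", "Wall"))
            return new

        def rule_roof(facts):
            """IF a horizontal planar segment lies above a wall THEN it is a roof."""
            new = set()
            for s, p, o in facts:
                if p == "above" and (o, "is_a", "Wall") in facts:
                    if (s, "orientation", "horizontal") in facts:
                        new.add((s, "is_a", "Roof"))
            return new

        # Forward chaining: apply the rules until no new facts are inferred.
        rules = [rule_wall, rule_roof]
        while True:
            inferred = set().union(*(r(facts) for r in rules)) - facts
            if not inferred:
                break
            facts |= inferred

        print(sorted(f for f in facts if f[1] == "is_a"))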

  1. Forensic botany as a useful tool in the crime scene: Report of a case.

    Science.gov (United States)

    Margiotta, Gabriele; Bacaro, Giovanni; Carnevali, Eugenia; Severini, Simona; Bacci, Mauro; Gabbrielli, Mario

    2015-08-01

    The ubiquitous presence of plant species makes forensic botany useful in many criminal cases. In particular, bryophytes are useful for forensic investigations because many of them are clonal and widely distributed. Bryophyte shoots can easily become attached to shoes and clothes and can be found on footwear, providing links between a crime scene and individuals. We report a case of suicide of a young girl that happened in Siena, Tuscany, Italy. The cause of the traumatic injuries could be ascribed to suicide, to homicide, or to an accident. In the absence of eyewitnesses who could testify to the dynamics of the event, the crime scene investigation was fundamental to clarifying the incident. During the scene analysis, some fragments of Tortula muralis Hedw. and Bryum capillare Hedw. were found. The fragments were analyzed by a bryologist in order to compare them with the moss present on the stairs that the victim used immediately before her death. The analysis of these bryophytes found at the crime scene made it possible to reconstruct the incident. Even if this evidence is, of course, circumstantial, it can be useful in forensic cases, together with other evidence, to reconstruct the dynamics of events. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  2. Models as Relational Categories

    Science.gov (United States)

    Kokkonen, Tommi

    2017-11-01

    Model-based learning (MBL) has an established position within science education. It has been found to enhance conceptual understanding and to provide a way of engaging students in authentic scientific activity. Despite ample research, few studies have examined the cognitive processes involved in learning scientific concepts within MBL. On the other hand, recent research within cognitive science has examined the learning of so-called relational categories. Relational categories are categories whose membership is determined on the basis of a common relational structure. In this theoretical paper, I argue that viewing models as relational categories provides a well-motivated cognitive basis for MBL. I discuss the different roles of models and modeling within MBL (using ready-made models, constructive modeling, and generative modeling) and discern the related cognitive aspects brought forward by the reinterpretation of models as relational categories. I argue that relational knowledge is vital in learning novel models and in the transfer of learning. Moreover, relational knowledge underlies the coherent, hierarchical knowledge of experts. Lastly, I examine how the format of external representations may affect the learning of models and the relevant relations. The nature of the learning mechanisms underlying students' mental representations of models is an interesting open question to be examined. Furthermore, how expert-like knowledge develops, and how best to support it, is in need of more research. The discussion and conceptualization of models as relational categories allows students' mental representations of models to be discerned, in terms of evolving relational structures, in greater detail than previously done.

  3. The formation of music-scenes in Manchester and their relation to urban space and the image of the city

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2013-01-01

    The formation of music scenes in Manchester and their relation to urban space and the image of the city: the paper I would like to present derives from a study of the relation between the atmospheric qualities of a city and the formation of music scenes. I have studied Manchester, which is a known ... and the image of the city. This image has later been utilised in the branding of Manchester as a creative city. The case is interesting in relation to current ideals of planning 'creative cities' and local cultural scenes. The music scenes cannot be seen as participatory projects and have developed in more ...

  4. Cultural heritage and history in the European metal scene

    NARCIS (Netherlands)

    Klepper, de S.; Molpheta, S.; Pille, S.; Saouma, R.; During, R.; Muilwijk, M.

    2007-01-01

    This paper presents an inquiry into the use of history and cultural heritage in the metal scene. It is an attempt to show how history and cultural heritage can be spread among people in an unconventional way. The research method was built on an explorative study that included an

  5. Words Matter: Scene Text for Image Classification and Retrieval

    NARCIS (Netherlands)

    Karaoglu, S.; Tao, R.; Gevers, T.; Smeulders, A.W.M.

    Text in natural images typically adds meaning to an object or scene. In particular, text specifies which business places serve drinks (e.g., cafe, teahouse) or food (e.g., restaurant, pizzeria), and what kind of service is provided (e.g., massage, repair). The mere presence of text, its words, and

  6. Decoding visual object categories from temporal correlations of ECoG signals.

    Science.gov (United States)

    Majima, Kei; Matsuo, Takeshi; Kawasaki, Keisuke; Kawai, Kensuke; Saito, Nobuhito; Hasegawa, Isao; Kamitani, Yukiyasu

    2014-04-15

    How visual object categories are represented in the brain is one of the key questions in neuroscience. Studies on low-level visual features have shown that relative timings or phases of neural activity between multiple brain locations encode information. However, whether such temporal patterns of neural activity are used in the representation of visual objects is unknown. Here, we examined whether and how visual object categories could be predicted (or decoded) from temporal patterns of electrocorticographic (ECoG) signals from the temporal cortex in five patients with epilepsy. We used temporal correlations between electrodes as input features and compared the decoding performance with features defined by spectral power and phase from individual electrodes. Although decoding accuracy using power or phase alone was significantly better than chance, correlations alone, or combined with power, outperformed the other features. Decoding performance with correlations was degraded by shuffling the order of trials of the same category in each electrode, indicating that the relative time series between electrodes in each trial is critical. Analysis using a sliding time window revealed that decoding performance with correlations began to rise earlier than that with power. This earlier increase in performance was replicated by a model using phase differences to encode categories. These results suggest that activity patterns arising from interactions between multiple neuronal units carry additional information on visual object categories. Copyright © 2013 Elsevier Inc. All rights reserved.
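
    A minimal sketch of the kind of correlation-based decoding pipeline the abstract describes, assuming trials are stored as an array of shape (n_trials, n_electrodes, n_samples); the synthetic data and the choice of classifier are illustrative, not the authors' analysis.

    ```python
    # Decode object category from per-trial electrode-pair correlations (illustrative sketch).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n_trials, n_elec, n_samples = 80, 16, 200
    ecog = rng.standard_normal((n_trials, n_elec, n_samples))   # placeholder ECoG data
    labels = rng.integers(0, 2, n_trials)                       # placeholder category labels

    def correlation_features(trials):
        """Flatten the upper triangle of each trial's electrode-by-electrode correlation matrix."""
        iu = np.triu_indices(trials.shape[1], k=1)
        return np.stack([np.corrcoef(trial)[iu] for trial in trials])

    X = correlation_features(ecog)                # (n_trials, n_elec*(n_elec-1)/2)
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, X, labels, cv=5)
    print("cross-validated accuracy: %.2f" % scores.mean())
    ```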

  7. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    Full Text Available High resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military and disaster-relief applications, among others. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method using a limited amount of image data for high resolution remote sensing imagery scene classification. Different from traditional convolutional neural networks, the proposed JMCNN is an end-to-end training model with joint enhanced high-level feature representation, comprising a multi-channel feature extractor, joint multi-scale feature fusion and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve an enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted on two small public datasets, UCM and SIRI. Compared to state-of-the-art methods, the JMCNN achieved improved performance and strong robustness, with average accuracies of 89.3% and 88.3% on the two datasets.
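
    The sketch below is a loose PyTorch approximation of the multi-channel, multi-scale idea (parallel convolutional branches with different kernel sizes whose pooled features are fused before a Softmax classifier); the layer sizes and fusion scheme are assumptions, not the published JMCNN architecture.

    ```python
    # Toy multi-scale CNN with feature fusion for scene classification (illustrative, not the published JMCNN).
    import torch
    import torch.nn as nn

    class MultiScaleSceneNet(nn.Module):
        def __init__(self, n_classes=21):
            super().__init__()
            def branch(kernel):
                return nn.Sequential(
                    nn.Conv2d(3, 32, kernel, padding=kernel // 2), nn.ReLU(),
                    nn.Conv2d(32, 64, kernel, padding=kernel // 2), nn.ReLU(),
                    nn.AdaptiveAvgPool2d(1),          # global pooling -> (B, 64, 1, 1)
                )
            self.branch3 = branch(3)                  # fine-scale channel
            self.branch5 = branch(5)                  # coarser-scale channel
            self.classifier = nn.Linear(64 * 2, n_classes)

        def forward(self, x):
            f3 = self.branch3(x).flatten(1)
            f5 = self.branch5(x).flatten(1)
            fused = torch.cat([f3, f5], dim=1)        # joint multi-scale feature fusion
            return self.classifier(fused)             # logits; use CrossEntropyLoss (softmax) for training

    model = MultiScaleSceneNet()
    logits = model(torch.randn(4, 3, 256, 256))
    print(logits.shape)   # torch.Size([4, 21])
    ```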

  8. Characterization, propagation, and simulation of infrared scenes; Proceedings of the Meeting, Orlando, FL, Apr. 16-20, 1990

    Science.gov (United States)

    Watkins, Wendell R.; Zegel, Ferdinand H.; Triplett, Milton J.

    1990-09-01

    Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, an automated imaging IR seeker performance evaluation system, generation of signature databases with fast codes, and background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, a discussion of IR scene generators, a multiwavelength Scophony IR scene projector, an LBIR target generator and calibrator for preflight seeker tests, a dual-mode hardware-in-the-loop simulation facility, and the development of an IR blackbody source based on a gravity-type heat pipe together with a study of its characteristics.

  9. Relative risk of probabilistic category learning deficits in patients with schizophrenia and their siblings

    Science.gov (United States)

    Weickert, Thomas W.; Goldberg, Terry E.; Egan, Michael F.; Apud, Jose A.; Meeter, Martijn; Myers, Catherine E.; Gluck, Mark A; Weinberger, Daniel R.

    2010-01-01

    Background While patients with schizophrenia display an overall probabilistic category learning performance deficit, the extent to which this deficit occurs in unaffected siblings of patients with schizophrenia is unknown. There are also discrepant findings regarding probabilistic category learning acquisition rate and performance in patients with schizophrenia. Methods A probabilistic category learning test was administered to 108 patients with schizophrenia, 82 unaffected siblings, and 121 healthy participants. Results Patients with schizophrenia displayed significant differences from their unaffected siblings and healthy participants with respect to probabilistic category learning acquisition rates. Although siblings on the whole failed to differ from healthy participants on strategy and quantitative indices of overall performance and learning acquisition, application of a revised learning criterion enabling classification into good and poor learners based on individual learning curves revealed significant differences between percentages of sibling and healthy poor learners: healthy (13.2%), siblings (34.1%), patients (48.1%), yielding a moderate relative risk. Conclusions These results clarify previous discrepant findings pertaining to probabilistic category learning acquisition rate in schizophrenia and provide the first evidence for the relative risk of probabilistic category learning abnormalities in unaffected siblings of patients with schizophrenia, supporting genetic underpinnings of probabilistic category learning deficits in schizophrenia. These findings also raise questions regarding the contribution of antipsychotic medication to the probabilistic category learning deficit in schizophrenia. The distinction between good and poor learning may be used to inform genetic studies designed to detect schizophrenia risk alleles. PMID:20172502

  10. Categories and logical syntax

    NARCIS (Netherlands)

    Klev, Ansten Morch

    2014-01-01

    The notions of category and type are here studied through the lens of logical syntax: Aristotle's as well as Kant's categories through the traditional form of proposition `S is P', and modern doctrines of type through the Fregean form of proposition `F(a)', function applied to argument. Topics

  11. Data categories for marine planning

    Science.gov (United States)

    Lightsom, Frances L.; Cicchetti, Giancarlo; Wahle, Charles M.

    2015-01-01

    The U.S. National Ocean Policy calls for a science- and ecosystem-based approach to comprehensive planning and management of human activities and their impacts on America’s oceans. The Ocean Community in Data.gov is an outcome of 2010–2011 work by an interagency working group charged with designing a national information management system to support ocean planning. Within the working group, a smaller team developed a list of the data categories specifically relevant to marine planning. This set of categories is an important consensus statement of the breadth of information types required for ocean planning from a national, multidisciplinary perspective. Although the categories were described in a working document in 2011, they have not yet been fully implemented explicitly in online services or geospatial metadata, in part because authoritative definitions were not created formally. This document describes the purpose of the data categories, provides definitions, and identifies relations among the categories and between the categories and external standards. It is intended to be used by ocean data providers, managers, and users in order to provide a transparent and consistent framework for organizing and describing complex information about marine ecosystems and their connections to humans.

  12. Enhancing Visual Basic GUI Applications using VRML Scenes

    OpenAIRE

    Bala Dhandayuthapani Veerasamy

    2010-01-01

    Rapid Application Development (RAD) addresses the ever-expanding need for speedy development of computer application programs that are sophisticated, reliable, and full-featured. Visual Basic was the first RAD tool for the Windows operating system, and many people still say it is the best. To make Visual Basic 6 applications more attractive, this paper describes how to use VRML scenes within the Visual Basic environment.

  13. Influence of 3D effects on 1D aerosol retrievals in synthetic, partially clouded scenes

    International Nuclear Information System (INIS)

    Stap, F.A.; Hasekamp, O.P.; Emde, C.; Röckmann, T.

    2016-01-01

    An important challenge in aerosol remote sensing is to retrieve aerosol properties in the vicinity of clouds and in cloud contaminated scenes. Satellite based multi-wavelength, multi-angular, photo-polarimetric instruments are particularly suited for this task as they have the ability to separate scattering by aerosol and cloud particles. Simultaneous aerosol/cloud retrievals using 1D radiative transfer codes cannot account for 3D effects such as shadows, cloud induced enhancements and darkening of cloud edges. In this study we investigate what errors are introduced on the retrieved optical and micro-physical aerosol properties, when these 3D effects are neglected in retrievals where the partial cloud cover is modeled using the Independent Pixel Approximation. To this end a generic, synthetic data set of PARASOL like observations for 3D scenes with partial, liquid water cloud cover is created. It is found that in scenes with random cloud distributions (i.e. broken cloud fields) and either low cloud optical thickness or low cloud fraction, the inversion algorithm can fit the observations and retrieve optical and micro-physical aerosol properties with sufficient accuracy. In scenes with non-random cloud distributions (e.g. at the edge of a cloud field) the inversion algorithm can fit the observations, however, here the retrieved real part of the refractive indices of both modes is biased. - Highlights: • An algorithm for retrieval of both aerosol and cloud properties is presented. • Radiative transfer models of 3D, partially clouded scenes are simulated. • Errors introduced in the retrieved aerosol properties are discussed.
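
    As a toy illustration of the Independent Pixel Approximation used above to model partial cloud cover (not the study's aerosol/cloud retrieval algorithm), the sketch below mixes hypothetical clear-sky and overcast reflectances by cloud fraction and retrieves the fraction by least squares; all reflectance values are made up.

    ```python
    # Independent Pixel Approximation (IPA): model a partly cloudy pixel as a linear mix
    # of clear and cloudy radiative-transfer results, then retrieve the cloud fraction.
    import numpy as np

    wavelengths = np.array([0.49, 0.67, 0.87, 1.02])   # micrometres (illustrative bands)
    R_clear  = np.array([0.05, 0.04, 0.03, 0.03])      # hypothetical 1D RT output, aerosol only
    R_cloudy = np.array([0.55, 0.52, 0.50, 0.48])      # hypothetical 1D RT output, overcast

    true_fraction = 0.3
    observed = true_fraction * R_cloudy + (1 - true_fraction) * R_clear
    observed = observed + np.random.default_rng(1).normal(0, 0.002, observed.size)  # measurement noise

    # Least-squares estimate of f from R_obs = f*R_cloudy + (1-f)*R_clear.
    d = R_cloudy - R_clear
    f_hat = np.dot(observed - R_clear, d) / np.dot(d, d)
    print("retrieved cloud fraction: %.3f" % f_hat)
    ```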

  14. Coping with Perceived Ethnic Prejudice on the Gay Scene

    Science.gov (United States)

    Jaspal, Rusi

    2017-01-01

    There has been only cursory research into the sociological and psychological aspects of ethnic/racial discrimination among ethnic minority gay and bisexual men, and none that focuses specifically upon British ethnic minority gay men. This article focuses on perceptions of intergroup relations on the gay scene among young British South Asian gay…

  15. Fire Engine Support and On-scene Time in Prehospital Stroke Care - A Prospective Observational Study.

    Science.gov (United States)

    Puolakka, Tuukka; Väyrynen, Taneli; Erkkilä, Elja-Pekka; Kuisma, Markku

    2016-06-01

    Introduction: On-scene time (OST) has previously been shown to be a significant component of Emergency Medical Services' (EMS') operational delay in acute stroke. Since stroke patients are routinely managed by two-person ambulance crews, increasing the number of personnel available on the scene is a possible method to improve their performance. Hypothesis: Using fire engine crews to support ambulances on the scene in acute stroke is hypothesized to be associated with a shorter OST. All patients transported to hospital as thrombolysis candidates during a one-year study period were registered by the ambulance crews using a case report form that included patient characteristics and operational EMS data. Seventy-seven patients (41 [53%] male; mean age of 68.9 years [SD=15]; mean Glasgow Coma Score [GCS] of 15 points [IQR=14-15]) were eligible for the study. Forty-five cases were managed by ambulance and fire engine crews together and 32 by the ambulance crews alone. The median ambulance response time was seven minutes (IQR=5-10) and the fire engine response time was six minutes (IQR=5-8). The number of EMS personnel on the scene was six (IQR=5-7) and two (IQR=2-2), and the OST was 21 minutes (IQR=18-26) and 24 minutes (IQR=20-32; P=.073) for the groups, respectively. In a subsequent regression analysis, using stroke as the dispatch code was the only variable associated with a short OST. Using fire engine crews to support ambulances in acute stroke care was not associated with a shorter on-scene stay when compared to standard management by two-person ambulance crews alone. Using stroke as the dispatch code was the only variable that was independently associated with a short OST. Puolakka T, Väyrynen T, Erkkilä E-P, Kuisma M. Fire engine support and on-scene time in prehospital stroke care - a prospective observational study. Prehosp Disaster Med. 2016;31(3):278-281.

  16. Perceptual salience affects the contents of working memory during free-recollection of objects from natural scenes

    Directory of Open Access Journals (Sweden)

    Tiziana ePedale

    2015-02-01

    Full Text Available One of the most important issues in the study of cognition is to understand which factors determine the internal representation of the external world. Previous literature has begun to highlight the impact of low-level sensory features (indexed by saliency maps) in driving attention selection, hence increasing the probability that objects presented in complex, natural scenes are successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible from the previous scenes. We then computed how many times the objects located at the peaks of maximal or minimal saliency in the scene (as indexed by a saliency map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects. This indicates that bottom-up sensory salience increases recollection probability and facilitates access to the memory representation at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: the higher the probability of having successfully reported the most salient object in the scene, the lower the number of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing the resources available to encode and then retrieve other (lower-saliency) objects.
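
    A crude stand-in for the saliency-map computation referenced above: a difference-of-Gaussians contrast map on image intensity, from which the maximal- and minimal-saliency locations are read off. This is only a simplified sketch, not the full Itti-Koch model used in the study.

    ```python
    # Locate maximal- and minimal-saliency points with a difference-of-Gaussians contrast map
    # (simplified stand-in for the Itti et al. saliency model).
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(2)
    image = rng.random((240, 320))          # placeholder grey-level scene
    image[100:140, 200:240] += 1.0          # a high-contrast patch

    center   = gaussian_filter(image, sigma=2)
    surround = gaussian_filter(image, sigma=12)
    saliency = np.abs(center - surround)    # center-surround contrast

    peak_yx   = np.unravel_index(np.argmax(saliency), saliency.shape)
    trough_yx = np.unravel_index(np.argmin(saliency), saliency.shape)
    print("maximal-saliency location:", peak_yx)
    print("minimal-saliency location:", trough_yx)
    ```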

  17. Low-threshold Care for Marginalised Hard Drug Users: Marginalisation and Socialisation in the Rotterdam Hard Drug Scene

    NARCIS (Netherlands)

    J. van der Poel (Agnes)

    2009-01-01

    textabstractSince the early 1990s several developments have taken place in the hard drug scene in the Netherlands. Key elements in these developments were harm reduction measures, introduction of crack, open drug scenes, police interventions, drug-related nuisance, low-threshold care facilities and

  18. Cultural adaptation of visual attention: calibration of the oculomotor control system in accordance with cultural scenes.

    Directory of Open Access Journals (Sweden)

    Yoshiyuki Ueda

    Full Text Available Previous studies have found that Westerners are more likely than East Asians to attend to central objects (i.e., analytic attention), whereas East Asians are more likely than Westerners to focus on background objects or context (i.e., holistic attention). Recently, it has been proposed that the physical environment of a given culture influences the cultural form of scene cognition, although the underlying mechanism is as yet unclear. This study examined whether the physical environment influences oculomotor control. Participants saw culturally neutral stimuli (e.g., a dog in a park) as a baseline, followed by Japanese or United States scenes, and finally culturally neutral stimuli again. The results showed that participants primed with Japanese scenes were more likely to move their eyes within a broader area and were less likely to fixate on central objects compared with the baseline, whereas there were no significant differences in the eye movements of participants primed with American scenes. These results suggest that culturally specific patterns in eye movements are partly caused by the physical environment.

  19. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

    Science.gov (United States)

    Grossman, S.

    2015-05-01

    Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene, manually intensive target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
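
    The threshold sweep and detection-rate comparison described above can be illustrated in a few lines of NumPy; the detector scores below are synthetic, and the 1% false-alarm operating point is an arbitrary choice for illustration, not taken from the study.

    ```python
    # Compare two sets of target-detection scores by sweeping thresholds (synthetic scores).
    import numpy as np

    rng = np.random.default_rng(3)

    def rates(target_scores, background_scores, thresholds):
        det = np.array([(target_scores >= t).mean() for t in thresholds])       # detection rate
        fa  = np.array([(background_scores >= t).mean() for t in thresholds])   # false-alarm rate
        return det, fa

    # Hypothetical matched-filter scores: the directed search is slightly better separated.
    directed_t, directed_b = rng.normal(2.0, 1, 200), rng.normal(0, 1, 20000)
    inscene_t,  inscene_b  = rng.normal(1.6, 1, 200), rng.normal(0, 1, 20000)

    thresholds = np.linspace(-3, 5, 200)
    d_det, d_fa = rates(directed_t, directed_b, thresholds)
    i_det, i_fa = rates(inscene_t, inscene_b, thresholds)

    # Detection rate at (approximately) a 1% false-alarm rate for each method.
    print("directed: %.2f" % d_det[np.argmin(np.abs(d_fa - 0.01))])
    print("in-scene: %.2f" % i_det[np.argmin(np.abs(i_fa - 0.01))])
    ```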

  20. Range sections as rock models for intensity rock scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, S

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  1. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis work deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects of very low apparent motion, we consider an analysis over a large temporal interval. We chose to compensate for the dominant motion due to the camera displacement over several consecutive images, in order to form a sub-sequence of images for which the camera appears virtually static. We also developed a new approach allowing us to extract the different layers of a real scene, in order to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global merging step. An appropriate temporal filtering is then applied to the registered image sub-sequence to enhance the signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization framework based on Markov Random Fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for moving object detection has been demonstrated. This circuit could lead to an ASIC design. (author) [fr]
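
    A heavily simplified sketch of the registration-and-differencing idea described above (dominant-motion compensation over a sub-sequence, then change detection); it uses OpenCV feature matching with an affine motion model and omits the layer extraction, temporal filtering and MRF regularization developed in the thesis.

    ```python
    # Register frames to a reference (dominant-motion compensation) and detect residual motion
    # by accumulated differencing. Expects a list of grayscale uint8 frames.
    import cv2
    import numpy as np

    def register_to_reference(ref, frame):
        orb = cv2.ORB_create(1000)
        k1, d1 = orb.detectAndCompute(frame, None)
        k2, d2 = orb.detectAndCompute(ref, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        src = np.float32([k1[m.queryIdx].pt for m in matches])
        dst = np.float32([k2[m.trainIdx].pt for m in matches])
        M, _ = cv2.estimateAffinePartial2D(src, dst)        # dominant (camera) motion model
        h, w = ref.shape[:2]
        return cv2.warpAffine(frame, M, (w, h))

    def detect_moving_pixels(gray_frames, threshold=25):
        ref = gray_frames[0]
        registered = [register_to_reference(ref, f) for f in gray_frames[1:]]
        acc = np.zeros_like(ref, dtype=np.float32)
        for r in registered:                                # accumulate differences over the sub-sequence
            acc += cv2.absdiff(ref, r).astype(np.float32)
        acc /= max(len(registered), 1)
        return (acc > threshold).astype(np.uint8) * 255     # binary mask of moving pixels

    # mask = detect_moving_pixels(frames)   # frames: hypothetical list of grayscale images
    ```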

  2. End User Perceptual Distorted Scenes Enhancement Algorithm Using Partition-Based Local Color Values for QoE-Guaranteed IPTV

    Science.gov (United States)

    Kim, Jinsul

    In this letter, we propose a distorted-scene enhancement algorithm in order to provide an end-user perceptual QoE-guaranteed IPTV service. Block edge detection with a weight factor and a partition-based local color values method are applied to video frames degraded by network transmission errors such as out-of-order delivery, jitter, and packet loss, to improve QoE efficiently. According to the quality metric computed after applying the distorted-scene enhancement algorithm, the distorted scenes are restored better than with other methods.

  3. Category-selective attention modulates unconscious processes in the middle occipital gyrus.

    Science.gov (United States)

    Tu, Shen; Qiu, Jiang; Martens, Ulla; Zhang, Qinglin

    2013-06-01

    Many studies have revealed the top-down modulation (spatial attention, attentional load, etc.) on unconscious processing. However, there is little research about how category-selective attention could modulate the unconscious processing. In the present study, using functional magnetic resonance imaging (fMRI), the results showed that category-selective attention modulated unconscious face/tool processing in the middle occipital gyrus (MOG). Interestingly, MOG effects were of opposed direction for face and tool processes. During unconscious face processing, activation in MOG decreased under the face-selective attention compared with tool-selective attention. This result was in line with the predictive coding theory. During unconscious tool processing, however, activation in MOG increased under the tool-selective attention compared with face-selective attention. The different effects might be ascribed to an interaction between top-down category-selective processes and bottom-up processes in the partial awareness level as proposed by Kouider, De Gardelle, Sackur, and Dupoux (2010). Specifically, we suppose an "excessive activation" hypothesis. Copyright © 2013 Elsevier Inc. All rights reserved.

  4. Striatal and Hippocampal Entropy and Recognition Signals in Category Learning: Simultaneous Processes Revealed by Model-Based fMRI

    Science.gov (United States)

    Davis, Tyler; Love, Bradley C.; Preston, Alison R.

    2012-01-01

    Category learning is a complex phenomenon that engages multiple cognitive processes, many of which occur simultaneously and unfold dynamically over time. For example, as people encounter objects in the world, they simultaneously engage processes to determine their fit with current knowledge structures, gather new information about the objects, and…

  5. The capture and recreation of 3D auditory scenes

    Science.gov (United States)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
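
    As a much-simplified illustration of digitally steering a microphone array toward a 3D direction, the sketch below implements a plain delay-and-sum beamformer with FFT-domain fractional delays; it is not the order-limited spherical-harmonic beamformer developed in the thesis, and the array geometry and signals are synthetic.

    ```python
    # Delay-and-sum beamforming toward a chosen look direction for a spherical array of microphones.
    import numpy as np

    C_SOUND = 343.0     # speed of sound (m/s)
    FS = 16000          # sample rate (Hz)

    rng = np.random.default_rng(4)
    N_MICS = 16
    # Hypothetical microphone positions scattered on a 5 cm sphere.
    phi = rng.uniform(0.0, 2.0 * np.pi, N_MICS)
    theta = np.arccos(rng.uniform(-1.0, 1.0, N_MICS))
    MICS = 0.05 * np.stack([np.sin(theta) * np.cos(phi),
                            np.sin(theta) * np.sin(phi),
                            np.cos(theta)], axis=1)          # (N_MICS, 3)

    def delay_and_sum(signals, look_dir):
        """Steer the array toward look_dir (unit vector toward the source) and average the channels."""
        u = np.asarray(look_dir, dtype=float)
        u /= np.linalg.norm(u)
        tau = MICS @ u / C_SOUND                 # arrival-time advance of the plane wave at each mic (s)
        n = signals.shape[1]
        freqs = np.fft.rfftfreq(n, d=1.0 / FS)
        spectra = np.fft.rfft(signals, axis=1)
        spectra *= np.exp(-2j * np.pi * freqs[None, :] * tau[:, None])   # compensate the advances
        return np.fft.irfft(spectra.mean(axis=0), n=n)

    signals = rng.standard_normal((N_MICS, 1024))            # placeholder recordings
    print(delay_and_sum(signals, [0.0, 0.0, 1.0]).shape)     # (1024,)
    ```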

  6. Navigating the auditory scene: an expert role for the hippocampus.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  7. Signs of depth-luminance covariance in 3-D cluttered scenes.

    Science.gov (United States)

    Scaccia, Milena; Langer, Michael S

    2018-03-01

    In three-dimensional (3-D) cluttered scenes such as foliage, deeper surfaces often are more shadowed and hence darker, and so depth and luminance often have negative covariance. We examined whether the sign of depth-luminance covariance plays a role in depth perception in 3-D clutter. We compared scenes rendered with negative and positive depth-luminance covariance where positive covariance means that deeper surfaces are brighter and negative covariance means deeper surfaces are darker. For each scene, the sign of the depth-luminance covariance was given by occlusion cues. We tested whether subjects could use this sign information to judge the depth order of two target surfaces embedded in 3-D clutter. The clutter consisted of distractor surfaces that were randomly distributed in a 3-D volume. We tested three independent variables: the sign of the depth-luminance covariance, the colors of the targets and distractors, and the background luminance. An analysis of variance showed two main effects: Subjects performed better when the deeper surfaces were darker and when the color of the target surfaces was the same as the color of the distractors. There was also a strong interaction: Subjects performed better under a negative depth-luminance covariance condition when targets and distractors had different colors than when they had the same color. Our results are consistent with a "dark means deep" rule, but the use of this rule depends on the similarity between the color of the targets and color of the 3-D clutter.

  8. A hybrid multiview stereo algorithm for modeling urban scenes.

    Science.gov (United States)

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first of segmenting the initial mesh-based surface using a multi-label Markov Random Field-based model, and second of sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  9. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even more true for complex architectural scenes, which require high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting conditions may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture, and in this way increase the amount of detail contained in the image. The experimental results of this study support this assumption, as they examine state-of-the-art feature detectors applied both to standard dynamic range and HDR images.
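
    A small sketch of the kind of comparison the study reports, assuming a set of bracketed exposures of the same scene: OpenCV's Mertens exposure fusion is used here as a simple stand-in for a full HDR merge and tone-mapping pipeline, and ORB keypoint counts serve as a rough proxy for feature-detector performance. The file names are hypothetical.

    ```python
    # Count ORB features on a single exposure versus an exposure-fused image
    # (Mertens fusion as a simple stand-in for HDR merging + tone mapping).
    import cv2
    import numpy as np

    def feature_count(gray):
        return len(cv2.ORB_create(5000).detect(gray, None))

    def compare(exposure_paths):
        images = [cv2.imread(p) for p in exposure_paths]          # bracketed exposures of the scene
        fusion = cv2.createMergeMertens().process(images)         # float image roughly in [0, 1]
        fused8 = np.clip(fusion * 255, 0, 255).astype(np.uint8)

        mid = images[len(images) // 2]                            # middle exposure as the SDR baseline
        n_sdr = feature_count(cv2.cvtColor(mid, cv2.COLOR_BGR2GRAY))
        n_fused = feature_count(cv2.cvtColor(fused8, cv2.COLOR_BGR2GRAY))
        print("features on middle exposure: %d, on fused image: %d" % (n_sdr, n_fused))

    # compare(["under.jpg", "normal.jpg", "over.jpg"])   # hypothetical bracketed shots
    ```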

  10. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
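
    A minimal sketch of the plane-based depth completion idea, assuming known camera intrinsics and a depth map with NaN at unreliable pixels: reliable points are fitted with a single RANSAC plane and unknown pixels are filled by ray-plane intersection. The adaptive sensor-noise model and the MRF/graph-cut labeling described above are omitted.

    ```python
    # Fill unknown depth pixels from a RANSAC-fitted plane on reliable measurements (illustrative only).
    import numpy as np

    def fit_plane_ransac(points, n_iter=200, tol=0.02, seed=5):
        """RANSAC fit of a plane n.x + d = 0; returns (n, d) with the most inliers."""
        rng = np.random.default_rng(seed)
        best, best_count = None, 0
        for _ in range(n_iter):
            sample = points[rng.choice(len(points), 3, replace=False)]
            n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
            norm = np.linalg.norm(n)
            if norm < 1e-9:
                continue                          # degenerate (collinear) sample
            n, d = n / norm, -np.dot(n / norm, sample[0])
            count = np.count_nonzero(np.abs(points @ n + d) < tol)
            if count > best_count:
                best, best_count = (n, d), count
        return best

    def complete_depth(depth, fx, fy, cx, cy):
        """depth: HxW array with NaN at unreliable pixels; fx, fy, cx, cy are assumed intrinsics."""
        v, u = np.nonzero(~np.isnan(depth))
        z = depth[v, u]
        pts = np.stack([(u - cx) * z / fx, (v - cy) * z / fy, z], axis=1)
        n, d = fit_plane_ransac(pts)
        vv, uu = np.nonzero(np.isnan(depth))
        rays = np.stack([(uu - cx) / fx, (vv - cy) / fy, np.ones(len(uu))], axis=1)
        denom = rays @ n
        t = np.full(len(denom), np.nan)
        ok = np.abs(denom) > 1e-9
        t[ok] = -d / denom[ok]                    # ray-plane intersection depth
        filled = depth.copy()
        filled[vv, uu] = np.where(t > 0, t, np.nan)   # keep intersections in front of the camera only
        return filled
    ```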

  11. TESSITURA OF A CREATIVE PROCESS: AUTOBIOGRAPHY ON SCENE

    Directory of Open Access Journals (Sweden)

    Daniel Santos Costa

    2015-04-01

    Full Text Available This text presents the weavings of a way of making scenic art that uses autobiography to support the creative process. We elucidate some of these weavings of the process while legitimizing the production of knowledge through artistic praxis and sensitive experience. We introduce the concept of autobiography in analogy to the artistic process and then present the possibility of a laboratory setting in which reality and fiction are amalgamated. Keywords: creative process; autobiography; body.

  12. The Sport Expert's Attention Superiority on Skill-related Scene Dynamic by the Activation of left Medial Frontal Gyrus: An ERP and LORETA Study.

    Science.gov (United States)

    He, Mengyang; Qi, Changzhu; Lu, Yang; Song, Amanda; Hayat, Saba Z; Xu, Xia

    2018-05-21

    Extensive studies have shown that sports experts are superior to sports novices in the visual perceptual-cognitive processing of sport scene information; however, its attentional and neural basis has not been thoroughly explored. The present study examined whether a sport expert has an attentional superiority for scene information relevant to his/her sport skill, and explored what factor drives this superiority. To address this problem, EEGs were recorded as participants passively viewed sport scenes (tennis vs. non-tennis) and negative emotional faces in the context of a visual attention task, where pictures of sport scenes or of negative emotional faces randomly followed pictures in which sport scenes and negative emotional faces overlapped. ERP results showed that for experts, the attentional-competition potential elicited by the overlap containing a tennis scene was significantly larger than that evoked by the overlap containing a non-tennis scene, while this effect was absent for novices. LORETA showed that the experts' left medial frontal gyrus (MFG) was significantly more active than the right MFG when processing the overlap containing a tennis scene, but this lateralization effect was not significant in novices. These results indicate that experts have an attentional superiority for skill-related scene information, even when the scene is intruded upon by negative emotional faces, which are strong distractors prone to cause a negativity bias. This superiority is driven by the activation of the left MFG, probably due to self-reference. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that for each band there will be ground objects with similar reflectance to form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm, and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from airborne experiment are used to verify and evaluate the proposed method, and results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, and they are evaluated based on their adaptability to the various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.
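
    A generic moment-matching sketch of scene-based destriping for a push-broom band (per-column gain and offset estimated from the most uniform scan lines); this only illustrates the general idea, not the uniform-region selection algorithm or correction model proposed in the paper.

    ```python
    # Scene-based non-uniformity correction for one band of a push-broom image (moment matching).
    import numpy as np

    def nuc_moment_matching(band, keep_fraction=0.5):
        """band: (n_lines, n_columns); columns are fixed detector elements, lines are scan lines."""
        # Keep the "most uniform" scan lines: those with the lowest spatial variance across columns.
        line_var = band.var(axis=1)
        n_keep = max(int(keep_fraction * band.shape[0]), 2)
        uniform = band[np.argsort(line_var)[:n_keep]]

        col_mean = uniform.mean(axis=0)
        col_std = uniform.std(axis=0) + 1e-12
        gain = col_std.mean() / col_std              # per-column gain
        offset = col_mean.mean() - gain * col_mean   # per-column offset
        return band * gain + offset

    # Synthetic striped band: a smooth scene corrupted by per-column gain/offset errors.
    rng = np.random.default_rng(6)
    scene = np.outer(np.linspace(100.0, 200.0, 512), np.ones(256))
    striped = scene * rng.normal(1.0, 0.05, 256) + rng.normal(0.0, 5.0, 256)
    corrected = nuc_moment_matching(striped)
    print("column-mean std before/after: %.2f / %.2f"
          % (striped.mean(axis=0).std(), corrected.mean(axis=0).std()))
    ```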

  14. Generation of Variations on Theme Music Based on Impressions of Story Scenes Considering Human's Feeling of Music and Stories

    Directory of Open Access Journals (Sweden)

    Kenkichi Ishizuka

    2008-01-01

    Full Text Available This paper describes a system which generates variations on theme music that fit story scenes represented by texts and/or pictures. The inputs to the system are the original theme music and numerical information about the given story scenes. The system varies the melodies, tempos, tones, tonalities, and accompaniments of the given theme music based on impressions of the story scenes. Genetic algorithms (GAs) using modular neural network (MNN) models as fitness functions are applied to music generation in order to reflect the user's feeling of music and stories. The system adjusts the MNN models for each user online. This paper also describes evaluation experiments carried out to confirm whether or not the generated variations on theme music appropriately reflect the impressions of the story scenes.

  15. You’re a good structure, Charlie Brown: The distribution of narrative categories in comic strips

    Science.gov (United States)

    Cohn, Neil

    2014-01-01

    Cohn’s (2013) theory of “Visual Narrative Grammar” argues that sequential images take on categorical roles in a narrative structure, which organizes them into hierarchic constituents analogous to the organization of syntactic categories in sentences. This theory proposes that narrative categories, like syntactic categories, can be identified through diagnostic tests that reveal tendencies for their distribution throughout a sequence. This paper describes four experiments testing these diagnostics to provide support for the validity of these narrative categories. In Experiment 1, participants reconstructed unordered panels of a comic strip into an order that makes sense. Experiment 2 measured viewing times to panels in sequences where the order of panels was reversed. In Experiment 3 participants again reconstructed strips, but also deleted a panel from the sequence. Finally, in Experiment 4 participants identified where a panel had been deleted from a comic strip, and rated that strip’s coherence. Overall, categories had consistent distributional tendencies within experiments and complementary tendencies across experiments. These results point toward an interaction between categorical roles and a global narrative structure. PMID:24646175

  17. Memory for sound, with an ear toward hearing in complex auditory scenes.

    Science.gov (United States)

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  18. Right away: A late, right-lateralized category effect complements an early, left-lateralized category effect in visual search.

    Science.gov (United States)

    Constable, Merryn D; Becker, Stefanie I

    2017-10-01

    According to the Sapir-Whorf hypothesis, learned semantic categories can influence early perceptual processes. A central finding in support of this view is the lateralized category effect-namely, the finding that categorically different colors (e.g., blue and green hues) can be discriminated faster than colors within the same color category (e.g., different hues of green), especially when they are presented in the right visual field. Because the right visual field projects to the left hemisphere, this finding has been popularly couched in terms of the left-lateralization of language. However, other studies have reported bilateral category effects, which has led some researchers to question the linguistic origins of the effect. Here we examined the time course of lateralized and bilateral category effects in the classical visual search paradigm by means of eyetracking and RT distribution analyses. Our results show a bilateral category effect in the manual responses, which is combined of an early, left-lateralized category effect and a later, right-lateralized category effect. The newly discovered late, right-lateralized category effect occurred only when observers had difficulty locating the target, indicating a specialization of the right hemisphere to find categorically different targets after an initial error. The finding that early and late stages of visual search show different lateralized category effects can explain a wide range of previously discrepant findings.

  19. The helpfulness of category labels in semi-supervised learning depends on category structure.

    Science.gov (United States)

    Vong, Wai Keen; Navarro, Daniel J; Perfors, Amy

    2016-02-01

    The study of semi-supervised category learning has generally focused on how additional unlabeled information with given labeled information might benefit category learning. The literature is also somewhat contradictory, sometimes appearing to show a benefit to unlabeled information and sometimes not. In this paper, we frame the problem differently, focusing on when labels might be helpful to a learner who has access to lots of unlabeled information. Using an unconstrained free-sorting categorization experiment, we show that labels are useful to participants only when the category structure is ambiguous and that people's responses are driven by the specific set of labels they see. We present an extension of Anderson's Rational Model of Categorization that captures this effect.

  20. Walk This Way: Improving Pedestrian Agent-Based Models through Scene Activity Analysis

    Directory of Open Access Journals (Sweden)

    Andrew Crooks

    2015-09-01

    Full Text Available Pedestrian movement is woven into the fabric of urban regions. With more people living in cities than ever before, there is an increased need to understand and model how pedestrians utilize and move through space for a variety of applications, ranging from urban planning and architecture to security. Pedestrian modeling has traditionally faced the challenge of collecting data to calibrate and validate such models of pedestrian movement. With the increased availability of mobility datasets from video surveillance and enhanced geolocation capabilities in consumer mobile devices, we are now presented with the opportunity to change the way we build pedestrian models. Within this paper we explore the potential that such information offers for the improvement of agent-based pedestrian models. We introduce a Scene- and Activity-Aware Agent-Based Model (SA2-ABM), a method for harvesting scene activity information in the form of spatiotemporal trajectories, and incorporate this information into our models. In order to assess and evaluate the improvement offered by such information, we carry out a range of experiments using real-world datasets. We demonstrate that the use of real scene information allows us to better inform our model and enhance its predictive capabilities.
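
    A toy illustration of feeding observed scene activity back into an agent-based model: a hypothetical activity map is rasterised from trajectory points, and a random-walking agent prefers cells with higher observed activity. This is not the SA2-ABM implementation described above.

    ```python
    # Minimal activity-aware agent step: movement probabilities follow an observed-activity map.
    import numpy as np

    rng = np.random.default_rng(7)
    GRID = 50

    # Build a hypothetical activity map by rasterising observed (x, y) trajectory points.
    observed = rng.normal(loc=(25, 25), scale=6, size=(5000, 2)).astype(int).clip(0, GRID - 1)
    activity = np.zeros((GRID, GRID))
    np.add.at(activity, (observed[:, 0], observed[:, 1]), 1)
    activity /= activity.sum()

    MOVES = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]

    def step(pos):
        """Choose the next cell with probability proportional to its observed activity."""
        candidates = [((pos[0] + dx) % GRID, (pos[1] + dy) % GRID) for dx, dy in MOVES]
        weights = np.array([activity[c] + 1e-9 for c in candidates])
        return candidates[rng.choice(len(candidates), p=weights / weights.sum())]

    agent = (5, 5)
    for _ in range(200):
        agent = step(agent)
    print("agent position after 200 steps:", agent)
    ```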

  1. The anatomy of the crime scene

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    ... in the way that certain actions and events which have taken place have left a variety of marks and traces that may be read and interpreted. Traces of blood, nails, and hair constitute (DNA) codes which can be decrypted and deciphered, in the same way as traces of gunpowder, bullet holes, and physical damage ... and interpretation. During her investigation, the detective's ability to apply logical reasoning and deductive thinking, as well as to make use of her imagination, is crucial to how the crime scene is first deconstructed and then reconstructed as a setting for the story (that is, the actions of the crime). By decoding ...

  2. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-09-01

    Full Text Available The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction...
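
    Radiometric signature rendering ultimately rests on blackbody physics; as a small generic illustration (not the simulation system's actual rendering equations), the sketch below integrates Planck spectral radiance over a sensor band for a grey-body surface at a given temperature. The band limits and emissivity are arbitrary example values.

    ```python
    # In-band radiance of a grey-body surface: integrate Planck spectral radiance over a sensor band.
    import numpy as np

    H = 6.62607015e-34   # Planck constant (J s)
    C = 2.99792458e8     # speed of light (m/s)
    KB = 1.380649e-23    # Boltzmann constant (J/K)

    def planck_radiance(wavelength_m, temperature_k):
        """Spectral radiance in W / (m^2 sr m)."""
        a = 2.0 * H * C**2 / wavelength_m**5
        b = np.expm1(H * C / (wavelength_m * KB * temperature_k))
        return a / b

    def band_radiance(temperature_k, emissivity, band_um=(3.0, 5.0), n=2000):
        """Approximate in-band radiance (W / (m^2 sr)) assuming a flat spectral emissivity."""
        wl = np.linspace(band_um[0], band_um[1], n) * 1e-6
        return emissivity * np.trapz(planck_radiance(wl, temperature_k), wl)

    print("MWIR radiance of a 300 K surface (emissivity 0.9): %.3f W/(m^2 sr)"
          % band_radiance(300.0, 0.9))
    ```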

  3. Scenes of Violence and Sex in Recent Award-Winning LGBT-Themed Young Adult Novels and the Ideologies They Offer Their Readers

    Science.gov (United States)

    Clark, Caroline T.; Blackburn, Mollie V.

    2016-01-01

    This study examines LGBT-inclusive and queering discourses in five recent award-winning LGBT-themed young adult books. The analysis brought scenes of violence and sex/love scenes to the fore. Violent scenes offered readers messages that LGBT people are either the victims of violence-fueled hatred and fear, or, in some cases, showed a gay person…

  4. Differential processing of natural scenes in typical and atypical Alzheimer disease measured with a saccade choice task

    Directory of Open Access Journals (Sweden)

    Muriel eBoucart

    2014-07-01

    Full Text Available Though atrophy of the medial temporal lobe, including structures (hippocampus and parahippocampal cortex) that support scene perception and the binding of an object to its context, appears early in Alzheimer disease (AD), few studies have investigated scene perception in people with AD. We assessed the ability to find a target object within a natural scene in people with typical AD and in people with atypical AD (posterior cortical atrophy). Pairs of colored photographs were displayed left and right of fixation for one second. Participants were asked to categorize the target (an animal) either by moving their eyes toward the photograph containing the target (saccadic choice task) or by pressing a key corresponding to the location of the target (manual choice task), in separate blocks of trials. For both tasks, performance was compared in two conditions: with isolated objects and with objects in scenes. Patients with atypical AD were more impaired at detecting a target within a scene than people with typical AD, who exhibited a pattern of performance more similar to that of age-matched controls in terms of accuracy, saccade latencies and benefit from contextual information. People with atypical AD benefited less from contextual information in both the saccade and the manual choice tasks, suggesting a higher sensitivity to crowding and deficits in figure/ground segregation in people with lesions in posterior areas of the brain.

  5. Wooing-Scenes in “Richard III”: A Parody of Courtliness?

    Directory of Open Access Journals (Sweden)

    Agnieszka Stępkowska

    2009-11-01

    Full Text Available In the famous opening soliloquy of Shakespeare's Richard III, Richard mightily voices his repugnance for "fair well-spoken days" and their "idle pleasures". He is aware of his physical deformity and believes that it sets him apart from others. He openly admits that he is "not shaped for sportive tricks, nor made to court an amorous looking-glass". Yet his monstrosity perhaps consists more in his aggressive masculine exceptionality than in his deformity. Richard's bullying masculinity manifests itself in his contempt for women. In the wooing scenes we clearly see his pugnacious pursuit of power over effeminate contentment, reducing women to mere objects. Additionally, those scenes are interesting from a psychological viewpoint as they brim over with conflicting emotions. Therefore, the paper explores the two wooing encounters of the play, which belong among the best examples of effective persuasion and of what we may refer to as 'the power of eloquence'.

  6. On the (un)suitability of semantic categories

    DEFF Research Database (Denmark)

    Rijkhoff, Jan

    2009-01-01

    Since Greenberg’s groundbreaking publication on universals of grammar, typologists have used semantic categories to investigate (constraints on) morphological and syntactic variation in the world’s languages, and this tradition has been continued in the WALS project. It is argued here that the employment of semantic categories has some serious drawbacks, however, suggesting that semantic categories, just like formal categories, cannot be equated across languages in morphosyntactic typology. Whereas formal categories are too narrow in that they do not cover all structural variants attested across languages, semantic categories can be too wide, including too many structural variants. Furthermore, it appears that in some major typological studies semantic categories have been confused with formal categories. A possible solution is pointed out: typologists first need to make sure that the forms ...

  7. The Micro-Category Account of Analogy

    Science.gov (United States)

    Green, Adam E.; Fugelsang, Jonathan A.; Kraemer, David J. M.; Dunbar, Kevin N.

    2008-01-01

    Here, we investigate how activation of mental representations of categories during analogical reasoning influences subsequent cognitive processing. Specifically, we present and test the central predictions of the "Micro-Category" account of analogy. This account emphasizes the role of categories in aligning terms for analogical mapping. In a…

  8. Affective salience can reverse the effects of stimulus-driven salience on eye movements in complex scenes

    Directory of Open Access Journals (Sweden)

    Yaqing eNiu

    2012-09-01

    Full Text Available In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: Can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Is there any difference between negative and positive scenes in terms of their influence on attention deployment? Here we examined the impact of affective factors on eye movement behavior, to understand the competition between visual stimulus-driven salience and affective salience and how they affect gaze allocation in complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To examine how eye movement behavior differs for negative, positive, and neutral scenes, we assessed the influence of affective salience in capturing attention according to emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and that overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control.

  9. Literacy in the contemporary scene

    Directory of Open Access Journals (Sweden)

    Angela B. Kleiman

    2014-11-01

    Full Text Available In this paper I examine the relationship between literacy and contemporaneity. I take as a point of departure for my discussion school literacy and its links with literacies in other institutions of the contemporary scene, in order to determine the relation between the contemporary ends of reading and writing (in other words, the meaning of being literate in contemporary society) and the practices and activities effectively realized at school in order to reach those objectives. Using various examples from teaching and learning situations, I discuss digital literacy practices and multimodal texts and multiliteracies from both printed and digital cultures. Throughout, I keep as a background for the discussion the functions and objectives of school literacy and the professional training of teachers who would like to be effective literacy agents in the contemporary world.

  10. Monoidal categories and topological field theory

    CERN Document Server

    Turaev, Vladimir

    2017-01-01

    This monograph is devoted to monoidal categories and their connections with 3-dimensional topological field theories. Starting with basic definitions, it proceeds to the forefront of current research. Part 1 introduces monoidal categories and several of their classes, including rigid, pivotal, spherical, fusion, braided, and modular categories. It then presents deep theorems of Müger on the center of a pivotal fusion category. These theorems are proved in Part 2 using the theory of Hopf monads. In Part 3 the authors define the notion of a topological quantum field theory (TQFT) and construct a Turaev-Viro-type 3-dimensional state sum TQFT from a spherical fusion category. Lastly, in Part 4 this construction is extended to 3-manifolds with colored ribbon graphs, yielding a so-called graph TQFT (and, consequently, a 3-2-1 extended TQFT). The authors then prove the main result of the monograph: the state sum graph TQFT derived from any spherical fusion category is isomorphic to the Reshetikhin-Turaev surgery gr...

  11. The Effect of Walking and Teleportation on Spatial Updating in Virtual and Real Scenes

    Directory of Open Access Journals (Sweden)

    J Vuong

    2013-10-01

    Full Text Available Intuitively, it seems as if we should be able to point accurately to the location of a target object within a room even if we were teleported to a different location and the object removed from view. We measured the precision of pointing to a previously-seen object in a real room, a virtual room with the same dimensions (presented in immersive virtual reality), and a sparse virtual scene consisting only of long thin poles at the same locations as the target object and room corners. Participants viewed the target object from one location, walked to another so that the object passed out of view, then turned in complete darkness to point at the location of the previously-viewed target. In a separate experiment, participants viewed a sparse scene consisting of long thin poles (including a target) and had to point to the location of the absent target after teleportation to a new location within the scene. Pointing precision in this case was dramatically reduced (σ ≈ 34°) compared to the conditions in which participants walked in the real room, virtual room or sparse scene. In the latter three conditions, pointing precision was very similar (σ ≈ 15°) despite the removal of prominent distance cues in the sparse condition. Our results show that spatial updating after teleportation is substantially poorer than when walking between two locations. [Supported by Microsoft Research and Wellcome Trust]

  12. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    As one of the most important tasks in augmented scene modeling, terrain simplification research has gained more and more attention. In this paper, we mainly focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on the analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with the traditional methods.
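
    A minimal sketch of how a local-entropy importance measure for point selection might look is given below. This is an illustrative reconstruction, not the paper's implementation; the heightmap grid, neighborhood radius, and histogram bin count are all assumptions.

```python
import numpy as np

def local_entropy_importance(heightmap, radius=2, n_bins=16):
    """Assign each grid point an importance score equal to the Shannon
    entropy of the elevation distribution in its local neighborhood.
    Points in rough, information-rich terrain get high scores and are
    kept when building the triangulated irregular network (TIN)."""
    rows, cols = heightmap.shape
    importance = np.zeros_like(heightmap, dtype=float)
    for r in range(rows):
        for c in range(cols):
            # Clip the neighborhood window at the terrain borders.
            window = heightmap[max(0, r - radius):r + radius + 1,
                               max(0, c - radius):c + radius + 1]
            # Histogram the local elevations and normalize to probabilities.
            counts, _ = np.histogram(window, bins=n_bins)
            p = counts[counts > 0] / counts.sum()
            importance[r, c] = -np.sum(p * np.log2(p))  # Shannon entropy
    return importance

# Example: keep the top 10% most "informative" points for the TIN.
terrain = np.random.rand(64, 64)          # placeholder heightmap
scores = local_entropy_importance(terrain)
threshold = np.quantile(scores, 0.9)
selected = np.argwhere(scores >= threshold)
```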

  13. Modular categories and 3-manifold invariants

    International Nuclear Information System (INIS)

    Turaev, V.G.

    1992-01-01

    The aim of this paper is to give a concise introduction to the theory of knot invariants and 3-manifold invariants which generalize the Jones polynomial and which may be considered as a mathematical version of the Witten invariants. Such a theory was introduced by N. Reshetikhin and the author on the basis of the theory of quantum groups. Here we use more general algebraic objects, specifically, ribbon and modular categories. Such categories in particular arise as the categories of representations of quantum groups. The notion of modular category, interesting in itself, is closely related to the notion of modular tensor category in the sense of G. Moore and N. Seiberg. For simplicity we restrict ourselves in this paper to the case of closed 3-manifolds.

  14. Campaign Documentaries: Behind-the-Scenes Perspectives Make Useful Teaching Tools

    Science.gov (United States)

    Wolfford, David

    2012-01-01

    Over the last 20 years, independent filmmakers have produced insightful documentaries of high profile political campaigns with behind-the-scenes footage. These documentaries offer inside looks and unique perspectives on electoral politics. This campaign season, consider "The War Room"; "A Perfect Candidate"; "Journeys With George;" "Chisholm '72";…

  15. Binary Format for Scene (BIFS): combining MPEG-4 media to build rich multimedia services

    Science.gov (United States)

    Signes, Julien

    1998-12-01

    In this paper, we analyze the design concepts and some technical details behind the MPEG-4 standard, particularly the scene description layer, commonly known as the Binary Format for Scene (BIFS). We show how MPEG-4 may ease multimedia proliferation by offering a unique, optimized multimedia platform. Lastly, we analyze the potential of the technology for creating rich multimedia applications on various networks and platforms. An e-commerce application example is detailed, highlighting the benefits of the technology. Compression results show how rich applications may be built even on very low bit rate connections.

  16. Product Category Management Issues

    OpenAIRE

    Żukowska, Joanna

    2011-01-01

    The purpose of the paper is to present the issues related to category management. It includes an overview of category management definitions and the correct process of exercising it. Moreover, attention is paid to the advantages of brand management and the benefits the supplier and retailer may obtain in this way. The risk element related to this topic is also presented herein.

  17. Finding biomedical categories in Medline®

    Directory of Open Access Journals (Sweden)

    Yeganova Lana

    2012-10-01

    Full Text Available Abstract Background There are several humanly defined ontologies relevant to Medline. However, Medline is a fast growing collection of biomedical documents which creates difficulties in updating and expanding these humanly defined ontologies. Automatically identifying meaningful categories of entities in a large text corpus is useful for information extraction, construction of machine learning features, and development of semantic representations. In this paper we describe and compare two methods for automatically learning meaningful biomedical categories in Medline. The first approach is a simple statistical method that uses part-of-speech and frequency information to extract a list of frequent nouns from Medline. The second method implements an alignment-based technique to learn frequent generic patterns that indicate a hyponymy/hypernymy relationship between a pair of noun phrases. We then apply these patterns to Medline to collect frequent hypernyms as potential biomedical categories. Results We study and compare these two alternative sets of terms to identify semantic categories in Medline. We find that both approaches produce reasonable terms as potential categories. We also find that there is a significant agreement between the two sets of terms. The overlap between the two methods improves our confidence regarding categories predicted by these independent methods. Conclusions This study is an initial attempt to extract categories that are discussed in Medline. Rather than imposing external ontologies on Medline, our methods allow categories to emerge from the text.
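
    A toy sketch of the second, pattern-based idea described above, harvesting candidate hypernyms (potential categories) from "X such as Y" style phrases, is shown below. The patterns and the sample sentences are illustrative assumptions and not the authors' alignment-learned patterns.

```python
import re
from collections import Counter

# Hand-written hypernym/hyponym patterns in the spirit of Hearst (1992);
# the paper instead learns such patterns automatically from alignments.
NP = r"(?:[\w-]+(?: [\w-]+){0,2})"      # a noun phrase of up to three words
PATTERNS = [
    rf"(?P<hyper>{NP}) such as (?P<hypo>{NP})",
    rf"(?P<hypo>{NP}) and other (?P<hyper>{NP})",
    rf"(?P<hyper>{NP}) including (?P<hypo>{NP})",
]

def extract_candidate_categories(sentences):
    """Count how often each phrase occurs in the hypernym slot; frequent
    hypernyms become candidate biomedical categories."""
    counts = Counter()
    for sent in sentences:
        for pat in PATTERNS:
            for m in re.finditer(pat, sent, flags=re.IGNORECASE):
                counts[m.group("hyper").lower()] += 1
    return counts

corpus = [
    "Cytokines such as interleukin-6 were elevated.",
    "Aspirin and other nonsteroidal anti-inflammatory drugs were administered.",
    "Enzymes including kinases regulate this pathway.",
]
print(extract_candidate_categories(corpus).most_common())
```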

  18. Molecular identification of blow flies recovered from human cadavers during crime scene investigations in Malaysia.

    Science.gov (United States)

    Kavitha, Rajagopal; Nazni, Wasi Ahmad; Tan, Tian Chye; Lee, Han Lim; Isa, Mohd Noor Mat; Azirun, Mohd Sofian

    2012-12-01

    Forensic entomology applies knowledge about insects associated with a decedent in crime scene investigation. It is possible to calculate a minimum postmortem interval (PMI) by determining the age and species of the oldest blow fly larvae feeding on the decedent. This study was conducted in Malaysia to identify maggot specimens collected during crime scene investigations. The usefulness of the molecular and morphological approach to species identification was evaluated in 10 morphologically identified blow fly larvae sampled from 10 different crime scenes in Malaysia. The molecular identification method involved the sequencing of a total length of 2.2 kilobase pairs encompassing the 'barcode' fragments of the mitochondrial cytochrome oxidase I (COI), cytochrome oxidase II (COII) and t-RNA leucine genes. Phylogenetic analyses confirmed the presence of Chrysomya megacephala, Chrysomya rufifacies and Chrysomya nigripes. In addition, one unidentified blow fly species was found based on phylogenetic tree analysis.

  19. Nouns, verbs, objects, actions, and abstractions: local fMRI activity indexes semantics, not lexical categories.

    Science.gov (United States)

    Moseley, Rachel L; Pulvermüller, Friedemann

    2014-05-01

    Noun/verb dissociations in the literature defy interpretation due to the confound between lexical category and semantic meaning; nouns and verbs typically describe concrete objects and actions. Abstract words, pertaining to neither, are a critical test case: dissociations along lexical-grammatical lines would support models purporting lexical category as the principle governing brain organisation, whilst semantic models predict dissociation between concrete words but not abstract items. During fMRI scanning, participants read orthogonalised word categories of nouns and verbs, with or without concrete, sensorimotor meaning. Analysis of inferior frontal/insula, precentral and central areas revealed an interaction between lexical class and semantic factors with clear category differences between concrete nouns and verbs but not abstract ones. Though the brain stores the combinatorial and lexical-grammatical properties of words, our data show that topographical differences in brain activation, especially in the motor system and inferior frontal cortex, are driven by semantics and not by lexical class. Copyright © 2014 The Authors. Published by Elsevier Inc. All rights reserved.

  20. Using MODIS spectral information to classify sea ice scenes for CERES radiance-to-flux inversion

    Science.gov (United States)

    Corbett, J.; Su, W.; Liang, L.; Eitzen, Z.

    2013-12-01

    The Clouds and the Earth's Radiant Energy System (CERES) instruments on NASA's Terra and Aqua satellites measure the shortwave (SW) radiance reflected from the Earth. In order to provide an estimate of the top-of-atmosphere reflected SW flux, we need to know the anisotropy of the radiance reflected from the scene. Sea ice scenes are particularly complex due to the wide range of surface conditions that comprise sea ice. For example, the anisotropy of snow-covered sea ice is quite different from that of sea ice with melt ponds. To provide a consistent scene classification we have developed the Sea Ice Brightness Index (SIBI). The SIBI is defined as one minus the normalized difference between reflectances from the 0.469 micron and 0.858 micron bands of the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. For bright snow-covered sea ice scenes the SIBI value is close to 1.0. As the surface changes to bare ice, melt ponds, etc., the SIBI decreases. For open water the SIBI value is around 0.2-0.3. The SIBI exhibits no dependence on viewing zenith or solar zenith angle, allowing for consistent scene identification. To use the SIBI we classify clear-sky CERES fields-of-view over sea ice into three groups (SIBI≥0.935, 0.935>SIBI≥0.85, and SIBI<0.85) and construct SIBI-based ADMs. Using the second metric, we see a reduction in the latitude/longitude binned mean RMS error between the ADM-predicted radiance and the measured radiance from 8% to 7% in May and from 17% to 12% in July. These improvements suggest that using the SIBI to account for changes in the sea ice surface will lead to improved CERES flux retrievals.
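
    A minimal sketch of the SIBI computation and the three-way scene grouping described above is given below. The band reflectance arrays and variable names are assumptions; this is an illustration, not the CERES processing code.

```python
import numpy as np

def sea_ice_brightness_index(refl_0469, refl_0858):
    """SIBI = 1 - normalized difference of the MODIS 0.469 um and
    0.858 um band reflectances, as defined in the abstract above."""
    ndiff = (refl_0469 - refl_0858) / (refl_0469 + refl_0858)
    return 1.0 - ndiff

def classify_sea_ice_scene(sibi):
    """Bin a clear-sky field-of-view into one of the three SIBI groups."""
    if sibi >= 0.935:
        return "bright snow-covered ice"
    elif sibi >= 0.85:
        return "intermediate (bare ice / partial melt)"
    else:
        return "darker surface (melt ponds / open water mixtures)"

# Example with made-up band reflectances for a handful of footprints.
band_0469 = np.array([0.82, 0.55, 0.07])
band_0858 = np.array([0.80, 0.42, 0.01])
for s in sea_ice_brightness_index(band_0469, band_0858):
    print(round(float(s), 3), classify_sea_ice_scene(float(s)))
```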

  1. Factors affecting athletes' motor behavior after the observation of scenes of cooperation and competition in competitive sport: the effect of sport attitude.

    Science.gov (United States)

    Stefani, Elisa De; De Marco, Doriana; Gentilucci, Maurizio

    2015-01-01

    This study delineated how observing sports scenes of cooperation or competition modulated an action of interaction, in expert athletes, depending on their specific sport attitude. In a kinematic study, athletes were divided into two groups depending on their attitude toward teammates (cooperative or competitive). Participants observed sport scenes of cooperation and competition (basketball, soccer, water polo, volleyball, and rugby) and then they reached for, picked up, and placed an object on the hand of a conspecific (giving action). Mixed-design ANOVAs were carried out on the mean values of grasping-reaching parameters. Data showed that the type of scene observed as well as the athletes' attitude affected reach-to-grasp actions to give. In particular, the cooperative athletes were speeded when they observed scenes of cooperation compared to when they observed scenes of competition. Participants were speeded when executing a giving action after observing actions of cooperation. This occurred only when they had a cooperative attitude. A match between attitude and intended action seems to be a necessary prerequisite for observing an effect of the observed type of scene on the performed action. It is possible that the observation of scenes of competition activated motor strategies which interfered with the strategies adopted by the cooperative participants to execute a cooperative (giving) sequence.

  2. How do Category Managers Manage?

    DEFF Research Database (Denmark)

    Hald, Kim Sundtoft; Sigurbjornsson, Tomas

    2013-01-01

    The aim of this research is to explore the managerial role of category managers in purchasing. A network management perspective is adopted. A case-based research methodology is applied, and three category managers managing a diverse set of component and service categories in a global production firm are observed while providing accounts of their progress and results in meetings. We conclude that the network management classification scheme originally developed by Harland and Knight (2001) and Knight and Harland (2005) is a valuable and fertile theoretical framework for the analysis...

  3. To call a cloud 'cirrus': sound symbolism in names for categories or items.

    Science.gov (United States)

    Ković, Vanja; Sučević, Jelena; Styles, Suzy J

    2017-01-01

    The aim of the present paper is to experimentally test whether sound symbolism has selective effects on labels with different ranges of reference within a simple noun hierarchy. In two experiments, adult participants learned the makeup of two categories of unfamiliar objects ('alien life forms') and were passively exposed to either category labels or item labels in a learning-by-guessing categorization task. Following category training, participants were tested on their visual discrimination of object pairs. For different groups of participants, the labels were either congruent or incongruent with the objects. In Experiment 1, when trained on items with individual labels, participants were worse (made more errors) at detecting visual object mismatches when trained labels were incongruent. In Experiment 2, when participants were trained on items in labelled categories, participants were faster at detecting a match if the trained labels were congruent, and faster at detecting a mismatch if the trained labels were incongruent. This pattern of results suggests that sound symbolism in category labels facilitates later similarity judgments when congruent and discrimination when incongruent, whereas for item labels incongruence generates errors in judgements of visual object differences. These findings reveal that sound-symbolic congruence has a different outcome at different levels of labelling within a noun hierarchy. These effects emerged in the absence of the label itself, indicating subtle but pervasive effects on visual object processing.

  4. Typicality Mediates Performance during Category Verification in Both Ad-Hoc and Well-Defined Categories

    Science.gov (United States)

    Sandberg, Chaleece; Sebastian, Rajani; Kiran, Swathi

    2012-01-01

    Background: The typicality effect is present in neurologically intact populations for natural, ad-hoc, and well-defined categories. Although sparse, there is evidence of typicality effects in persons with chronic stroke aphasia for natural and ad-hoc categories. However, it is unknown exactly what influences the typicality effect in this…

  5. “Getting out of downtown”: a longitudinal study of how street-entrenched youth attempt to exit an inner city drug scene

    Directory of Open Access Journals (Sweden)

    Rod Knight

    2017-05-01

    Full Text Available Abstract Background Urban drug “scenes” have been identified as important risk environments that shape the health of street-entrenched youth. New knowledge is needed to inform policy and programming interventions to help reduce youths’ drug scene involvement and related health risks. The aim of this study was to identify how young people envisioned exiting a local, inner-city drug scene in Vancouver, Canada, as well as the individual, social and structural factors that shaped their experiences. Methods Between 2008 and 2016, we draw on 150 semi-structured interviews with 75 street-entrenched youth. We also draw on data generated through ethnographic fieldwork conducted with a subgroup of 25 of these youth. Results Youth described that, in order to successfully exit Vancouver’s inner-city drug scene, they would need to: (a) secure legitimate employment and/or obtain education or occupational training; (b) distance themselves – both physically and socially – from the urban drug scene; and (c) reduce their drug consumption. As youth attempted to leave the scene, most experienced substantial social and structural barriers (e.g., cycling in and out of jail, the need to access services that are centralized within a place that they are trying to avoid), in addition to managing complex individual health issues (e.g., substance dependence). Factors that increased youth’s capacity to successfully exit the drug scene included access to various forms of social and cultural capital operating outside of the scene, including supportive networks of friends and/or family, as well as engagement with addiction treatment services (e.g., low-threshold access to methadone) to support cessation or reduction of harmful forms of drug consumption. Conclusions Policies and programming interventions that can facilitate young people’s efforts to reduce engagement with Vancouver’s inner-city drug scene are critically needed, including meaningful

  6. Item and response-category functioning of the Persian version of the KIDSCREEN-27: Rasch partial credit model

    Directory of Open Access Journals (Sweden)

    Jafari Peyman

    2012-10-01

    Full Text Available Abstract Background The purpose of the study was to determine whether the Persian version of the KIDSCREEN-27 has the optimal number of response categories to measure health-related quality of life (HRQoL) in children and adolescents. Moreover, we aimed to determine whether all the items contributed adequately to their own domain. Findings The Persian version of the KIDSCREEN-27 was completed by 1083 school children and 1070 of their parents. The Rasch partial credit model (PCM) was used to investigate item statistics and the ordering of response categories. The PCM showed that no item was misfitting. The PCM also revealed that successive response categories for all items were located in the expected order except for category 1 in self- and proxy-reports. Conclusions Although Rasch analysis confirms that all the items belong to their own underlying construct, response categories should be reorganized and evaluated in further studies, especially in children with chronic conditions.
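
    For readers unfamiliar with the model, a minimal sketch of the Rasch partial credit model's category probabilities is given below (person ability theta, item step difficulties delta). The numeric values are illustrative assumptions, not estimates from the KIDSCREEN-27 data.

```python
import numpy as np

def pcm_category_probabilities(theta, step_difficulties):
    """Rasch partial credit model: probability of responding in category
    k = 0..K for an item with step difficulties delta_1..delta_K, given
    person ability theta.  P(k) is proportional to
    exp(sum_{j<=k} (theta - delta_j)), with an empty sum for k = 0."""
    deltas = np.asarray(step_difficulties, dtype=float)
    cumulative = np.concatenate(([0.0], np.cumsum(theta - deltas)))
    expnum = np.exp(cumulative - cumulative.max())  # numerically stabilized
    return expnum / expnum.sum()

# Illustrative 5-category item (4 steps). Disordered thresholds such as
# the category-1 problem reported above would show up as a category whose
# probability curve is never the most likely one at any ability level.
steps = [-1.5, 0.4, -0.2, 1.3]   # hypothetical step difficulties
for theta in (-2.0, 0.0, 2.0):
    probs = pcm_category_probabilities(theta, steps)
    print(theta, np.round(probs, 3))
```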

  7. An investigation into the effect of surveillance drones on textile evidence at crime scenes.

    Science.gov (United States)

    Bucknell, Alistair; Bassindale, Tom

    2017-09-01

    With increasing numbers of police forces using drones for crime scene surveillance, the effect of the drones on trace evidence present needs evaluation. In this investigation, the effects of flying a quadcopter drone at different heights over a controlled scene and of taking off at different distances from the scene were measured. Yarn was placed on a range of floor surfaces and the number of yarns lost or moved from their original position was recorded. It was possible to estimate "safe" heights above, and take-off distances from, the bath mat (2 m and 1 m respectively) and the carpet tile (3 m and 1 m), which were the roughest surfaces. The maximum distances tested, 5 m above and 2 m away, were not far enough to prevent significant disturbance on the other floor surfaces. This report illustrates the importance of considering the impact of introducing new technologies into a forensic workflow on established forensic evidence prior to implementation. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  8. Categories of transactions

    International Nuclear Information System (INIS)

    Anon.

    1991-01-01

    This chapter discusses the types of wholesale sales made by utilities. The Federal Energy Regulatory Commission (FERC), which regulates inter-utility sales, divides these sales into two broad categories: requirements and coordination. A variety of wholesale sales do not fall neatly into either category. For example, power purchased to replace the Three Mile Island outage is in a sense a reliability purchase, since it is bought on a long-term firm basis to meet basic load requirements. However, it does not fit the traditional model of a sale considered as part of each utility's long range planning. In addition, this chapter discusses transmission services, with a particular emphasis on wheeling

  9. Rapid gist perception of meaningful real-life scenes: Exploring individual and gender differences in multiple categorization tasks

    Science.gov (United States)

    Vanmarcke, Steven; Wagemans, Johan

    2015-01-01

    In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies have consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle yet consistent gender differences (women faster than men). PMID:26034569

  10. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  11. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.
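
    As a rough illustration of the 2.5D (color + disparity) input described in the two records above, the following sketch computes a disparity map from a rectified stereo pair with OpenCV and stacks it with the color channels. Parameter values and file names are assumptions, and this is not the authors' DPM training pipeline.

```python
import cv2
import numpy as np

# Load a rectified stereo pair (the left image provides the color channels).
left = cv2.imread("left.png")     # BGR, 3 channels
right = cv2.imread("right.png")

left_gray = cv2.cvtColor(left, cv2.COLOR_BGR2GRAY)
right_gray = cv2.cvtColor(right, cv2.COLOR_BGR2GRAY)

# Semi-global block matching; numDisparities must be a multiple of 16.
sgbm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)
disparity = sgbm.compute(left_gray, right_gray).astype(np.float32) / 16.0

# Normalize disparity to [0, 255] and stack it as a fourth "2.5D" channel.
disp_norm = cv2.normalize(disparity, None, 0, 255, cv2.NORM_MINMAX)
features_2_5d = np.dstack([left, disp_norm.astype(np.uint8)])
print(features_2_5d.shape)  # (H, W, 4): color + depth cue
```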

  12. Perceptual load in different regions of the visual scene and its relevance for driving.

    Science.gov (United States)

    Marciano, Hadas; Yeshurun, Yaffa

    2015-06-01

    The aim of this study was to better understand the role played by perceptual load, at both central and peripheral regions of the visual scene, in driving safety. Attention is a crucial factor in driving safety, and previous laboratory studies suggest that perceptual load is an important factor determining the efficiency of attentional selectivity. Yet, the effects of perceptual load on driving were never studied systematically. Using a driving simulator, we orthogonally manipulated the load levels at the road (central load) and its sides (peripheral load), while occasionally introducing critical events at one of these regions. Perceptual load affected driving performance at both regions of the visual scene. Critically, the effect was different for central versus peripheral load: Whereas load levels on the road mainly affected driving speed, load levels on its sides mainly affected the ability to detect critical events initiating from the roadsides. Moreover, higher levels of peripheral load impaired performance but mainly with low levels of central load, replicating findings with simple letter stimuli. Perceptual load has a considerable effect on driving, but the nature of this effect depends on the region of the visual scene at which the load is introduced. Given the observed importance of perceptual load, authors of future studies of driving safety should take it into account. Specifically, these findings suggest that our understanding of factors that may be relevant for driving safety would benefit from studying these factors under different levels of load at different regions of the visual scene. © 2014, Human Factors and Ergonomics Society.

  13. 77 FR 45378 - Guidelines for Cases Requiring On-Scene Death Investigation

    Science.gov (United States)

    2012-07-31

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1600] Guidelines for Cases... entitled "Guidelines for Cases Requiring On-Scene Death Investigation". The opportunity to provide comments on this document is open to coroner/medical examiner office representatives, law enforcement...

  14. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  15. Examination of the Suicide Characteristics Based on the Scene Investigation in Capital Budapest (2009-2011).

    Science.gov (United States)

    Kristóf, István; Vörös, Krisztina; Marcsa, Boglárka; Váradi-T, Aletta; Kosztya, Sándor; Törő, Klára

    2015-09-01

    Medicolegal evaluation of postmortem findings at the death scene represents an important part of forensic medicine. The aim of this study was to investigate the occurrence and characteristics of suicide events. Data collection was performed from the police scene investigation reports in capital Budapest between 2009 and 2011. In this study, epidemiological parameters such as age, gender, time and place of death, postmortem changes, suicidal method, seasonal and daily distribution, natural diseases, earlier psychiatric treatment, socioeconomic risks, supposed cause of death, final notes, earlier suicide attempts, and suicide ideations were analyzed. There were 892 suicide cases (619 males, 273 females) detected in the investigated period. Hanging, overdose of prescription medications, jumping, use of firearms, drowning, and electrotrauma showed statistical differences among genders (p<0.05). The most common methods of suicide among men and women were hanging (57.4%) and overdose of prescription medications (33%), respectively. Death scene characteristics represent the important factors for forensic medicine. © 2015 American Academy of Forensic Sciences.

  16. The role of forensic botany in crime scene investigation: case report and review of literature.

    Science.gov (United States)

    Aquila, Isabella; Ausania, Francesco; Di Nunzio, Ciro; Serra, Arianna; Boca, Silvia; Capelli, Arnaldo; Magni, Paola; Ricci, Pietrantonio

    2014-05-01

    Management of a crime scene is the process of ensuring accurate and effective collection and preservation of physical evidence. Forensic botany can provide significant supporting evidence during criminal investigations. The aim of this study is to demonstrate the importance of forensic botany at the crime scene. We reported a case of a woman affected by dementia who had disappeared from nursing care and was found dead near the banks of a river that flowed under a railroad. Two possible ways of access to the crime scene were identified and denominated "Path A" and "Path B." Both types of soil and plants were identified. A botanical survey was performed. Some samples of Xanthium orientale subsp. italicum were identified. The woman's fall resulted in external injuries and a vertebral fracture at autopsy. Botanical evidence is important when crime scene and autopsy findings are not sufficient to define the dynamics and the modality of death. © 2014 American Academy of Forensic Sciences.

  17. CATEGORY MANAGEMENT IN THE MANAGEMENT OF MINIMUM ASSORTMENT OF THE PHARMACEUTICAL ORGANIZATION

    Directory of Open Access Journals (Sweden)

    I. F. Samoshchenkova

    2017-01-01

    Full Text Available The main principle of category management is the management of a product category as a separate business unit. Category management directs the activities of the pharmaceutical organization toward meeting consumer requirements and providing customers with maximum benefits, which are expressed in an improved assortment, attractive prices, fewer cases of necessary goods being unavailable, and a simplified purchase process. The article considers the structure of category management and its role in the minimum pharmaceutical assortment, together with a set of theoretical and practical issues affecting the interrelation of the list of vital and essential medicines and the minimum range of medicines. A number of new elements supplementing the concept of category management are offered, and the corresponding generalizations are made. The objective of the research is to study the influence of category management on the structure of management of the minimum assortment of medicines of the pharmaceutical organization. Materials and methods. In the course of solving the set tasks, methods of marketing and economic-mathematical analysis were used. Results and discussion. Analysis of the assortment list of medicines for medical application, which is obligatory for pharmaceutical enterprises of all forms of ownership, revealed that this assortment list is based on the List of Vital, Essential and Necessary (VEN) Drugs. The results of the analysis of the obligatory assortment list from the position of internal category management showed that 77.45% are medicines from the VEN list and 46.08% are non-prescription medicines. It follows that a sound, profitable pricing policy can be conducted with only 22.55% of the list, and merchandising standards can be developed for 46.05%. Category management gives the pharmaceutical organization an opportunity to specify its competitive strategy and to

  18. Human matching performance of genuine crime scene latent fingerprints.

    Science.gov (United States)

    Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J

    2014-02-01

    There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints per se, but a result of their ability to identify the highly similar but nonmatching fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees, despite their lack of qualification and an average of 3.5 years' experience, performed about as accurately as qualified experts who had an average of 17.5 years' experience. New trainees, despite their 5-week, full-time training course or their 6 months of experience, were no better than novices at discriminating matching and similar nonmatching prints; they were just more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance. The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in
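
    Since the study analyses discrimination accuracy and response bias within a signal detection framework, a small sketch of the standard sensitivity (d') and criterion (c) computations is given below; the hit and false-alarm counts are made-up numbers, not the study's data.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Standard equal-variance signal detection measures.
    d' = z(hit rate) - z(false-alarm rate); a positive criterion c
    indicates a conservative bias (fewer 'match' responses overall)."""
    # Log-linear correction avoids infinite z-scores for perfect rates.
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    d_prime = norm.ppf(hit_rate) - norm.ppf(fa_rate)
    criterion = -0.5 * (norm.ppf(hit_rate) + norm.ppf(fa_rate))
    return d_prime, criterion

# Hypothetical examiner: very accurate, but misses some genuine matches
# rather than risking a false identification (conservative bias, c > 0).
d, c = sdt_measures(hits=40, misses=8, false_alarms=1, correct_rejections=47)
print(f"d' = {d:.2f}, c = {c:.2f}")
```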

  19. Enhanced recognition memory in grapheme-color synaesthesia for different categories of visual stimuli.

    Science.gov (United States)

    Ward, Jamie; Hovard, Peter; Jones, Alicia; Rothen, Nicolas

    2013-01-01

    Memory has been shown to be enhanced in grapheme-color synaesthesia, and this enhancement extends to certain visual stimuli (that don't induce synaesthesia) as well as stimuli comprised of graphemes (which do). Previous studies have used a variety of testing procedures to assess memory in synaesthesia (e.g., free recall, recognition, associative learning) making it hard to know the extent to which memory benefits are attributable to the stimulus properties themselves, the testing method, participant strategies, or some combination of these factors. In the first experiment, we use the same testing procedure (recognition memory) for a variety of stimuli (written words, non-words, scenes, and fractals) and also check which memorization strategies were used. We demonstrate that grapheme-color synaesthetes show enhanced memory across all these stimuli, but this is not found for a non-visual type of synaesthesia (lexical-gustatory). In the second experiment, the memory advantage for scenes is explored further by manipulating the properties of the old and new images (changing color, orientation, or object presence). Again, grapheme-color synaesthetes show a memory advantage for scenes across all manipulations. Although recognition memory is generally enhanced in this study, the largest effects were found for abstract visual images (fractals) and scenes for which color can be used to discriminate old/new status.

  20. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
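
    To make the kind of computation being accelerated concrete, here is a minimal point-source Fresnel CGH sketch. It is a plain NumPy reference, not the proposed GPU look-up-table method; the wavelength, pixel pitch, and object points are illustrative assumptions.

```python
import numpy as np

WAVELENGTH = 532e-9          # assumed green laser wavelength (m)
PITCH = 8e-6                 # assumed SLM pixel pitch (m)
WIDTH, HEIGHT = 1920, 1080   # hologram resolution from the abstract

def fresnel_cgh(points):
    """Accumulate the Fresnel zone-plate contribution of each 3D object
    point (x, y, z, amplitude) and return the phase-only hologram."""
    xs = (np.arange(WIDTH) - WIDTH / 2) * PITCH
    ys = (np.arange(HEIGHT) - HEIGHT / 2) * PITCH
    X, Y = np.meshgrid(xs, ys)
    k = 2 * np.pi / WAVELENGTH
    field = np.zeros((HEIGHT, WIDTH), dtype=np.complex128)
    for x0, y0, z0, amp in points:
        # Paraxial (Fresnel) approximation of the spherical wave phase.
        phase = k * ((X - x0) ** 2 + (Y - y0) ** 2) / (2 * z0)
        field += amp * np.exp(1j * phase)
    return np.angle(field)  # phase-only CGH pattern

# Three hypothetical object points at different depths.
object_points = [(0.0, 0.0, 0.20, 1.0),
                 (1e-3, -1e-3, 0.25, 0.8),
                 (-2e-3, 1e-3, 0.30, 0.6)]
hologram = fresnel_cgh(object_points)
```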