WorldWideScience

Sample records for scene complexity influence

  1. Acute stress influences the discrimination of complex scenes and complex faces in young healthy men.

    Science.gov (United States)

    Paul, M; Lech, R K; Scheil, J; Dierolf, A M; Suchan, B; Wolf, O T

    2016-04-01

    The stress-induced release of glucocorticoids has been demonstrated to influence hippocampal functions via the modulation of specific receptors. At the behavioral level, stress is known to influence hippocampus-dependent long-term memory. In recent years, studies have consistently associated the hippocampus with the non-mnemonic perception of scenes, while adjacent regions in the medial temporal lobe were associated with the perception of objects and faces. So far it is not known whether and how stress influences non-mnemonic perceptual processes. In a behavioral study, fifty male participants were subjected either to the stressful socially evaluated cold-pressor test or to a non-stressful control procedure before they completed a visual discrimination task comprising scenes and faces. The complexity of the face and scene stimuli was manipulated to create easy and difficult conditions. A significant three-way interaction between stress, stimulus type, and complexity was found. Stressed participants tended to commit more errors in the complex scenes condition. For complex faces, a descriptive tendency in the opposite direction (fewer errors under stress) was observed. As a result, the difference between the number of errors for scenes and errors for faces was significantly larger in the stress group. These results indicate that, beyond the effects of stress on long-term memory, stress influences the discrimination of spatial information, especially when the perception is characterized by a high complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Scene complexity: influence on perception, memory, and development in the medial temporal lobe

    Directory of Open Access Journals (Sweden)

    Xiaoqian J Chai

    2010-03-01

    Full Text Available Regions in the medial temporal lobe (MTL) and prefrontal cortex (PFC) are involved in memory formation for scenes in both children and adults. The development in children and adolescents of successful memory encoding for scenes has been associated with increased activation in PFC, but not MTL, regions. However, evidence suggests that a functional subregion of the MTL that supports scene perception, located in the parahippocampal gyrus (PHG), goes through a prolonged maturation process. Here we tested the hypothesis that maturation of scene perception supports the development of memory for complex scenes. Scenes were characterized by their levels of complexity defined by the number of unique object categories depicted in the scene. Recognition memory improved with age, in participants ages 8-24, for high, but not low, complexity scenes. High-complexity compared to low-complexity scenes activated a network of regions including the posterior PHG. The difference in activations for high- versus low-complexity scenes increased with age in the right posterior PHG. Finally, activations in right posterior PHG were associated with age-related increases in successful memory formation for high-, but not low-, complexity scenes. These results suggest that functional maturation of the right posterior PHG plays a critical role in the development of enduring long-term recollection for high-complexity scenes.

  3. Affective salience can reverse the effects of stimulus-driven salience on eye movements in complex scenes

    Directory of Open Access Journals (Sweden)

    Yaqing Niu

    2012-09-01

    Full Text Available In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: Can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Is there any difference between negative and positive scenes in terms of their influence on attention deployment? Here we examined the impact of affective factors on eye movement behavior, to understand the competition between visual stimulus-driven salience and affective salience and how they affect gaze allocation in complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To examine how eye movement behaviour differs for negative, positive, and neutral scenes, we examined the influence of affective salience in capturing attention according to emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control.

  4. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and the Geographic Information System (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions are usually different between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurements is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the
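
    The abstract describes layered matching (scene, micro-spatial-scene, hole) with entity matching from top to bottom. The sketch below is a minimal, assumed illustration of that idea rather than the authors' metric: regions and holes are reduced to areas only, unequal set sizes are handled by padding the assignment problem, and the layer weights and toy "lake" data are invented for illustration.

    ```python
    # Minimal hierarchical-matching sketch (assumed simplification, not the paper's metric).
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def size_similarity(a, b):
        """Similarity of two areas in [0, 1]."""
        return min(a, b) / max(a, b) if max(a, b) > 0 else 1.0

    def region_similarity(region_a, region_b):
        """Match the holes of two regions (bottom layer), then combine with outline size."""
        holes_a, holes_b = region_a["holes"], region_b["holes"]
        outline_sim = size_similarity(region_a["area"], region_b["area"])
        if not holes_a and not holes_b:
            return outline_sim
        n = max(len(holes_a), len(holes_b))
        cost = np.ones((n, n))                      # padded: unmatched holes contribute 0
        for i, ha in enumerate(holes_a):
            for j, hb in enumerate(holes_b):
                cost[i, j] = 1.0 - size_similarity(ha, hb)
        rows, cols = linear_sum_assignment(cost)
        hole_sim = (1.0 - cost[rows, cols]).sum() / n
        return 0.5 * outline_sim + 0.5 * hole_sim   # equal layer weights (assumption)

    def scene_similarity(scene_a, scene_b):
        """Match the holed-regions of two scenes (top layer) and aggregate."""
        n = max(len(scene_a), len(scene_b))
        cost = np.ones((n, n))
        for i, ra in enumerate(scene_a):
            for j, rb in enumerate(scene_b):
                cost[i, j] = 1.0 - region_similarity(ra, rb)
        rows, cols = linear_sum_assignment(cost)
        return (1.0 - cost[rows, cols]).sum() / n

    # Toy stand-in for two epochs of a lake scene (areas and hole areas are made up).
    lakes_1986 = [{"area": 82.1, "holes": [1.2, 0.4]}, {"area": 59.6, "holes": [2.0]}]
    lakes_2015 = [{"area": 80.7, "holes": [1.1]},      {"area": 60.2, "holes": [2.1, 0.3]}]
    print(scene_similarity(lakes_1986, lakes_2015))
    ```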

  5. SAR Raw Data Generation for Complex Airport Scenes

    Directory of Open Access Journals (Sweden)

    Jia Li

    2014-10-01

    Full Text Available The method of generating the SAR raw data of complex airport scenes is studied in this paper. A formulation of the SAR raw signal model of airport scenes is given. By generating the echoes from the background, aircraft, and buildings separately, the SAR raw data for a unified SAR imaging geometry are obtained from their vector addition. The multipath scattering and the shadowing between the background and the different ground covers of standing airplanes and buildings are analyzed. Based on the scattering characteristics, coupling scattering models and SAR raw data models of different targets are given, respectively. A procedure is given to generate the SAR raw data of airport scenes. The SAR images from the simulated raw data demonstrate the validity of the proposed method.
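
    As a minimal sketch of the "vector addition" of echoes described above (not the paper's simulator), the snippet below synthesizes stripmap SAR raw data for a few idealized point scatterers standing in for background, aircraft, and building returns and sums them coherently. All radar parameters and target positions are illustrative assumptions.

    ```python
    # Toy point-target SAR raw-data generation (assumed parameters, coherent sum of echoes).
    import numpy as np

    c, fc, Kr, Tp = 3e8, 5.3e9, 1e12, 2e-6            # speed of light, carrier, chirp rate, pulse length
    lam = c / fc
    v, R0 = 150.0, 10_000.0                            # platform speed [m/s], reference range [m]
    eta = np.linspace(-1.0, 1.0, 512)                  # slow time (azimuth) [s]
    tau = 2 * R0 / c + np.linspace(-3e-6, 3e-6, 1024)  # fast time (range) [s]

    # (range offset [m], azimuth position [m], reflectivity) for background/aircraft/building stand-ins
    targets = [(0.0, 0.0, 1.0), (40.0, 25.0, 5.0), (-30.0, -60.0, 3.0)]

    raw = np.zeros((eta.size, tau.size), dtype=complex)
    for dr, x, sigma in targets:
        R = np.sqrt((R0 + dr) ** 2 + (v * eta - x) ** 2)   # instantaneous slant range
        delay = 2 * R[:, None] / c
        t = tau[None, :] - delay
        envelope = np.abs(t) < Tp / 2                      # rectangular pulse window
        phase = -4j * np.pi * R[:, None] / lam + 1j * np.pi * Kr * t ** 2
        raw += sigma * envelope * np.exp(phase)            # echoes from all targets add coherently

    print(raw.shape)
    ```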

  6. Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.

    Science.gov (United States)

    Kraft, James M; Maloney, Shannon I; Brainard, David H

    2002-01-01

    Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.
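
    The abstract does not state the form of the constancy index; one commonly used Brunswik-ratio-style formulation, given here only as an assumed illustration, compares the shift of the observer's achromatic setting across illuminants to the shift that perfect constancy would predict:

    ```latex
    % a_k : chromaticity of the observer's achromatic setting under illuminant k
    % p_k : chromaticity predicted under illuminant k by perfect constancy
    % (assumed common form; the study's exact index is not given in the abstract)
    CI \;=\; \frac{\lVert a_2 - a_1 \rVert}{\lVert p_2 - p_1 \rVert},
    \qquad CI = 0 \ \text{(no constancy)}, \quad CI = 1 \ \text{(perfect constancy)}.
    ```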

  7. The influence of color on emotional perception of natural scenes.

    Science.gov (United States)

    Codispoti, Maurizio; De Cesarei, Andrea; Ferrari, Vera

    2012-01-01

    Is color a critical factor when processing the emotional content of natural scenes? Under challenging perceptual conditions, such as when pictures are briefly presented, color might facilitate scene segmentation and/or function as a semantic cue via association with scene-relevant concepts (e.g., red and blood/injury). To clarify the influence of color on affective picture perception, we compared the late positive potentials (LPP) to color versus grayscale pictures, presented for very brief (24 ms) and longer (6 s) exposure durations. Results indicated that removing color information had no effect on the affective modulation of the LPP, regardless of exposure duration. These findings imply that the recognition of the emotional content of scenes, even when presented very briefly, does not critically rely on color information. Copyright © 2011 Society for Psychophysiological Research.

  8. Memory for sound, with an ear toward hearing in complex auditory scenes.

    Science.gov (United States)

    Snyder, Joel S; Gregg, Melissa K

    2011-10-01

    An area of research that has experienced recent growth is the study of memory during perception of simple and complex auditory scenes. These studies have provided important information about how well auditory objects are encoded in memory and how well listeners can notice changes in auditory scenes. These are significant developments because they present an opportunity to better understand how we hear in realistic situations, how higher-level aspects of hearing such as semantics and prior exposure affect perception, and the similarities and differences between auditory perception and perception in other modalities, such as vision and touch. The research also poses exciting challenges for behavioral and neural models of how auditory perception and memory work.

  9. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    Science.gov (United States)

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter, and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which either contained an inconsistent (e.g., soap on a breakfast table) or consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed the semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Action adaptation during natural unfolding social scenes influences action recognition and inferences made about actor beliefs.

    Science.gov (United States)

    Keefe, Bruce D; Wincenciak, Joanna; Jellema, Tjeerd; Ward, James W; Barraclough, Nick E

    2016-07-01

    When observing another individual's actions, we can both recognize their actions and infer their beliefs concerning the physical and social environment. The extent to which visual adaptation influences action recognition and conceptually later stages of processing involved in deriving the belief state of the actor remains unknown. To explore this we used virtual reality (life-size photorealistic actors presented in stereoscopic three dimensions) to see how visual adaptation influences the perception of individuals in naturally unfolding social scenes at increasingly higher levels of action understanding. We presented scenes in which one actor picked up boxes (of varying number and weight), after which a second actor picked up a single box. Adaptation to the first actor's behavior systematically changed perception of the second actor. Aftereffects increased with the duration of the first actor's behavior, declined exponentially over time, and were independent of view direction. Inferences about the second actor's expectation of box weight were also distorted by adaptation to the first actor. Distortions in action recognition and actor expectations did not, however, extend across different actions, indicating that adaptation is not acting at an action-independent abstract level but rather at an action-dependent level. We conclude that although adaptation influences more complex inferences about belief states of individuals, this is likely to be a result of adaptation at an earlier action recognition stage rather than adaptation operating at a higher, more abstract level in mentalizing or simulation systems.

  11. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two-stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and for understanding the human-autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low-level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.
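
    As a toy illustration of the two-stage logic described here (object features, then objects present, then scene label), and not of the dynamic logic algorithm itself, the following sketch classifies synthetic feature vectors against an assumed object vocabulary and then picks the scene whose expected object set best overlaps the detections. All prototypes, rules, and numbers are invented.

    ```python
    # Toy two-stage recognition: features -> objects -> scene (assumed vocabularies and values).
    import numpy as np

    OBJECT_PROTOTYPES = {            # feature prototypes per object class (made up)
        "car":      np.array([0.9, 0.1, 0.2]),
        "person":   np.array([0.2, 0.8, 0.3]),
        "terminal": np.array([0.3, 0.2, 0.9]),
    }
    SCENE_RULES = {                  # scene label -> objects expected in it (made up)
        "parking_lot": {"car", "person"},
        "server_room": {"terminal"},
    }

    def recognize_objects(feature_vectors):
        """Stage 1: label each detected blob by its nearest object prototype."""
        labels = set()
        for f in feature_vectors:
            labels.add(min(OBJECT_PROTOTYPES,
                           key=lambda k: np.linalg.norm(OBJECT_PROTOTYPES[k] - f)))
        return labels

    def recognize_scene(objects_present):
        """Stage 2: pick the scene whose expected object set overlaps the detections most."""
        return max(SCENE_RULES, key=lambda s: len(SCENE_RULES[s] & objects_present))

    blobs = [np.array([0.85, 0.15, 0.25]), np.array([0.25, 0.75, 0.35])]
    objs = recognize_objects(blobs)
    print(objs, "->", recognize_scene(objs))
    ```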

  12. Cross-cultural differences in item and background memory: examining the influence of emotional intensity and scene congruency.

    Science.gov (United States)

    Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H

    2018-07-01

    After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.

  13. The probability of object-scene co-occurrence influences object identification processes.

    Science.gov (United States)

    Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B

    2017-07-01

    Contextual information allows the human brain to make predictions about the identity of objects that might be seen, and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect, suggesting that the brain stores information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition, as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (relying more on top-down effects) or null (relying more on bottom-up influences). The posterior positivity could index error monitoring aimed at ensuring that no false information is integrated into mental representations of the world.

  14. The role of memory for visual search in scenes.

    Science.gov (United States)

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  15. The Influence of Familiarity on Affective Responses to Natural Scenes

    Science.gov (United States)

    Sanabria Z., Jorge C.; Cho, Youngil; Yamanaka, Toshimasa

    This kansei study explored how familiarity with image-word combinations influences affective states. Stimuli were obtained from Japanese print advertisements (ads), and consisted of images (e.g., natural-scene backgrounds) and their corresponding headlines (advertising copy). Initially, a group of subjects evaluated their level of familiarity with images and headlines independently, and stimuli were filtered based on the results. In the main experiment, a different group of subjects rated their pleasure and arousal to, and familiarity with, image-headline combinations. The Self-Assessment Manikin (SAM) scale was used to evaluate pleasure and arousal, and a bipolar scale was used to evaluate familiarity. The results showed a high correlation between familiarity and pleasure, but low correlation between familiarity and arousal. The characteristics of the stimuli, and their effect on the variables of pleasure, arousal and familiarity, were explored through ANOVA. It is suggested that, in the case of natural-scene ads, familiarity with image-headline combinations may increase the pleasure response to the ads, and that certain components in the images (e.g., water) may increase arousal levels.

  16. License plate localization in complex scenes based on oriented FAST and rotated BRIEF feature

    Science.gov (United States)

    Wang, Ran; Xia, Yuanchun; Wang, Guoyou; Tian, Jiangmin

    2015-09-01

    Within intelligent transportation systems, fast and robust license plate localization (LPL) in complex scenes is still a challenging task. Real-world scenes introduce complexities such as variation in license plate size and orientation, uneven illumination, background clutter, and nonplate objects. These complexities lead to poor performance using traditional LPL features, such as color, edge, and texture. Recently, state-of-the-art performance in LPL has been achieved by applying the scale invariant feature transform (SIFT) descriptor to LPL for visual matching. However, for applications that require fast processing, such as mobile phones, SIFT does not meet the efficiency requirement due to its relatively slow computational speed. To address this problem, a new approach for LPL, which uses the oriented FAST and rotated BRIEF (ORB) feature detector, is proposed. The feature extraction in ORB is much more efficient than in SIFT and is invariant to scale and grayscale as well as rotation changes, and hence is able to provide superior performance for LPL. The potential regions of a license plate are detected by considering spatial and color information simultaneously, which is different from previous approaches. The experimental results on a challenging dataset demonstrate the effectiveness and efficiency of the proposed method.
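
    ORB is available in OpenCV, so the matching stage this abstract builds on can be sketched directly. The snippet below covers only that stage (binary descriptors matched with Hamming distance and a crude bounding box around the matched points); it is not the paper's full pipeline with color/spatial candidate-region detection, and the file names are placeholders.

    ```python
    # Rough ORB matching sketch for plate localization (assumed file names, not the paper's pipeline).
    import cv2
    import numpy as np

    scene = cv2.imread("street.jpg", cv2.IMREAD_GRAYSCALE)
    template = cv2.imread("plate_template.jpg", cv2.IMREAD_GRAYSCALE)

    orb = cv2.ORB_create(nfeatures=1000)
    kp_t, des_t = orb.detectAndCompute(template, None)
    kp_s, des_s = orb.detectAndCompute(scene, None)

    # Hamming distance is the natural metric for ORB's binary descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_s), key=lambda m: m.distance)

    # The cluster of best-matched scene keypoints approximates the plate region.
    pts = np.float32([kp_s[m.trainIdx].pt for m in matches[:30]])
    x, y, w, h = cv2.boundingRect(pts)
    print("candidate plate region:", x, y, w, h)
    ```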

  17. Associative Processing Is Inherent in Scene Perception

    Science.gov (United States)

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes providing better understanding of the functional

  18. Finding the Cause: Verbal Framing Helps Children Extract Causal Evidence Embedded in a Complex Scene

    Science.gov (United States)

    Butler, Lucas P.; Markman, Ellen M.

    2012-01-01

    In making causal inferences, children must both identify a causal problem and selectively attend to meaningful evidence. Four experiments demonstrate that verbally framing an event ("Which animals make Lion laugh?") helps 4-year-olds extract evidence from a complex scene to make accurate causal inferences. Whereas framing was unnecessary when…

  19. Scene construction in schizophrenia.

    Science.gov (United States)

    Raffard, Stéphane; D'Argembeau, Arnaud; Bayard, Sophie; Boulenger, Jean-Philippe; Van der Linden, Martial

    2010-09-01

    Recent research has revealed that schizophrenia patients are impaired in remembering the past and imagining the future. In this study, we examined patients' ability to engage in scene construction (i.e., the process of mentally generating and maintaining a complex and coherent scene), which is a key part of retrieving past experiences and episodic future thinking. 24 participants with schizophrenia and 25 healthy controls were asked to imagine new fictitious experiences and described their mental representations of the scenes in as much detail as possible. Descriptions were scored according to various dimensions (e.g., sensory details, spatial reference), and participants also provided ratings of their subjective experience when imagining the scenes (e.g., their sense of presence, the perceived similarity of imagined events to past experiences). Imagined scenes contained fewer phenomenological details (d = 1.11) and were more fragmented (d = 2.81) in schizophrenia patients compared to controls. Furthermore, positive symptoms were positively correlated with the sense of presence (r = .43) and the perceived similarity of imagined events to past episodes (r = .47), whereas negative symptoms were negatively related to the overall richness of the imagined scenes (r = -.43). The results suggest that schizophrenia patients' impairments in remembering the past and imagining the future are, at least in part, due to deficits in the process of scene construction. The relationships between the characteristics of imagined scenes and positive and negative symptoms could be related to reality monitoring deficits and difficulties in strategic retrieval processes, respectively. Copyright 2010 APA, all rights reserved.

  20. Semantic guidance of eye movements in real-world scenes

    OpenAIRE

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movemen...

  1. Semantic guidance of eye movements in real-world scenes.

    Science.gov (United States)

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.
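
    A minimal sketch of the LSA step described above, under the assumption of a toy corpus in which each "document" is the list of object labels annotated in one scene: a truncated SVD of the label-by-scene count matrix yields low-rank label vectors whose cosine similarities to the currently fixated object's label are the values a semantic saliency map would be built from. The scenes and labels below are invented stand-ins, not LabelMe data.

    ```python
    # LSA-style semantic similarity between object labels (toy data, assumed simplification).
    import numpy as np
    from sklearn.decomposition import TruncatedSVD
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.metrics.pairwise import cosine_similarity

    # One "document" per scene: the object labels it contains (made-up examples).
    scenes = ["car road tree sign person",
              "sofa lamp table book person",
              "car sign road building person"]
    vec = CountVectorizer()
    X = vec.fit_transform(scenes)                  # scenes x labels count matrix

    # LSA: low-rank embedding of labels from the transposed count matrix.
    lsa = TruncatedSVD(n_components=2, random_state=0)
    label_vectors = lsa.fit_transform(X.T)         # labels x components
    labels = list(vec.get_feature_names_out())

    sim = cosine_similarity(label_vectors)
    fixated = labels.index("car")                  # pretend "car" is currently fixated
    for label, s in sorted(zip(labels, sim[fixated]), key=lambda t: -t[1]):
        print(f"{label:10s} {s:+.2f}")             # high values = semantically close to 'car'
    ```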

  2. The Influence of Color on the Perception of Scene Gist

    Science.gov (United States)

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  3. Attention Switching during Scene Perception: How Goals Influence the Time Course of Eye Movements across Advertisements

    Science.gov (United States)

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-01-01

    Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a…

  4. Multi- and hyperspectral scene modeling

    Science.gov (United States)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use a public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single and multiple reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.
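
    Because POV-Ray is driven by plain-text scene scripts, the per-band reflectance changes described above can be automated from any scripting language. The sketch below is only an assumed illustration of that workflow: a Python loop writes one tiny .pov file per band, varying a single leaf reflectance value; the band names, reflectance numbers, and scene contents are made up.

    ```python
    # Generate one minimal POV-Ray scene file per spectral band (assumed toy scene and values).
    leaf_reflectance = {"band_0550nm": 0.10, "band_0670nm": 0.05, "band_0800nm": 0.45}

    SCENE_TEMPLATE = """\
    #version 3.7;
    global_settings {{ radiosity {{ }} }}          // enable multiple reflections between surfaces
    camera {{ location <0, 3, -8> look_at <0, 1, 0> }}
    light_source {{ <10, 20, -10> rgb 1 }}
    plane {{ y, 0 pigment {{ color rgb {soil} }} }}
    sphere {{ <0, 1.5, 0>, 1                       // stand-in for a canopy element
      pigment {{ color rgb {leaf} }}
      finish {{ diffuse 0.9 }}
    }}
    """

    for band, rho in leaf_reflectance.items():
        with open(f"canopy_{band}.pov", "w") as f:
            f.write(SCENE_TEMPLATE.format(leaf=rho, soil=0.2))
        # each file could then be rendered separately, e.g.: povray canopy_band_0550nm.pov
    ```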

  5. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector is intended to naturally utilize the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves the deep CNN features and outperforms existing Fisher kernel-based features.

  6. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  7. Band registration of tuneable frame format hyperspectral UAV imagers in complex scenes

    Science.gov (United States)

    Honkavaara, Eija; Rosnell, Tomi; Oliveira, Raquel; Tommaselli, Antonio

    2017-12-01

    A recent revolution in miniaturised sensor technology has provided markets with novel hyperspectral imagers operating on the frame format principle. In the case of unmanned aerial vehicle (UAV) based remote sensing, the frame format technology is highly attractive in comparison to the commonly utilised pushbroom scanning technology, because it offers better stability and the possibility to capture stereoscopic data sets, bringing an opportunity for 3D hyperspectral object reconstruction. Tuneable filters are one of the approaches for capturing multi- or hyperspectral frame images. The individual bands are not aligned when operating a sensor based on tuneable filters from a mobile platform, such as a UAV, because the full spectrum recording is carried out in a time-sequential manner. The objective of this investigation was to study the aspects of band registration of an imager based on tuneable filters and to develop a rigorous and efficient approach for band registration in complex 3D scenes, such as forests. The method first determines the orientations of selected reference bands and reconstructs the 3D scene using structure-from-motion and dense image matching technologies. The bands without orientation are then matched to the oriented bands, accounting for the 3D scene, to provide exterior orientations, and afterwards hyperspectral orthomosaics, or hyperspectral point clouds, are calculated. The uncertainty aspects of the novel approach were studied. An empirical assessment was carried out in a forested environment using hyperspectral images captured with a hyperspectral 2D frame format camera, based on a tuneable Fabry-Pérot interferometer (FPI), on board a multicopter and supported by a high spatial resolution consumer colour camera. A theoretical assessment showed that the method was capable of providing band registration accuracy better than 0.5-pixel size. The empirical assessment proved the performance and showed that, with the novel method, most parts of

  8. Attention switching during scene perception: how goals influence the time course of eye movements across advertisements.

    Science.gov (United States)

    Wedel, Michel; Pieters, Rik; Liechty, John

    2008-06-01

    Eye movements across advertisements express a temporal pattern of bursts of respectively relatively short and long saccades, and this pattern is systematically influenced by activated scene perception goals. This was revealed by a continuous-time hidden Markov model applied to eye movements of 220 participants exposed to 17 ads under a free-viewing condition, and a scene-learning goal (ad memorization), a scene-evaluation goal (ad appreciation), a target-learning goal (product learning), or a target-evaluation goal (product evaluation). The model reflects how attention switches between two states--local and global--expressed in saccades of shorter and longer amplitude on a spatial grid with 48 cells overlaid on the ads. During the 5- to 6-s duration of self-controlled exposure to ads in the magazine context, attention predominantly started in the local state and ended in the global state, and rapidly switched about 5 times between states. The duration of the local attention state was much longer than the duration of the global state. Goals affected the frequency of switching between attention states and the duration of the local, but not of the global, state. (c) 2008 APA, all rights reserved
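
    The study fits a continuous-time hidden Markov model, which is more involved than can be shown here. Purely as a simplified, discrete-time stand-in for the same two-state idea, a Gaussian HMM (via the hmmlearn package, on synthetic data) can be fit to log saccade amplitudes to recover a short-saccade "local" state and a long-saccade "global" state; all data and parameters below are invented.

    ```python
    # Two-state Gaussian HMM on synthetic log saccade amplitudes (simplified stand-in).
    import numpy as np
    from hmmlearn.hmm import GaussianHMM

    rng = np.random.default_rng(0)
    # Synthetic viewing: bursts of short saccades ("local") and long saccades ("global").
    amplitudes = np.concatenate([rng.lognormal(0.5, 0.3, 40),
                                 rng.lognormal(2.0, 0.3, 10),
                                 rng.lognormal(0.5, 0.3, 40)])
    X = np.log(amplitudes).reshape(-1, 1)

    model = GaussianHMM(n_components=2, covariance_type="diag",
                        n_iter=200, random_state=0).fit(X)
    states = model.predict(X)
    print("state means (log amplitude):", model.means_.ravel())
    print("transition matrix:\n", model.transmat_)
    ```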

  9. Emotional Scene Content Drives the Saccade Generation System Reflexively

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  10. Effects of scene content and layout on the perceived light direction in 3D spaces.

    Science.gov (United States)

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments to investigate how the perception of the light direction is influenced by a scene's objects and their layout using real scenes. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, human perception of light should integrally consider materials, scene content, and layout.

  11. Accumulating and remembering the details of neutral and emotional natural scenes.

    Science.gov (United States)

    Melcher, David

    2010-01-01

    In contrast to our rich sensory experience with complex scenes in everyday life, the capacity of visual working memory is thought to be quite limited. Here, memory for the details of naturalistic scenes was examined as a function of display duration, emotional valence of the scene, and delay before test. Individual differences in working memory and long-term memory for pictorial scenes were examined in experiment 1. The accumulation of memory for emotional scenes and the retention of these details in long-term memory were investigated in experiment 2. Although there were large individual differences in performance, memory for scene details generally exceeded the traditional working memory limit within a few seconds. Information about positive scenes was learned most quickly, while negative scenes showed the worst memory for details. The overall pattern of results was consistent with the idea that both short-term and long-term representations are mixed together in a medium-term 'online' memory for scenes.

  12. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics
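
    The "bag of words" level of description lends itself to a very small sketch: represent each scene by its list of object labels and feed that to a linear classifier. The data below are toy stand-ins, not the LabelMe annotations used in the study.

    ```python
    # Bag-of-objects scene categorization with a linear classifier (toy stand-in data).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    scene_objects = ["car road sign building person",
                     "bed lamp pillow window",
                     "car road tree person",
                     "bed dresser lamp door"]
    scene_category = ["street", "bedroom", "street", "bedroom"]

    clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(scene_objects, scene_category)
    print(clf.predict(["road car person building"]))   # -> ['street']
    ```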

  13. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    Science.gov (United States)

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  14. Scene perception in Posterior Cortical Atrophy: categorisation, description and fixation patterns

    Directory of Open Access Journals (Sweden)

    Timothy J Shakespeare

    2013-10-01

    Full Text Available Partial or complete Balint’s syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (colour and greyscale) using a categorisation paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for colour over greyscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient’s global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients’ fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  15. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    Science.gov (United States)

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  16. Global Transsaccadic Change Blindness During Scene Perception

    National Research Council Canada - National Science Library

    Henderson, John

    2003-01-01

    ... The results from two experiments demonstrated a global transsaccadic change-blindness effect, suggesting that point-by-point visual representations are not functional across saccades during complex scene perception.

  17. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes that pose needs for 3D models of high quality, without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot properly depict a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and in this way increase the amount of detail contained in the image. Experimental results of this study prove this assumption as they examine state-of-the-art feature detectors applied both to standard dynamic range and HDR images.
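
    As a rough sketch of the comparison described above, assuming OpenCV is available: several bracketed exposures are merged into an HDR radiance map (Debevec method), tonemapped to 8 bits, and a standard detector (SIFT here, an arbitrary choice) is run on both the mid-exposure SDR frame and the tonemapped HDR result. File names and exposure times are made up.

    ```python
    # HDR merge + feature detection comparison (assumed file names, detector choice, exposures).
    import cv2
    import numpy as np

    files = ["exp_short.jpg", "exp_mid.jpg", "exp_long.jpg"]
    times = np.array([1 / 500, 1 / 60, 1 / 8], dtype=np.float32)
    ldr_images = [cv2.imread(f) for f in files]

    hdr = cv2.createMergeDebevec().process(ldr_images, times)
    tonemapped = np.nan_to_num(cv2.createTonemap(gamma=2.2).process(hdr))
    hdr_8bit = np.clip(tonemapped * 255, 0, 255).astype(np.uint8)   # detectors expect 8-bit input

    sift = cv2.SIFT_create()
    kp_sdr = sift.detect(cv2.cvtColor(ldr_images[1], cv2.COLOR_BGR2GRAY), None)
    kp_hdr = sift.detect(cv2.cvtColor(hdr_8bit, cv2.COLOR_BGR2GRAY), None)
    print(f"keypoints: SDR={len(kp_sdr)}  HDR (tonemapped)={len(kp_hdr)}")
    ```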

  18. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    Science.gov (United States)

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  19. Influences of High-Level Features, Gaze, and Scene Transitions on the Reliability of BOLD Responses to Natural Movie Stimuli

    Science.gov (United States)

    Lu, Kun-Han; Hung, Shao-Chin; Wen, Haiguang; Marussich, Lauren; Liu, Zhongming

    2016-01-01

    Complex, sustained, dynamic, and naturalistic visual stimulation can evoke distributed brain activities that are highly reproducible within and across individuals. However, the precise origins of such reproducible responses remain incompletely understood. Here, we employed concurrent functional magnetic resonance imaging (fMRI) and eye tracking to investigate the experimental and behavioral factors that influence fMRI activity and its intra- and inter-subject reproducibility during repeated movie stimuli. We found that widely distributed and highly reproducible fMRI responses were attributed primarily to the high-level natural content in the movie. In the absence of such natural content, low-level visual features alone in a spatiotemporally scrambled control stimulus evoked significantly reduced degree and extent of reproducible responses, which were mostly confined to the primary visual cortex (V1). We also found that the varying gaze behavior affected the cortical response at the peripheral part of V1 and in the oculomotor network, with minor effects on the response reproducibility over the extrastriate visual areas. Lastly, scene transitions in the movie stimulus due to film editing partly caused the reproducible fMRI responses at widespread cortical areas, especially along the ventral visual pathway. Therefore, the naturalistic nature of a movie stimulus is necessary for driving highly reliable visual activations. In a movie-stimulation paradigm, scene transitions and individuals’ gaze behavior should be taken as potential confounding factors in order to properly interpret cortical activity that supports natural vision. PMID:27564573

  20. IR characteristic simulation of city scenes based on radiosity model

    Science.gov (United States)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for the signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are independent entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between them. A method based on a radiosity model has been developed to describe these complex effects and to enable accurate simulation of the radiance distribution of city scenes. First, the physical processes affecting the IR characteristics of city scenes are described. Second, heat balance equations are formed by combining the atmospheric conditions, shadow maps, and the scene geometry. Finally, a finite difference method is used to calculate the kinetic temperature of each object surface, and a radiosity model is introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of the objects in the infrared range, the IR characteristics of the scene are obtained. Real infrared images and model predictions were compared, and the results demonstrate that this method can realistically simulate the IR characteristics of city scenes, effectively reproducing infrared shadow effects and the radiative interactions between objects.
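    For illustration only (this is not the authors' code), the scattering step described above can be sketched as a fixed-point iteration of the radiosity equation B = E + rho * (F @ B), where the self-emission E follows from the kinetic surface temperatures produced by a heat-balance solver. The form factors, temperatures, and emissivities below are hypothetical placeholders.

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiosity(T, emissivity, F, n_iter=200):
    """Jacobi iteration of the radiosity equation B = E + rho * F @ B.

    T          : (N,) surface kinetic temperatures [K] (e.g., from a
                 finite-difference heat-balance solver)
    emissivity : (N,) broadband surface emissivities
    F          : (N, N) form-factor matrix, F[i, j] = fraction of radiation
                 leaving surface j that reaches surface i
    Returns the radiosity B [W m^-2] of every surface element.
    """
    E = emissivity * SIGMA * T**4          # grey-body self-emission
    rho = 1.0 - emissivity                 # reflectivity of opaque surfaces
    B = E.copy()
    for _ in range(n_iter):
        B = E + rho * (F @ B)              # add inter-reflections
    return B

# toy example: three mutually visible facades (placeholder values)
T = np.array([300.0, 295.0, 310.0])
eps = np.array([0.90, 0.85, 0.92])
F = np.array([[0.0, 0.2, 0.1],
              [0.2, 0.0, 0.3],
              [0.1, 0.3, 0.0]])
print(radiosity(T, eps, F))
```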

  1. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Full Text Available Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ∼6,000 in a leading model) feature space, and they often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures and statistical operations on these PDs may underlie the human ability to rapidly categorize scenes.
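    A minimal sketch of the categorization-as-inference idea, assuming (hypothetically) that each "structure" is modeled by Gaussian distributions over its local features and its position relative to the object: a scene is scored by the log-likelihood of its detected parts under the category's distributions. This is only an illustration in the spirit of the abstract, not the published model.

```python
import numpy as np
from scipy.stats import multivariate_normal

class CoarseCategoryPD:
    """Hypothetical coarse category model: each structure (part) has a PD over
    its local feature vector and over its position relative to the object centre."""
    def __init__(self, part_feature_pds, part_position_pds, prior=0.5):
        self.feat_pds = part_feature_pds   # frozen multivariate normals
        self.pos_pds = part_position_pds   # frozen multivariate normals
        self.log_prior = np.log(prior)

    def log_posterior(self, parts):
        """parts: list of (feature_vector, relative_position) detections."""
        ll = self.log_prior
        for feat, pos in parts:
            # each detected part is explained by its best-matching structure
            ll += max(f.logpdf(feat) + p.logpdf(pos)
                      for f, p in zip(self.feat_pds, self.pos_pds))
        return ll

# toy usage with 2 structures described by 3-D features and 2-D positions
feat_pds = [multivariate_normal(mean=np.zeros(3), cov=np.eye(3)),
            multivariate_normal(mean=np.ones(3), cov=np.eye(3))]
pos_pds = [multivariate_normal(mean=[0, 1], cov=0.2 * np.eye(2)),
           multivariate_normal(mean=[1, 0], cov=0.2 * np.eye(2))]
animal = CoarseCategoryPD(feat_pds, pos_pds)
detections = [(np.array([0.1, -0.2, 0.3]), np.array([0.1, 0.9]))]
print(animal.log_posterior(detections))
```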

  2. Moving through a multiplex holographic scene

    Science.gov (United States)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  3. Three-dimensional scene encryption and display based on computer-generated holograms.

    Science.gov (United States)

    Kong, Dezhao; Cao, Liangcai; Jin, Guofan; Javidi, Bahram

    2016-10-10

    An optical encryption and display method for a three-dimensional (3D) scene is proposed based on computer-generated holograms (CGHs) using a single phase-only spatial light modulator. The 3D scene is encoded as one complex Fourier CGH. The Fourier CGH is then decomposed into two phase-only CGHs with random distributions by the vector stochastic decomposition algorithm. The two CGHs are interleaved as one final phase-only CGH for optical encryption and reconstruction. The proposed method can support high-level nonlinear optical 3D scene security and complex amplitude modulation of the optical field. The exclusive phase key offers strong resistance to decryption attacks. Experimental results demonstrate the validity of the proposed method.
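    The decomposition step can be illustrated with the standard identity that any complex amplitude of modulus at most 2 can be written as the sum of two unit-modulus (phase-only) terms, exp(i*theta1) + exp(i*theta2). The sketch below uses only this identity; the published vector stochastic decomposition additionally randomizes the split, which is what provides the encryption key.

```python
import numpy as np

def split_into_two_phase_holograms(field):
    """Split a complex field into two phase-only components whose sum
    reproduces it (after normalising the modulus to <= 2):
        field = exp(i*theta1) + exp(i*theta2)
    with theta1,2 = phase +/- arccos(|field| / 2)."""
    amp = np.clip(np.abs(field) / np.abs(field).max() * 2.0, 0.0, 2.0)
    phase = np.angle(field)
    delta = np.arccos(amp / 2.0)
    return phase + delta, phase - delta

# toy check on a random complex "hologram" (placeholder data)
rng = np.random.default_rng(0)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
t1, t2 = split_into_two_phase_holograms(H)
recon = np.exp(1j * t1) + np.exp(1j * t2)
print(np.allclose(recon, H / np.abs(H).max() * 2.0))   # True, up to the normalisation
```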

  4. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from the monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  5. The occipital place area represents the local elements of scenes.

    Science.gov (United States)

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.

  6. Children's Development of Analogical Reasoning: Insights from Scene Analogy Problems

    Science.gov (United States)

    Richland, Lindsey E.; Morrison, Robert G.; Holyoak, Keith J.

    2006-01-01

    We explored how relational complexity and featural distraction, as varied in scene analogy problems, affect children's analogical reasoning performance. Results with 3- and 4-year-olds, 6- and 7-year-olds, 9- to 11-year-olds, and 13- and 14-year-olds indicate that when children can identify the critical structural relations in a scene analogy…

  7. Global scene layout modulates contextual learning in change detection

    Directory of Open Access Journals (Sweden)

    Markus eConci

    2014-02-01

    Full Text Available Change in the visual scene often goes unnoticed – a phenomenon referred to as ‘change blindness’. This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of global precedence in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  8. Global scene layout modulates contextual learning in change detection.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  9. Changing scenes: memory for naturalistic events following change blindness.

    Science.gov (United States)

    Mäntylä, Timo; Sundström, Anna

    2004-11-01

    Research on scene perception indicates that viewers often fail to detect large changes to scene regions when these changes occur during a visual disruption such as a saccade or a movie cut. In two experiments, we examined whether this relative inability to detect changes would produce systematic biases in event memory. In Experiment 1, participants decided whether two successively presented images were the same or different, followed by a memory task in which they recalled the content of the viewed scene. In Experiment 2, participants viewed a short video in which an actor carried out a series of daily activities, and central scene attributes were changed during a movie cut. A high degree of change blindness was observed in both experiments, and these effects were related to scene complexity (Experiment 1) and level of retrieval support (Experiment 2). Most important, participants reported the changed, rather than the initial, event attributes following a failure in change detection. These findings suggest that attentional limitations during encoding contribute to biases in episodic memory.

  10. Scene incongruity and attention.

    Science.gov (United States)

    Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John

    2017-02-01

    Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention; rather, they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.

  11. PC Scene Generation

    Science.gov (United States)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  12. Two Distinct Scene-Processing Networks Connecting Vision and Memory.

    Science.gov (United States)

    Baldassano, Christopher; Esteva, Andre; Fei-Fei, Li; Beck, Diane M

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.

  13. CSIR optronic scene simulator finds real application in self-protection mechanisms of the South African Air Force

    CSIR Research Space (South Africa)

    Willers, CJ

    2010-09-01

    Full Text Available The Optronic Scene Simulator (OSSIM) is a second-generation scene simulator that creates synthetic images of arbitrarily complex scenes in the visual and infrared (IR) bands, covering the 0.2 to 20 μm spectral region. These images are radiometrically...

  14. Hydrological AnthropoScenes

    Science.gov (United States)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires interdisciplinary, systems-based approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within its boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself as a resource, hazard, or transport force, or through the network, connectivity, interface, teleconnection, emergence, and scaling issues it determines. We schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France, and Iceland, and reframe therein concepts of the hydrological change debate. Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017 Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review. Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y

  15. Effects of aging on neural connectivity underlying selective memory for emotional scenes.

    Science.gov (United States)

    Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A

    2013-02-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.

  16. Oculomotor capture during real-world scene viewing depends on cognitive load.

    Science.gov (United States)

    Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M

    2011-03-25

    It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object that suddenly appears in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  17. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  18. Virtual environments for scene of crime reconstruction and analysis

    Science.gov (United States)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene of crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the Law Enforcement and Forensic communities.

  19. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    Science.gov (United States)

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.

  20. Measuring streetscape complexity based on the statistics of local contrast and spatial frequency.

    Directory of Open Access Journals (Sweden)

    André Cavalcante

    Full Text Available Streetscapes are basic urban elements which play a major role in the livability of a city. The visual complexity of streetscapes is known to influence how people behave in such built spaces. However, how and which characteristics of a visual scene influence our perception of complexity has yet to be fully understood. This study proposes a method to evaluate the complexity perceived in streetscapes based on the statistics of local contrast and spatial frequency. Here, 74 streetscape images from four cities, including daytime and nighttime scenes, were ranked for complexity by 40 participants. Image processing was then used to locally segment contrast and spatial frequency in the streetscapes. The statistics of these characteristics were extracted and later combined to form a single objective measure. The direct use of statistics revealed structural or morphological patterns in streetscapes related to the perception of complexity. Furthermore, in comparison to conventional measures of visual complexity, the proposed objective measure exhibits a higher correlation with the opinion of the participants. Also, the performance of this method is more robust regarding different time scenarios.
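    As a rough illustration of the kind of pipeline described (not the authors' implementation), local contrast and a simple spatial-frequency descriptor can be computed per image patch and their statistics combined into a single score; the patch size and weights below are arbitrary placeholders.

```python
import numpy as np

def local_statistics(img, patch=32):
    """Segment a grey-level image into patches and compute local RMS contrast
    and a simple spatial-frequency descriptor (spectral centroid) per patch."""
    h, w = img.shape
    contrasts, freqs = [], []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = img[y:y + patch, x:x + patch].astype(float)
            mean = p.mean()
            contrasts.append(p.std() / (mean + 1e-6))           # local RMS contrast
            spec = np.abs(np.fft.fftshift(np.fft.fft2(p - mean)))
            fy, fx = np.indices(spec.shape) - patch // 2
            radius = np.hypot(fy, fx)
            freqs.append((radius * spec).sum() / (spec.sum() + 1e-6))
    return np.array(contrasts), np.array(freqs)

def complexity_score(img, w_contrast=0.5, w_freq=0.5):
    """Combine patch statistics into a single (hypothetical) complexity measure."""
    c, f = local_statistics(img)
    return w_contrast * c.mean() + w_freq * (f.mean() / f.max())

img = np.random.rand(256, 256)   # stand-in for a grey-level streetscape image
print(complexity_score(img))
```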

  1. Individual differences in the spontaneous recruitment of brain regions supporting mental state understanding when viewing natural social scenes.

    Science.gov (United States)

    Wagner, Dylan D; Kelley, William M; Heatherton, Todd F

    2011-12-01

    People are able to rapidly infer complex personality traits and mental states even from the most minimal person information. Research has shown that when observers view a natural scene containing people, they spend a disproportionate amount of their time looking at the social features (e.g., faces, bodies). Does this preference for social features merely reflect the biological salience of these features or are observers spontaneously attempting to make sense of complex social dynamics? Using functional neuroimaging, we investigated neural responses to social and nonsocial visual scenes in a large sample of participants (n = 48) who varied on an individual difference measure assessing empathy and mentalizing (i.e., empathizing). Compared with other scene categories, viewing natural social scenes activated regions associated with social cognition (e.g., dorsomedial prefrontal cortex and temporal poles). Moreover, activity in these regions during social scene viewing was strongly correlated with individual differences in empathizing. These findings offer neural evidence that observers spontaneously engage in social cognition when viewing complex social material but that the degree to which people do so is mediated by individual differences in trait empathizing.

  2. The functional consequences of social distraction: Attention and memory for complex scenes.

    Science.gov (United States)

    Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia

    2017-01-01

    Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture to social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light onto the privileged attentional status of social stimuli and its functional consequences on memory across individuals. Copyright © 2016. Published by Elsevier B.V.

  3. Real-time scene and signature generation for ladar and imaging sensors

    Science.gov (United States)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
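    The new thermal-signature capability rests on solving heat diffusion in three dimensions. A minimal explicit finite-difference step of the 3D heat equation is sketched below for illustration only; the abstract does not specify VIRSuite's solver or boundary treatment, so the boundary handling and all values here are simplified placeholders.

```python
import numpy as np

def heat_diffusion_step(T, alpha, dx, dt):
    """One explicit finite-difference step of the 3-D heat equation
    dT/dt = alpha * laplacian(T) on a regular grid.
    Boundaries are treated as periodic here for brevity; a real signature
    model would instead impose fluxes from solar loading, convection, etc.
    Stability requires dt <= dx**2 / (6 * alpha)."""
    lap = (np.roll(T, 1, 0) + np.roll(T, -1, 0) +
           np.roll(T, 1, 1) + np.roll(T, -1, 1) +
           np.roll(T, 1, 2) + np.roll(T, -1, 2) - 6.0 * T) / dx**2
    return T + alpha * dt * lap

# toy object: a 20^3 block at 300 K with a hot internal region at 350 K
T = np.full((20, 20, 20), 300.0)
T[8:12, 8:12, 8:12] = 350.0
alpha, dx = 1e-5, 0.05            # thermal diffusivity [m^2/s], grid spacing [m]
dt = dx**2 / (6 * alpha) * 0.5    # half the stability limit
for _ in range(100):
    T = heat_diffusion_step(T, alpha, dx, dt)
print(T.max(), T.min())
```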

  4. Influence of 3D effects on 1D aerosol retrievals in synthetic, partially clouded scenes

    International Nuclear Information System (INIS)

    Stap, F.A.; Hasekamp, O.P.; Emde, C.; Röckmann, T.

    2016-01-01

    An important challenge in aerosol remote sensing is to retrieve aerosol properties in the vicinity of clouds and in cloud contaminated scenes. Satellite based multi-wavelength, multi-angular, photo-polarimetric instruments are particularly suited for this task as they have the ability to separate scattering by aerosol and cloud particles. Simultaneous aerosol/cloud retrievals using 1D radiative transfer codes cannot account for 3D effects such as shadows, cloud induced enhancements and darkening of cloud edges. In this study we investigate what errors are introduced on the retrieved optical and micro-physical aerosol properties, when these 3D effects are neglected in retrievals where the partial cloud cover is modeled using the Independent Pixel Approximation. To this end a generic, synthetic data set of PARASOL like observations for 3D scenes with partial, liquid water cloud cover is created. It is found that in scenes with random cloud distributions (i.e. broken cloud fields) and either low cloud optical thickness or low cloud fraction, the inversion algorithm can fit the observations and retrieve optical and micro-physical aerosol properties with sufficient accuracy. In scenes with non-random cloud distributions (e.g. at the edge of a cloud field) the inversion algorithm can fit the observations, however, here the retrieved real part of the refractive indices of both modes is biased. - Highlights: • An algorithm for retrieval of both aerosol and cloud properties is presented. • Radiative transfer models of 3D, partially clouded scenes are simulated. • Errors introduced in the retrieved aerosol properties are discussed.
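    Under the Independent Pixel Approximation used here, the radiance of a partially cloudy scene is modeled as the cloud-fraction-weighted average of independently computed 1D clear and cloudy radiances. The sketch below (with placeholder numbers) shows the idea; by construction it omits the 3D effects, such as shadowing and cloud-edge enhancement, that the study investigates.

```python
import numpy as np

def ipa_radiance(R_clear, R_cloudy, cloud_fraction):
    """Independent Pixel Approximation: a partially cloudy pixel is the
    area-weighted sum of independently computed 1-D clear-sky and
    cloudy-sky radiances. 3-D effects are ignored by construction."""
    return (cloud_fraction * np.asarray(R_cloudy)
            + (1.0 - cloud_fraction) * np.asarray(R_clear))

# toy multi-angle, single-wavelength example (values are placeholders)
R_clear = [0.05, 0.06, 0.07]
R_cloudy = [0.40, 0.38, 0.42]
print(ipa_radiance(R_clear, R_cloudy, cloud_fraction=0.3))
```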

  5. Underwater Scene Composition

    Science.gov (United States)

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  6. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  7. Places in the Brain: Bridging Layout and Object Geometry in Scene-Selective Cortex.

    Science.gov (United States)

    Dillon, Moira R; Persichetti, Andrew S; Spelke, Elizabeth S; Dilks, Daniel D

    2017-06-13

    Diverse animal species primarily rely on sense (left-right) and egocentric distance (proximal-distal) when navigating the environment. Recent neuroimaging studies with human adults show that this information is represented in 2 scene-selective cortical regions-the occipital place area (OPA) and retrosplenial complex (RSC)-but not in a third scene-selective region-the parahippocampal place area (PPA). What geometric properties, then, does the PPA represent, and what is its role in scene processing? Here we hypothesize that the PPA represents relative length and angle, the geometric properties classically associated with object recognition, but only in the context of large extended surfaces that compose the layout of a scene. Using functional magnetic resonance imaging adaptation, we found that the PPA is indeed sensitive to relative length and angle changes in pictures of scenes, but not pictures of objects that reliably elicited responses to the same geometric changes in object-selective cortical regions. Moreover, we found that the OPA is also sensitive to such changes, while the RSC is tolerant to such changes. Thus, the geometric information typically associated with object recognition is also used during some aspects of scene processing. These findings provide evidence that scene-selective cortex differentially represents the geometric properties guiding navigation versus scene categorization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  8. Multimodal computational attention for scene understanding and robotics

    CERN Document Server

    Schauerte, Boris

    2016-01-01

    This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms, and discusses various applications. In the first part, new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration by a robot. In the second part, the influence of top-down cues on attention modeling is investigated.

  9. Where and when Do Objects Become Scenes?

    Directory of Open Access Journals (Sweden)

    Jiye G. Kim

    2011-05-01

    Full Text Available Scenes can be understood with extraordinary speed and facility, not merely as an inventory of individual objects but in the coding of the relations among them. These relations, which can be readily described by prepositions or gerunds (e.g., a hand holding a pen), allow the explicit representation of complex structures. Where in the brain are inter-object relations specified? In a series of fMRI experiments, we show that pairs of objects shown as interacting elicit greater activity in LOC than when the objects are depicted side-by-side (e.g., a hand beside a pen). Other visual areas, PPA, IPS, and DLPFC, did not show this sensitivity to scene relations, rendering it unlikely that the relations were computed in these regions. Using EEG and TMS, we further show that LOC's sensitivity to object interactions arises around 170 ms post stimulus onset and that disruption of normal LOC activity—but not IPS activity—is detrimental to the behavioral sensitivity of inter-object relations. Insofar as LOC is the earliest cortical region where shape is distinguished from texture, our results provide strong evidence that scene-like relations are achieved simultaneously with the perception of object shape and not inferred at some stage following object identification.

  10. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of urban scenes increase the difficulty of building city models from oblique images, but urban scenes also contain many flat surfaces. One of our key contributions is a dense matching algorithm based on self-adaptive patches designed for urban scenes. The basic idea of match propagation based on self-adaptive patches is to build patches centred on seed points that are already matched. The extent and shape of the patches adapt automatically to the objects of the urban scene: when the surface is flat, the extent of the patch becomes bigger, while when the surface is very rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by graph cuts is a 2-manifold surface satisfying the half-edge data structure, which is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm that can preserve and accentuate the features of buildings.
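    As a toy illustration of the final simplification stage (not the authors' algorithm), the sketch below repeatedly collapses the shortest edge of a triangle mesh onto its midpoint and discards degenerate faces; a production pipeline would operate on the half-edge structure of the 2-manifold mesh and use an error metric that preserves building features.

```python
import numpy as np

def simplify(vertices, faces, target_faces):
    """Naive edge-collapse simplification: repeatedly collapse the shortest
    edge onto its midpoint and drop degenerate (collapsed) triangles."""
    vertices = [np.asarray(v, float) for v in vertices]
    faces = [list(f) for f in faces]
    while len(faces) > target_faces:
        # collect the remaining undirected edges
        edges = {tuple(sorted((f[i], f[(i + 1) % 3]))) for f in faces for i in range(3)}
        # pick the shortest edge (a, b) and collapse b onto a
        a, b = min(edges, key=lambda e: np.linalg.norm(vertices[e[0]] - vertices[e[1]]))
        vertices[a] = 0.5 * (vertices[a] + vertices[b])     # move a to the midpoint
        faces = [[a if v == b else v for v in f] for f in faces]
        faces = [f for f in faces if len(set(f)) == 3]      # drop collapsed triangles
    return vertices, faces

# toy mesh: a fan of four triangles around an apex vertex
verts = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0), (0.5, 0.5, 0.1)]
tris = [(0, 1, 4), (1, 2, 4), (2, 3, 4), (3, 0, 4)]
print(simplify(verts, tris, target_faces=2)[1])
```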

  11. Developmental Changes in Attention to Faces and Bodies in Static and Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Brenda M Stoesz

    2014-03-01

    Full Text Available Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviours of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than on other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process, and are especially prone to look away from faces when viewing complex social scenes – a strategy that could reduce the cognitive and the affective load imposed by having to divide one’s attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviours in typical and atypical development.

  12. Interactive Procedural Modelling of Coherent Waterfall Scenes

    OpenAIRE

    Emilien , Arnaud; Poulin , Pierre; Cani , Marie-Paule; Vimont , Ulysse

    2015-01-01

    Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of...

  13. Beyond scene gist: Objects guide search more than scene background.

    Science.gov (United States)

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  14. A Knowledge Base for Automatic Feature Recognition from Point Clouds in an Urban Scene

    Directory of Open Access Journals (Sweden)

    Xu-Feng Xing

    2018-01-01

    Full Text Available LiDAR technology can provide very detailed and highly accurate geospatial information on an urban scene for the creation of Virtual Geographic Environments (VGEs) for different applications. However, automatic 3D modeling and feature recognition from LiDAR point clouds are very complex tasks. This becomes even more complex when the data are incomplete (occlusion problem) or uncertain. In this paper, we propose to build a knowledge base comprising an ontology and semantic rules aimed at automatic feature recognition from point clouds in support of 3D modeling. First, several ontology modules are defined from different perspectives to describe an urban scene. For instance, the spatial relations module allows the formalized representation of possible topological relations extracted from point clouds. A knowledge base is then proposed that contains different concepts, their properties and their relations, together with constraints and semantic rules. Next, instances and their specific relations form an urban scene and are added to the knowledge base as facts. Based on the knowledge and semantic rules, a reasoning process is carried out to extract semantic features of the objects and their components in the urban scene. Finally, several experiments are presented to show the validity of our approach to recognize different semantic features of buildings from LiDAR point clouds.
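    The abstract does not give the rule syntax, but the flavour of such semantic rules can be conveyed with a hypothetical example written as plain Python over made-up attributes of planar segments extracted from a point cloud; an actual knowledge base would typically encode this as an ontology (e.g., OWL) with rules (e.g., SWRL).

```python
# Illustrative only: hypothetical rules over made-up attributes of planar
# segments; not the ontology or rule set described in the paper.
def classify_segment(seg):
    """seg: dict with geometric/topological attributes of a planar segment."""
    if seg["planar"] and seg["vertical"] and seg["touches_ground"]:
        return "wall"
    if seg["planar"] and not seg["vertical"] and seg["height"] > 2.0:
        return "roof"
    if seg["planar"] and not seg["vertical"] and seg["height"] <= 0.2:
        return "ground"
    return "unknown"

segment = {"planar": True, "vertical": False, "height": 6.5, "touches_ground": False}
print(classify_segment(segment))   # -> "roof"
```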

  15. Stages As Models of Scene Geometry

    NARCIS (Netherlands)

    Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.

    2010-01-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently,

  16. Cultural adaptation of visual attention: calibration of the oculomotor control system in accordance with cultural scenes.

    Directory of Open Access Journals (Sweden)

    Yoshiyuki Ueda

    Full Text Available Previous studies have found that Westerners are more likely than East Asians to attend to central objects (i.e., analytic attention), whereas East Asians are more likely than Westerners to focus on background objects or context (i.e., holistic attention). Recently, it has been proposed that the physical environment of a given culture influences the cultural form of scene cognition, although the underlying mechanism is yet unclear. This study examined whether the physical environment influences oculomotor control. Participants saw culturally neutral stimuli (e.g., a dog in a park) as a baseline, followed by Japanese or United States scenes, and finally culturally neutral stimuli again. The results showed that participants primed with Japanese scenes were more likely to move their eyes within a broader area and they were less likely to fixate on central objects compared with the baseline, whereas there were no significant differences in the eye movements of participants primed with American scenes. These results suggest that culturally specific patterns in eye movements are partly caused by the physical environment.

  17. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    Science.gov (United States)

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  18. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Directory of Open Access Journals (Sweden)

    Nordström Henrik

    2012-05-01

    Full Text Available Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes.

  19. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Science.gov (United States)

    2012-01-01

    Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397

  20. The elephant in the room: Inconsistency in scene viewing and representation.

    Science.gov (United States)

    Spotorno, Sara; Tatler, Benjamin W

    2017-10-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. While not studied in direct competition with each other (each studied in competition with diagnostic objects), we found that inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents in quickly documenting and accurately recording a crime scene.

  2. Using 3D range cameras for crime scene documentation and legal medicine

    Science.gov (United States)

    Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco

    2009-01-01

    Crime scene documentation and legal medicine analysis are part of a very complex process aimed at identifying the offender, starting from the collection of evidence at the scene. This part of the investigation is critical, since the crime scene is extremely volatile and, once it is cleared, cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid in the documentation step, since (i) the measurement is contactless and (ii) the process required to edit and model the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in our experiments.

  3. Bicycles, Airplanes and Peter Pans: Flying Scenes in Steven Spielberg's Films

    Directory of Open Access Journals (Sweden)

    Emilio Audissino

    2014-10-01

    Full Text Available In Steven Spielberg's cinema, flight is a recurring theme. Flying scenes can be sorted into two classes: those involving a realistic flight – by aircraft – and those involving a magical flight – by supernatural powers. The realistic flight is influenced by the war stories of Spielberg's father – a radio man in the U.S. Air Force during WWII – and it is featured in such films as Empire of the Sun (1987), Always (1989), and 1941 (1979). The magical flight is influenced by James M. Barrie's character Peter Pan (Peter and Wendy, 1911), which is quoted directly in E.T. the Extraterrestrial (1982) and, above all, in Hook (1991), which is a sequel to Barrie's story. These two types of flying scenes are analysed as to their meanings, compared to the models that influenced them, and surveyed as to their evolution across Spielberg's films. A central case study is the episode The Mission from Amazing Stories (1985), in which the realistic and the magical flights overlap.

  4. Scenes of the self, and trance

    Directory of Open Access Journals (Sweden)

    Jan M. Broekman

    2014-02-01

    Full Text Available Trance shows the Self as a process involved in all sorts and forms of life. A Western perspective on a self and its reifying tendencies is only one (or one series of those variations. The process character of the self does not allow any coherent theory but shows, in particular when confronted with trance, its variability in all regards. What is more: the Self is always first on the scene of itself―a situation in which it becomes a sign for itself. That particular semiotic feature is again not a unified one but leads, as the Self in view of itself does, to series of scenes with changing colors, circumstances and environments. Our first scene “Beyond Monotheism” shows semiotic importance in that a self as determining component of a trance-phenomenon must abolish its own referent and seems not able to answer the question, what makes trance a trance. The Pizzica is an example here. Other social features of trance appear in the second scene, US post traumatic psychological treatments included. Our third scene underlines structures of an unfolding self: beginning with ‘split-ego’ conclusions, a self’s engenderment appears dependent on linguistic events and on spoken words in the first place. A fourth scene explores that theme and explains modern forms of an ego ―in particular those inherent to ‘citizenship’ or a ‘corporation’. The legal consequences are concentrated in the fifth scene, which considers a legal subject by revealing its ‘standing’. Our sixth and final scene pertains to the relation between trance and commerce. All scenes tie together and show parallels between Pizzica, rights-based behavior, RAVE music versus disco, commerce and trance; they demonstrate the meaning of trance as a multifaceted social phenomenon.

  5. Optimization of incremental structure from motion combining a random k-d forest and pHash for unordered images in a complex scene

    Science.gov (United States)

    Zhan, Zongqian; Wang, Chendong; Wang, Xin; Liu, Yi

    2018-01-01

    On the basis of today's popular virtual reality and scientific visualization, three-dimensional (3-D) reconstruction is widely used in disaster relief, virtual shopping, reconstruction of cultural relics, etc. In the traditional incremental structure from motion (incremental SFM) method, the time cost of the matching is one of the main factors restricting the popularization of this method. To make the whole matching process more efficient, we propose a preprocessing method before the matching process: (1) we first construct a random k-d forest with the large-scale scale-invariant feature transform features in the images and combine this with the pHash method to obtain a value of relatedness, (2) we then construct a connected weighted graph based on the relatedness value, and (3) we finally obtain a planned sequence of adding images according to the principle of the minimum spanning tree. On this basis, we attempt to thin the minimum spanning tree to reduce the number of matchings and ensure that the images are well distributed. The experimental results show a great reduction in the number of matchings with enough object points, with only a small influence on the inner stability, which proves that this method can quickly and reliably improve the efficiency of the SFM method with unordered multiview images in complex scenes.
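    The record above describes an ordering step built from an image-relatedness graph and its minimum spanning tree. As a rough, hedged sketch of that idea (not the authors' implementation), the snippet below assumes a precomputed image-to-image relatedness matrix (e.g., from feature or pHash comparisons, which are not reproduced here) and derives an image-adding order from the MST of the corresponding weighted graph; the function name and parameters are illustrative.

```python
# Minimal sketch: order images for incremental matching via an MST over a
# relatedness graph. All names here are illustrative assumptions.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def ordering_from_relatedness(relatedness):
    """relatedness: (n, n) symmetric matrix, larger = more related."""
    n = relatedness.shape[0]
    # Convert relatedness to edge costs so the MST keeps the strongest links.
    cost = 1.0 / (relatedness + 1e-6)
    np.fill_diagonal(cost, 0.0)
    mst = minimum_spanning_tree(cost).toarray()
    mst = mst + mst.T  # symmetrise for traversal

    # Start from the image with the highest total relatedness and grow the
    # sequence by always adding the neighbour connected by the cheapest MST edge.
    order = [int(np.argmax(relatedness.sum(axis=1)))]
    visited = set(order)
    while len(order) < n:
        best, best_cost = None, np.inf
        for u in order:
            for v in range(n):
                if v not in visited and 0 < mst[u, v] < best_cost:
                    best, best_cost = v, mst[u, v]
        order.append(best)
        visited.add(best)
    return order

# Example with a random symmetric relatedness matrix:
rng = np.random.default_rng(0)
r = rng.random((6, 6)); r = (r + r.T) / 2
print(ordering_from_relatedness(r))
```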

  6. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research efforts have been placed on land-use scene classification. However, the task is difficult with HRS images because of the complex background and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. Then, the learnt multiscale deep features are explored to generate visual words. The spatial arrangement of visual words is captured through the introduction of adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact yet discriminative for efficient representation of land-use scene images, and achieves competitive classification results with state-of-the-art methods.
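    As a loose illustration of the "visual words plus correlograms" stage sketched in this record (not the paper's code), the following assumes local deep features and their image positions are already extracted; it quantizes them into visual words with k-means and counts word co-occurrences within a spatial radius. The function name and parameters are assumptions for the example.

```python
# Illustrative sketch: quantise local features into visual words and build a
# crude co-occurrence (correlogram-like) descriptor.
import numpy as np
from sklearn.cluster import KMeans

def word_cooccurrence(features, positions, n_words=32, radius=20.0):
    """features: (n, d) local descriptors; positions: (n, 2) pixel coords."""
    km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(features)
    words = km.labels_
    hist = np.zeros((n_words, n_words))
    for i in range(len(words)):
        d = np.linalg.norm(positions - positions[i], axis=1)
        for j in np.nonzero((d > 0) & (d <= radius))[0]:
            hist[words[i], words[j]] += 1
    total = hist.sum()
    return hist / total if total else hist  # normalised co-occurrence matrix

# Toy usage with random descriptors at random image locations:
rng = np.random.default_rng(1)
feats, pos = rng.random((200, 64)), rng.random((200, 2)) * 256
print(word_cooccurrence(feats, pos).shape)
```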

  7. Depth estimation of complex geometry scenes from light fields

    Science.gov (United States)

    Si, Lipeng; Wang, Qing

    2018-01-01

    The surface camera (SCam) of light fields gathers angular sample rays passing through a 3D point. The consistency of SCams is evaluated to estimate the depth map of the scene. However, the consistency is affected by several limitations such as occlusions or non-Lambertian surfaces. To address these limitations, the SCam is partitioned into two segments such that one of them can satisfy the consistency constraint. The segmentation pattern of the SCam is highly related to the texture of the spatial patch, so we enforce a mask matching to describe the shape correlation between segments of the SCam and the spatial patch. To further address the ambiguity in textureless regions, a global method with pixel-wise plane labels is presented. Plane label inference at each pixel can recover not only the depth value but also the local geometric structure, which is suitable for light fields with sub-pixel disparities and continuous view variation. Our method is evaluated on public light field datasets and outperforms the state-of-the-art.

  8. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search.

    Science.gov (United States)

    Draschkow, Dejan; Võ, Melissa L-H

    2017-11-28

    Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp. 1) or repeated search (Exp. 2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects, which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.

  9. Lateralized discrimination of emotional scenes in peripheral vision.

    Science.gov (United States)

    Calvo, Manuel G; Rodríguez-Chinea, Sandra; Fernández-Martín, Andrés

    2015-03-01

    This study investigates whether there is lateralized processing of emotional scenes in the visual periphery, in the absence of eye fixations; and whether this varies with emotional valence (pleasant vs. unpleasant), specific emotional scene content (babies, erotica, human attack, mutilation, etc.), and sex of the viewer. Pairs of emotional (positive or negative) and neutral photographs were presented for 150 ms peripherally (≥6.5° away from fixation). Observers judged on which side the emotional picture was located. Low-level image properties, scene visual saliency, and eye movements were controlled. Results showed that (a) correct identification of the emotional scene exceeded the chance level; (b) performance was more accurate and faster when the emotional scene appeared in the left than in the right visual field; (c) lateralization was equivalent for females and males for pleasant scenes, but was greater for females and unpleasant scenes; and (d) lateralization occurred similarly for different emotional scene categories. These findings reveal discrimination between emotional and neutral scenes, and right brain hemisphere dominance for emotional processing, which is modulated by sex of the viewer and scene valence, and suggest that coarse affective significance can be extracted in peripheral vision.

  10. Stages as models of scene geometry.

    Science.gov (United States)

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.

  11. Anticipatory scene representation in preschool children's recall and recognition memory.

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-09-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.

  12. Classification of Mls Point Clouds in Urban Scenes Using Detrended Geometric Features from Supervoxel-Based Local Contexts

    Science.gov (United States)

    Sun, Z.; Xu, Y.; Hoegner, L.; Stilla, U.

    2018-05-01

    In this work, we propose a classification method designed for the labeling of MLS point clouds, with detrended geometric features extracted from the points of the supervoxel-based local context. To achieve the analysis of complex 3D urban scenes, acquired points of the scene should be tagged with individual labels of different classes. Thus, assigning a unique label to the points of an object that belong to the same category plays an essential role in the entire 3D scene analysis workflow. Although plenty of studies in this field have been reported, this remains a challenging task. Specifically, in this work: 1) a novel geometric feature extraction method, detrending the redundant and non-salient information in the local context, is proposed, which is shown to be effective for extracting local geometric features from the 3D scene; 2) instead of using the individual point as the basic element, the supervoxel-based local context is designed to encapsulate geometric characteristics of points, providing a flexible and robust solution for feature extraction; 3) experiments using a complex urban scene with manually labeled ground truth are conducted, and the performance of the proposed method with respect to different methods is analyzed. With the testing dataset, we obtained an overall accuracy of 0.92 for assigning eight semantic classes.
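    The detrended, supervoxel-based features are specific to the paper, but a minimal sketch of the kind of covariance-eigenvalue geometric features commonly computed per local context is shown below (an illustration under stated assumptions, not the authors' pipeline); the grouping of points into supervoxels is assumed to have been done beforehand.

```python
# Minimal sketch: linearity / planarity / scattering features from the
# covariance eigenvalues of a local point neighbourhood.
import numpy as np

def eigen_features(neighbourhood):
    """neighbourhood: (n, 3) array of 3D points belonging to one local context."""
    pts = neighbourhood - neighbourhood.mean(axis=0)
    cov = pts.T @ pts / max(len(pts) - 1, 1)
    l1, l2, l3 = sorted(np.linalg.eigvalsh(cov), reverse=True)  # l1 >= l2 >= l3
    l1 = max(l1, 1e-12)
    return {
        "linearity": (l1 - l2) / l1,
        "planarity": (l2 - l3) / l1,
        "scattering": l3 / l1,
    }

# Example: points scattered around a plane should score high on planarity.
rng = np.random.default_rng(2)
plane = np.c_[rng.random((500, 2)) * 10, rng.normal(0, 0.01, 500)]
print(eigen_features(plane))
```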

  13. Iconic memory for the gist of natural scenes.

    Science.gov (United States)

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.

  14. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    Directory of Open Access Journals (Sweden)

    Mengyun Liu

    2017-12-01

    Full Text Available After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open-source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuning method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web
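    As a hedged sketch of the fusion step described here (not the paper's implementation), the snippet below updates particle weights from two fingerprint-style observations, labelled WiFi and magnetic, under independent Gaussian likelihoods; the predicted-measurement functions and noise parameters are placeholders.

```python
# Minimal sketch of a particle-filter measurement update fusing two sensors.
import numpy as np

def update_weights(particles, weights, z_wifi, z_mag,
                   predict_wifi, predict_mag, sigma_wifi=6.0, sigma_mag=2.0):
    """particles: (n, 2) positions; weights: (n,); z_*: observed values."""
    e_wifi = np.array([np.linalg.norm(z_wifi - predict_wifi(p)) for p in particles])
    e_mag = np.array([np.linalg.norm(z_mag - predict_mag(p)) for p in particles])
    lik = np.exp(-0.5 * (e_wifi / sigma_wifi) ** 2) * np.exp(-0.5 * (e_mag / sigma_mag) ** 2)
    w = weights * lik
    w_sum = w.sum()
    return w / w_sum if w_sum > 0 else np.full_like(w, 1.0 / len(w))

# Toy usage with fabricated fingerprint predictors:
rng = np.random.default_rng(3)
parts = rng.random((100, 2)) * 20
w0 = np.full(100, 1 / 100)
w1 = update_weights(parts, w0, z_wifi=np.array([-60.0]), z_mag=np.array([45.0]),
                    predict_wifi=lambda p: np.array([-70.0 + p[0]]),
                    predict_mag=lambda p: np.array([40.0 + 0.3 * p[1]]))
print(w1.sum())  # weights are renormalised to 1
```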

  15. Semantic Reasoning for Scene Interpretation

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Baseski, Emre; Pugeault, Nicolas

    2008-01-01

    In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable representational framework for this representation and demonstrat...

  16. Navigating the auditory scene: an expert role for the hippocampus.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  17. Rapid discrimination of visual scene content in the human brain

    Science.gov (United States)

    Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.

    2007-01-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200−600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815

  18. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    Science.gov (United States)

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.
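    Independently of the GIMP/MATLAB workflow described in the record, a minimal sketch of how the physical properties of a change might be quantified is shown below, assuming the original and modified scenes are available as equally sized RGB arrays; the reported properties (changed area, mean difference, eccentricity of the change centroid) are illustrative choices, not the authors' exact measures.

```python
# Quantify a change between an original and a modified scene image.
import numpy as np

def change_properties(original, modified, threshold=10.0):
    """original, modified: (h, w, 3) uint8 arrays of the same scene."""
    diff = np.abs(original.astype(float) - modified.astype(float)).mean(axis=2)
    mask = diff > threshold                      # pixels belonging to the change
    area = mask.mean()                           # fraction of the image changed
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return {"area": 0.0, "mean_diff": 0.0, "eccentricity_px": 0.0}
    cy, cx = ys.mean(), xs.mean()
    h, w = mask.shape
    ecc = np.hypot(cy - h / 2, cx - w / 2)       # distance from the image centre
    return {"area": area, "mean_diff": diff[mask].mean(), "eccentricity_px": ecc}

# Toy example: delete a bright "object" from a grey scene.
scene = np.full((200, 300, 3), 128, dtype=np.uint8)
changed = scene.copy(); changed[50:80, 200:240] = 0
print(change_properties(scene, changed))
```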

  19. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3D information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
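    A minimal sketch of the general idea (not the paper's algorithm) is given below: under a Lambertian assumption, dividing image intensity by the cosine of the angle between an estimated surface normal and the light direction yields a rough reflectivity estimate, to which histogram-based segmentation can then be applied; the synthetic surface and light direction are assumptions for the example.

```python
# Rough reflectivity (albedo) estimate from intensity and estimated normals.
import numpy as np

def reflectivity_estimate(intensity, normals, light_dir):
    """intensity: (h, w); normals: (h, w, 3) unit vectors; light_dir: (3,) unit vector."""
    cos_theta = np.clip(normals @ light_dir, 1e-3, 1.0)   # avoid division by ~0
    return intensity / cos_theta

# Toy usage on a synthetic curved surface of constant albedo 0.8:
h, w = 64, 64
x = np.linspace(-0.9, 0.9, w)
normals = np.dstack([np.tile(x, (h, 1)),
                     np.zeros((h, w)),
                     np.sqrt(1 - x**2)[None, :].repeat(h, 0)])
light = np.array([0.0, 0.0, 1.0])
intensity = 0.8 * np.clip(normals @ light, 0, 1)          # Lambertian shading
rho = reflectivity_estimate(intensity, normals, light)     # ~ constant 0.8
print(float(rho.min()), float(rho.max()))
```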

  20. Pooling Objects for Recognizing Scenes without Examples

    NARCIS (Netherlands)

    Kordumova, S.; Mensink, T.; Snoek, C.G.M.

    2016-01-01

    In this paper we aim to recognize scenes in images without using any scene images as training data. Different from attribute based approaches, we do not carefully select the training classes to match the unseen scene classes. Instead, we propose a pooling over ten thousand of off-the-shelf object

  1. Selective Segmentation for Global Optimization of Depth Estimation in Complex Scenes

    Directory of Open Access Journals (Sweden)

    Sheng Liu

    2013-01-01

    Full Text Available This paper proposes a segmentation-based global optimization method for depth estimation. Firstly, to obtain accurate matching costs, the original local stereo matching approach based on a self-adapting matching window is integrated with two matching cost optimization strategies aimed at handling both borders and occlusion regions. Secondly, we employ a comprehensive smoothness term to satisfy the diverse smoothness requirements of real scenes. Thirdly, a selective segmentation term is used to enforce the plane trend constraints selectively on the corresponding segments, further improving the accuracy of the depth results at the object level. Experiments on the Middlebury image pairs show that the proposed global optimization approach is highly competitive with other state-of-the-art matching approaches.

  2. Where’s Wally: The influence of visual salience on referring expression generation

    Directory of Open Access Journals (Sweden)

    Alasdair Daniel Francis Clarke

    2013-06-01

    Full Text Available Referring expression generation (REG) presents the converse problem to visual search: Given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing that has addressed this question identifies only a limited role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents the results of a study testing whether speakers are sensitive to visual features that allow them to compose such 'good' descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the *Where's Wally?* books, which are an order of magnitude more complex than traditional stimuli. Referring expressions for large salient targets are shorter than those for smaller and less salient targets, and targets within highly cluttered scenes are described using more words. We also find that speakers are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  3. The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes.

    Science.gov (United States)

    Azizi, Elham; Abel, Larry A; Stainer, Matthew J

    2017-02-01

    Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only taught participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

  4. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  5. A hierarchical inferential method for indoor scene classification

    Directory of Open Access Journals (Sweden)

    Jiang Jingzhe

    2017-12-01

    Full Text Available Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base. These shortcomings restrict the performance of classification. The structure of a semantic hierarchy was proposed to describe similarities of different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes were also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we proposed an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  6. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  7. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.
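    For readers unfamiliar with the efficiency index used in these two records, the slope of the RT × Set Size function is simply a linear fit of reaction time against set size; a tiny illustration with hypothetical numbers (not the paper's data) follows.

```python
# Estimate search efficiency (ms/item) as the slope of RT versus set size.
import numpy as np

set_size = np.array([4, 8, 16, 32])          # e.g. number of labelled regions
rt_ms = np.array([620, 660, 700, 780])       # hypothetical mean reaction times
slope, intercept = np.polyfit(set_size, rt_ms, 1)
print(f"search efficiency ~ {slope:.1f} ms/item, intercept ~ {intercept:.0f} ms")
```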

  8. Scene analysis in the natural environment

    DEFF Research Database (Denmark)

    Lewicki, Michael S; Olshausen, Bruno A; Surlykke, Annemarie

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches th...... ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals....

  9. The time course of natural scene perception with reduced attention.

    Science.gov (United States)

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention. Copyright © 2016 the American Physiological Society.

  10. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS) imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW) model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and a complicated coding strategy, which are usually time-consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC) method to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
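    A rough sketch of the pipeline this record describes is shown below; the filters are random rather than learned, so this is an illustration of the convolve–binarize–hash–histogram idea, not the authors' FBC.

```python
# Sketch: filter responses -> binary maps -> integer codes -> histogram.
import numpy as np
from scipy.signal import convolve2d

def fbc_like_descriptor(image, n_filters=8, filter_size=7, rng=None):
    """image: (h, w) grayscale array; returns a 2**n_filters histogram."""
    rng = rng or np.random.default_rng(0)
    filters = rng.standard_normal((n_filters, filter_size, filter_size))
    filters -= filters.mean(axis=(1, 2), keepdims=True)       # zero-mean filters
    bits = [convolve2d(image, f, mode="same") > 0 for f in filters]
    codes = np.zeros(image.shape, dtype=np.int64)
    for k, b in enumerate(bits):                               # hash bits to integers
        codes += b.astype(np.int64) << k
    hist, _ = np.histogram(codes, bins=np.arange(2**n_filters + 1))
    return hist / hist.sum()

# Toy usage:
img = np.random.default_rng(4).random((128, 128))
print(fbc_like_descriptor(img).shape)   # (256,)
```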

  11. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements to perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  12. Interaction between scene-based and array-based contextual cueing.

    Science.gov (United States)

    Rosenbaum, Gail M; Jiang, Yuhong V

    2013-07-01

    Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

  13. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method of the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature, which utilizes landform information, extends from top-task superpixel extraction of landforms to bottom-task expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering-based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments on image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than that of scene recognition algorithms based on other local features.
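    Only the first step of the described pipeline (SLIC superpixel segmentation followed by a simple per-superpixel feature) is sketched below as a hedged illustration; the adaptive filter bank, Lie-group quantification and saliency weighting stages are not reproduced, and the mean-colour feature is an assumption for the example.

```python
# Sketch: SLIC superpixels and a simple per-superpixel mean-colour feature.
import numpy as np
from skimage.segmentation import slic

def superpixel_features(image, n_segments=200):
    """image: (h, w, 3) float array in [0, 1]; returns labels and (k, 3) mean colours."""
    labels = slic(image, n_segments=n_segments, compactness=10, start_label=0)
    feats = np.array([image[labels == k].mean(axis=0) for k in np.unique(labels)])
    return labels, feats

# Toy usage on a random image:
img = np.random.default_rng(5).random((120, 160, 3))
labels, feats = superpixel_features(img)
print(feats.shape)
```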

  14. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    Science.gov (United States)

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46 % of all participants failed to report seeing a central figure and only 4.8 % reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  15. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    Science.gov (United States)

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As is the case in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction.

  16. Logarithmic r-θ mapping for hybrid optical neural network filter for multiple objects recognition within cluttered scenes

    Science.gov (United States)

    Kypraios, Ioannis; Young, Rupert C. D.; Chatwin, Chris R.; Birch, Phil M.

    2009-04-01

    The window unit in the design of the complex logarithmic r-θ mapping for the hybrid optical neural network filter allows multiple objects of the same class to be detected within the input image. Additionally, the architecture of the neural network unit of the complex logarithmic r-θ mapping for the hybrid optical neural network filter becomes attractive for accommodating the recognition of multiple objects of different classes within the input image by modifying the output layer of the unit. We test the overall filter for recognition of multiple objects of the same and of different classes within cluttered input images and video sequences of cluttered scenes. The logarithmic r-θ mapping for the hybrid optical neural network filter is shown to exhibit, with a single pass over the input data, simultaneous in-plane rotation, out-of-plane rotation, scale, log r-θ map translation and shift invariance, and good clutter tolerance, by correctly recognizing the different objects within the cluttered scenes. We also record additional information extracted from the cluttered scenes about the objects' relative position, scale and in-plane rotation.
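    As a purely numerical illustration of a log r-θ (log-polar) mapping, independent of the optical/neural filter architecture in this record, the sketch below resamples an image about its centre; in this domain, in-plane rotation and scaling of the input become shifts along the θ and log-r axes, which is what gives such mappings their rotation and scale tolerance.

```python
# Log r-theta (log-polar) resampling of an image about its centre.
import numpy as np
from scipy.ndimage import map_coordinates

def log_polar(image, n_r=64, n_theta=128):
    """image: (h, w) array; returns (n_r, n_theta) log-polar resampling."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    max_r = np.hypot(cy, cx)
    log_r = np.linspace(0, np.log(max_r), n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(np.exp(log_r), theta, indexing="ij")
    ys = cy + rr * np.sin(tt)
    xs = cx + rr * np.cos(tt)
    return map_coordinates(image, [ys, xs], order=1, mode="nearest")

# Toy usage:
img = np.random.default_rng(6).random((100, 100))
print(log_polar(img).shape)   # (64, 128)
```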

  17. Emotional and neutral scenes in competition: orienting, efficiency, and identification.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka

    2007-12-01

    To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.

  18. Scene Integration for Online VR Advertising Clouds

    Directory of Open Access Journals (Sweden)

    Michael Kalochristianakis

    2014-12-01

    Full Text Available This paper presents a scene composition approach that allows the combinational use of standard three dimensional objects, called models, in order to create X3D scenes. The module is an integral part of a broader design aiming to construct large scale online advertising infrastructures that rely on virtual reality technologies. The architecture addresses a number of problems regarding remote rendering for low end devices and last but not least, the provision of scene composition and integration. Since viewers do not keep information regarding individual input models or scenes, composition requires the consideration of mechanisms that add state to viewing technologies. In terms of this work we extended a well-known, open source X3D authoring tool.

  19. Advanced radiometric and interferometric millimeter-wave scene simulations

    Science.gov (United States)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.
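    The idea of recovering a scene from a relatively small number of spatial frequency components can be illustrated with a toy Fourier-truncation example; this is not TRW's ARMSS code or its interferometric reconstruction algorithms, only a minimal sketch of the underlying principle.

```python
# Keep only the strongest Fourier coefficients of a scene and invert.
import numpy as np

def sparse_frequency_reconstruction(scene, keep_fraction=0.05):
    """scene: (h, w) array; keep the strongest fraction of FFT coefficients."""
    spectrum = np.fft.fft2(scene)
    n_keep = max(1, int(keep_fraction * spectrum.size))
    threshold = np.sort(np.abs(spectrum).ravel())[-n_keep]
    sparse = np.where(np.abs(spectrum) >= threshold, spectrum, 0)
    return np.real(np.fft.ifft2(sparse))

# Toy usage: a smooth synthetic "scene" survives heavy truncation well.
y, x = np.mgrid[0:64, 0:64]
scene = np.sin(x / 6.0) + np.cos(y / 9.0)
recon = sparse_frequency_reconstruction(scene, keep_fraction=0.02)
print(float(np.abs(scene - recon).mean()))
```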

  20. Three-dimensional measurement system for crime scene documentation

    Science.gov (United States)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurements (such as photogrammetry, Time of Flight, Structure from Motion or Structured Light techniques) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects the actual standards in crime scene documentation - it is designed to perform measurement in two stages. The first stage of documentation, the most general, is prepared with a scanner with relatively low spatial resolution but a large measuring volume - it is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, which is a software platform for measurement management (connecting to scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene and many others. In this paper we present our measuring system and the developed software. We also provide an outcome from research on metrological validation of the scanners, performed according to the VDI/VDE standard, and results from measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.
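    The data-registration step mentioned above is not described in detail in this record; as a generic, hedged illustration of rigid registration with known point correspondences (the Kabsch/Procrustes solution, not necessarily what CrimeView3D uses), see the sketch below.

```python
# Rigid alignment of corresponding 3D points via SVD (Kabsch algorithm).
import numpy as np

def rigid_align(source, target):
    """source, target: (n, 3) corresponding points; returns R (3x3), t (3,)."""
    mu_s, mu_t = source.mean(axis=0), target.mean(axis=0)
    H = (source - mu_s).T @ (target - mu_t)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_t - R @ mu_s
    return R, t

# Toy check: recover a known rotation and translation.
rng = np.random.default_rng(7)
src = rng.random((50, 3))
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0],
                   [np.sin(angle),  np.cos(angle), 0],
                   [0, 0, 1]])
dst = src @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_align(src, dst)
print(np.allclose(dst, src @ R.T + t))   # True
```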

  1. Explicit Foreground and Background Modeling in The Classification of Text Blocks in Scene Images

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus

    2015-01-01

    Achieving high accuracy for classifying foreground and background is an interesting challenge in the field of scene image analysis because of the wide range of illumination, complex backgrounds, and scale changes. Classifying foreground and background using a bag-of-features model gives a good result.

  2. Attention, awareness, and the perception of auditory scenes

    Directory of Open Access Journals (Sweden)

    Joel S Snyder

    2012-02-01

    Full Text Available Auditory perception and cognition entail both low-level and high-level processes, which are likely to interact with each other to create our rich conscious experience of soundscapes. Recent research that we review has revealed numerous influences of high-level factors, such as attention, intention, and prior experience, on conscious auditory perception. Recently, studies have also shown that auditory scene analysis tasks can exhibit multistability in a manner very similar to ambiguous visual stimuli, presenting a unique opportunity to study neural correlates of auditory awareness and the extent to which mechanisms of perception are shared across sensory modalities. Research has also led to a growing number of techniques through which auditory perception can be manipulated and even completely suppressed. Such findings have important consequences for our understanding of the mechanisms of perception and should also allow scientists to precisely distinguish the contributions of different higher-level factors.

  3. Where's Wally: the influence of visual salience on referring expression generation.

    Science.gov (United States)

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  4. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we don't yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated by the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.

  5. Fast Extended Depth-of-Field Reconstruction for Complex Holograms Using Block Partitioned Entropy Minimization

    Directory of Open Access Journals (Sweden)

    Peter Wai Ming Tsang

    2018-05-01

    Optical scanning holography (OSH) is a powerful and effective method for capturing the complex hologram of a three-dimensional (3-D) scene. Such a captured complex hologram is called an optical scanned hologram. However, reconstructing a focused image from an optical scanned hologram is a difficult issue, as the OSH technique can be applied to acquire holograms of wide-view and complicated object scenes. Solutions developed to date are mostly computationally intensive, and so far only the reconstruction of simple object scenes has been demonstrated. In this paper we report a low-complexity method for reconstructing a focused image from an optical scanned hologram that represents a 3-D object scene. Briefly, the complex hologram is back-propagated onto regularly spaced image planes along the axial direction, from which a crude, blocky depth map of the object scene is computed by non-overlapping block-partitioned entropy minimization. Subsequently, the depth map is low-pass filtered to reduce the blocky appearance and used to reconstruct a single focused image of the object scene with extended depth of field. The method proposed here can be applied to any complex hologram, such as those obtained from standard phase-shifting holography.
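
    The per-block depth selection can be sketched as follows: back-propagate the complex hologram to a set of candidate planes with an angular-spectrum propagator, measure the intensity entropy of each block at each plane, and keep the plane of minimum entropy per block. All parameters below (pixel pitch, wavelength, block size, depth range) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def angular_spectrum(field, z, wavelength, pitch):
    """Propagate a complex field over distance z (metres) with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    mask = arg > 0                                   # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z) * mask)

def block_entropy(intensity, eps=1e-12):
    """Shannon entropy of the normalised intensity inside one block."""
    p = intensity / (intensity.sum() + eps)
    return float(-(p * np.log(p + eps)).sum())

def blocky_depth_map(hologram, depths, wavelength=633e-9, pitch=8e-6, block=32):
    """Per-block depth chosen as the propagation distance of minimum entropy."""
    ny, nx = hologram.shape
    best = np.full((ny // block, nx // block), np.inf)
    depth_map = np.zeros_like(best)
    for z in depths:
        intensity = np.abs(angular_spectrum(hologram, -z, wavelength, pitch)) ** 2
        for by in range(ny // block):
            for bx in range(nx // block):
                blk = intensity[by*block:(by+1)*block, bx*block:(bx+1)*block]
                h = block_entropy(blk)
                if h < best[by, bx]:
                    best[by, bx] = h
                    depth_map[by, bx] = z
    return depth_map  # low-pass filter this map before the final focused reconstruction

# Stand-in hologram evaluated over ten candidate planes (values are illustrative).
holo = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
dmap = blocky_depth_map(holo, depths=np.linspace(0.05, 0.15, 10))
```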

  6. Parietal cortex integrates contextual and saliency signals during the encoding of natural scenes in working memory.

    Science.gov (United States)

    Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano

    2015-12-01

    The brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out of context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity in both dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention to memory encoding, highlighting a dissociation between dorsal and ventral parietal regions. © 2015 Wiley Periodicals, Inc.

  7. The primal scene and symbol formation.

    Science.gov (United States)

    Niedecken, Dietmut

    2016-06-01

    This article discusses the meaning of the primal scene for symbol formation by exploring the way it is processed in a child's play. The author questions the notion that a sadomasochistic way of processing is the only possible one, and presents a model of an alternative mode of processing. It is suggested that both ways of processing intertwine in the "fabric of life" (D. Laub). Two clinical vignettes, one from an analytic child psychotherapy and the other from the analysis of a 30-year-old female patient, illustrate how the primal scene is played out in the form of a terzet. The author explores whether the sadomasochistic way of processing actually precedes the "primal scene as a terzet", and discusses whether it could even be regarded as a precondition for the formation of the latter or, alternatively, whether the "combined parent-figure" gives rise to ways of processing. The question is left open. Finally, it is shown how both modes of experiencing the primal scene underlie discursive and presentative symbol formation, respectively. Copyright © 2015 Institute of Psychoanalysis.

  8. Modeling global scene factors in attention

    Science.gov (United States)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America

  9. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: Does a cultural difference truly exist?

    OpenAIRE

    Evans, Kris; Rotello, Caren M.; Li, Xingshan; Rayner, Keith

    2008-01-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than did American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for...

  10. A hybrid multiview stereo algorithm for modeling urban scenes.

    Science.gov (United States)

    Lafarge, Florent; Keriven, Renaud; Brédif, Mathieu; Vu, Hoang-Hiep

    2013-01-01

    We present an original multiview stereo reconstruction algorithm which allows the 3D-modeling of urban scenes as a combination of meshes and geometric primitives. The method provides a compact model while preserving details: Irregular elements such as statues and ornaments are described by meshes, whereas regular structures such as columns and walls are described by primitives (planes, spheres, cylinders, cones, and tori). We adopt a two-step strategy consisting first in segmenting the initial mesh-based surface using a multi-label Markov Random Field-based model and second in sampling primitive and mesh components simultaneously on the obtained partition by a Jump-Diffusion process. The quality of a reconstruction is measured by a multi-object energy model which takes into account both photo-consistency and semantic considerations (i.e., geometry and shape layout). The segmentation and sampling steps are embedded into an iterative refinement procedure which provides an increasingly accurate hybrid representation. Experimental results on complex urban structures and large scenes are presented and compared to state-of-the-art multiview stereo meshing algorithms.

  11. Joint Multi-scale Convolution Neural Network for Scene Classification of High Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    ZHENG Zhuo

    2018-05-01

    High-resolution remote sensing imagery scene classification is important for automatic complex scene recognition, which is a key technology for military applications and disaster relief, among others. In this paper, we propose a novel joint multi-scale convolution neural network (JMCNN) method that uses a limited amount of image data for high-resolution remote sensing imagery scene classification. Different from a traditional convolutional neural network, the proposed JMCNN is an end-to-end training model with jointly enhanced high-level feature representation, comprising a multi-channel feature extractor, joint multi-scale feature fusion, and a Softmax classifier. First, multi-channel and multi-scale convolutional extractors are used to extract mid-level scene features. Then, in order to achieve an enhanced high-level feature representation on a limited dataset, joint multi-scale feature fusion is proposed to combine the multi-channel and multi-scale features using two feature fusions. Finally, the enhanced high-level feature representation is used for classification by Softmax. Experiments were conducted on two limited public datasets, UCM and SIRI. Compared to state-of-the-art methods, the JMCNN achieved improved performance and strong robustness, with average accuracies of 89.3% and 88.3% on the two datasets.
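
    A minimal PyTorch sketch of the general idea is given below: two channels process the image at different scales, their features are fused by concatenation, and a softmax classifier produces the scene label. The layer sizes, number of branches, and fusion scheme are assumptions for illustration and do not reproduce the JMCNN architecture exactly.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoScaleSceneNet(nn.Module):
    """Toy multi-channel, multi-scale CNN with feature fusion for scene classification."""
    def __init__(self, num_classes=21):          # 21 classes, as in the UCM dataset
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
                nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1),          # global pooling -> 64-d descriptor
            )
        self.full_scale = branch()
        self.half_scale = branch()
        self.classifier = nn.Linear(64 * 2, num_classes)

    def forward(self, x):
        f1 = self.full_scale(x).flatten(1)
        x_small = F.interpolate(x, scale_factor=0.5, mode='bilinear', align_corners=False)
        f2 = self.half_scale(x_small).flatten(1)
        fused = torch.cat([f1, f2], dim=1)        # joint multi-scale feature fusion
        return self.classifier(fused)             # logits; apply softmax for probabilities

model = TwoScaleSceneNet(num_classes=21)
logits = model(torch.randn(4, 3, 256, 256))       # batch of four RGB scene patches
probs = logits.softmax(dim=1)
```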

  12. Presentation of 3D Scenes Through Video Example.

    Science.gov (United States)

    Baldacci, Andrea; Ganovelli, Fabio; Corsini, Massimiliano; Scopigno, Roberto

    2017-09-01

    Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers, and Cultural Heritage professionals; however, it is usually time-consuming and, in order to obtain high-quality results, the support of a film maker/computer animation expert is necessary. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
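
    The underlying notion of comparing videos through their optical flow can be sketched with OpenCV's dense Farnebäck flow: compute the flow of corresponding frame pairs in the two videos and score their similarity, here by mean endpoint error. The file names and the similarity metric are illustrative assumptions, not the paper's actual pipeline.

```python
import cv2
import numpy as np

def dense_flow(video_path, max_pairs=50):
    """Dense Farnebäck optical flow for consecutive frame pairs of a video."""
    cap = cv2.VideoCapture(video_path)
    flows, prev = [], None
    while len(flows) < max_pairs:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev is not None:
            flows.append(cv2.calcOpticalFlowFarneback(
                prev, gray, None, 0.5, 3, 15, 3, 5, 1.2, 0))
        prev = gray
    cap.release()
    return flows

def flow_similarity(flows_a, flows_b):
    """Lower is more similar: mean endpoint error over matched frame pairs."""
    errors = []
    for fa, fb in zip(flows_a, flows_b):
        h, w = min(fa.shape[0], fb.shape[0]), min(fa.shape[1], fb.shape[1])
        fa_r = cv2.resize(fa, (w, h))
        fb_r = cv2.resize(fb, (w, h))
        errors.append(np.linalg.norm(fa_r - fb_r, axis=2).mean())
    return float(np.mean(errors))

score = flow_similarity(dense_flow("example_video.mp4"), dense_flow("rendered_candidate.mp4"))
```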

  13. 47 CFR 80.1127 - On-scene communications.

    Science.gov (United States)

    2010-10-01

    Title 47, § 80.1127 (Telecommunication): Federal Communications Commission (continued), Safety and Special Radio Services, Stations in the Maritime Services, Global Maritime Distress and Safety System (GMDSS), Operating Procedures for Distress and Safety Communications. § 80.1127 On-scene communications. (a) On-scene communications...

  14. 3D Traffic Scene Understanding From Movable Platforms.

    Science.gov (United States)

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

  15. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    Science.gov (United States)

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  16. Maxwellian Eye Fixation during Natural Scene Perception

    Directory of Open Access Journals (Sweden)

    Jean Duchesne

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes.

  17. Maxwellian Eye Fixation during Natural Scene Perception

    Science.gov (United States)

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987

  18. Selective scene perception deficits in a case of topographical disorientation.

    Science.gov (United States)

    Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris

    2017-07-01

    Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Color constancy in a scene with bright colors that do not have a fully natural surface appearance.

    Science.gov (United States)

    Fukuda, Kazuho; Uchikawa, Keiji

    2014-04-01

    Theoretical and experimental approaches have proposed that color constancy involves a correction related to some average of stimulation over the scene, and some of the studies showed that the average gives greater weight to surrounding bright colors. However, in a natural scene, high-luminance elements do not necessarily carry information about the scene illuminant when the luminance is too high for it to appear as a natural object color. The question is how a surrounding color's appearance mode influences its contribution to the degree of color constancy. Here the stimuli were simple geometric patterns, and the luminance of surrounding colors was tested over the range beyond the luminosity threshold. Observers performed perceptual achromatic setting on the test patch in order to measure the degree of color constancy and evaluated the surrounding bright colors' appearance mode. Broadly, our results support the assumption that the visual system counts only the colors in the object-color appearance for color constancy. However, detailed analysis indicated that surrounding colors without a fully natural object-color appearance had some sort of influence on color constancy. Consideration of this contribution of unnatural object color might be important for precise modeling of human color constancy.

  20. Crime Scenes as Augmented Reality

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus…, physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual...

  1. Semi-Supervised Multitask Learning for Scene Recognition.

    Science.gov (United States)

    Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao

    2015-09-01

    Scene recognition has been widely studied to understand visual information from the level of objects and their relationships. Toward scene recognition, many methods have been proposed. They, however, encounter difficulty in improving accuracy, mainly due to two limitations: 1) a lack of analysis of intrinsic relationships across different scales, say, between the initial input and its down-sampled versions, and 2) the existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce the above two limitations. To address the first limitation, we propose a multitask model to integrate scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of data. SFSMR coordinates the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR, and propose the semi-supervised learning method to reduce the two limitations. Experimental results show improved accuracy in scene recognition.

  2. Semantic congruency but not temporal synchrony enhances long-term memory performance for audio-visual scenes.

    Science.gov (United States)

    Meyerhoff, Hauke S; Huff, Markus

    2016-04-01

    Human long-term memory for visual objects and scenes is tremendous. Here, we test how auditory information contributes to long-term memory performance for realistic scenes. In a total of six experiments, we manipulated the presentation modality (auditory, visual, audio-visual) as well as semantic congruency and temporal synchrony between auditory and visual information of brief filmic clips. Our results show that audio-visual clips generally elicit more accurate memory performance than unimodal clips. This advantage even increases with congruent visual and auditory information. However, violations of audio-visual synchrony hardly have any influence on memory performance. Memory performance remained intact even with a sequential presentation of auditory and visual information, but finally declined when the matching tracks of one scene were presented separately with intervening tracks during learning. With respect to memory performance, our results therefore show that audio-visual integration is sensitive to semantic congruency but remarkably robust against asymmetries between different modalities.

  3. Political conservatism predicts asymmetries in emotional scene memory.

    Science.gov (United States)

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes and participants were to respond whether a scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extends to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology. Published by Elsevier B.V.

  4. Being There: (Re)Making the Assessment Scene

    Science.gov (United States)

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…

  5. Dynamical scene analysis with a moving camera: mobile targets detection system

    International Nuclear Information System (INIS)

    Hennebert, Christine

    1996-01-01

    This thesis work deals with the detection of moving objects in monocular image sequences acquired with a mobile camera. We propose a method able to detect small moving objects in visible or infrared images of real outdoor scenes. In order to detect objects of very low apparent motion, we consider an analysis over a large temporal interval. We have chosen to compensate for the dominant motion due to the camera displacement over several consecutive images in order to form a sub-sequence of images for which the camera seems virtually static. We have also developed a new approach that extracts the different layers of a real scene in order to deal with cases where the 2D motion due to the camera displacement cannot be globally compensated for. To this end, we use a hierarchical model with two levels: a local merging step and a global merging step. Then, appropriate temporal filtering is applied to the registered image sub-sequence to enhance signals corresponding to moving objects. The detection issue is stated as a labeling problem within a statistical regularization framework based on Markov Random Fields. Our method has been validated on numerous real image sequences depicting complex outdoor scenes. Finally, the feasibility of an integrated circuit for mobile object detection has been proved; this circuit could lead to the creation of an ASIC. (author) [fr]

  6. [Perception of objects and scenes in age-related macular degeneration].

    Science.gov (United States)

    Tran, T H C; Boucart, M

    2012-01-01

    Vision related quality of life questionnaires suggest that patients with AMD exhibit difficulties in finding objects and in mobility. In the natural environment, objects seldom appear in isolation. They appear in a spatial context which may obscure them in part or place obstacles in the patient's path. Furthermore, the luminance of a natural scene varies as a function of the hour of the day and the light source, which can alter perception. This study aims to evaluate recognition of objects and natural scenes by patients with AMD, by using photographs of such scenes. Studies demonstrate that AMD patients are able to categorize scenes as nature scenes or urban scenes and to discriminate indoor from outdoor scenes with a high degree of precision. They detect objects better in isolation, in color, or against a white background than in their natural contexts. These patients encounter more difficulties than normally sighted individuals in detecting objects in a low-contrast, black-and-white scene. These results may have implications for rehabilitation, for layout of texts and magazines for the reading-impaired and for the rearrangement of the spatial environment of older AMD patients in order to facilitate mobility, finding objects and reducing the risk of falls. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  7. On-scene crisis intervention: psychological guidelines and communication strategies for first responders.

    Science.gov (United States)

    Miller, Laurence

    2010-01-01

    Effective emergency mental health intervention for victims of crime, natural disaster or terrorism begins the moment the first responders arrive. This article describes a range of on-scene crisis intervention options, including verbal communication, body language, behavioral strategies, and interpersonal style. The correct intervention in the first few moments and hours of a crisis can profoundly influence the recovery course of victims and survivors of catastrophic events.

  8. Construction and Optimization of Three-Dimensional Disaster Scenes within Mobile Virtual Reality

    Directory of Open Access Journals (Sweden)

    Ya Hu

    2018-06-01

    Because mobile virtual reality (VR) is both mobile and immersive, three-dimensional (3D) visualizations of disaster scenes based in mobile VR enable users to perceive and recognize disaster environments faster and better than is possible with other methods. To achieve immersion and prevent users from feeling dizzy, such visualizations require a high scene-rendering frame rate. However, existing visualization work cannot provide a sufficient solution for this purpose. This study focuses on the construction and optimization of a 3D disaster scene in order to satisfy the high frame-rate requirements for the rendering of 3D disaster scenes in mobile VR. First, the design of a plugin-free browser/server (B/S) architecture for 3D disaster scene construction and visualization based in mobile VR is presented. Second, certain key technologies for scene optimization are discussed, including diverse modes of scene data representation, representation optimization of mobile scenes, and adaptive scheduling of mobile scenes. By means of these technologies, smartphones with various performance levels can achieve higher scene-rendering frame rates and improved visual quality. Finally, using a flood disaster as an example, a plugin-free prototype system was developed and experiments were conducted. The experimental results demonstrate that a 3D disaster scene constructed via the methods addressed in this study has a sufficiently high scene-rendering frame rate to satisfy the requirements for rendering a 3D disaster scene in mobile VR.
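
    The adaptive scheduling idea (trading scene detail against the measured frame rate and viewer distance) can be illustrated with a small scheduler like the one below. The level-of-detail thresholds, target frame rate, and tile structure are invented for illustration and are not taken from the paper.

```python
from dataclasses import dataclass

TARGET_FPS = 60                       # assumed comfort threshold for mobile VR
LOD_DISTANCES = [50.0, 150.0, 400.0]  # metres; beyond the last value the tile is culled

@dataclass
class Tile:
    name: str
    distance: float      # distance from the viewer to the tile centre
    lod: int = 0         # 0 = full detail, larger = coarser

def base_lod(distance):
    """Pick a level of detail from viewer distance alone."""
    for lod, limit in enumerate(LOD_DISTANCES):
        if distance <= limit:
            return lod
    return None          # too far away: do not schedule this tile at all

def schedule(tiles, measured_fps):
    """Distance-based LOD, degraded one extra level when the frame rate drops."""
    penalty = 1 if measured_fps < TARGET_FPS else 0
    visible = []
    for tile in tiles:
        lod = base_lod(tile.distance)
        if lod is None:
            continue
        tile.lod = min(lod + penalty, len(LOD_DISTANCES) - 1)
        visible.append(tile)
    return visible

tiles = [Tile("flooded_street", 30.0), Tile("riverbank", 120.0), Tile("far_hills", 500.0)]
for t in schedule(tiles, measured_fps=48):
    print(t.name, "-> LOD", t.lod)
```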

  9. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    Science.gov (United States)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.
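
    A toy version of the depth-layer approach is sketched below: pixels from an RGB-D style capture are grouped into a few depth layers, each layer is propagated to the hologram plane with an angular-spectrum kernel, the propagated fields are summed into a complex hologram, and a simple non-iterative sign binarization is applied. Wavelength, pixel pitch, layer count, and the thresholding rule are illustrative assumptions; the paper's modified direct-binary-search step is not reproduced here.

```python
import numpy as np

WAVELENGTH, PITCH = 633e-9, 8e-6   # assumed laser wavelength and pixel pitch

def propagate(field, z):
    """Angular-spectrum propagation of a complex field over distance z (metres)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=PITCH)
    fy = np.fft.fftfreq(ny, d=PITCH)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / WAVELENGTH**2 - FX**2 - FY**2
    mask = arg > 0                                   # drop evanescent components
    kz = 2 * np.pi * np.sqrt(np.where(mask, arg, 0.0))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * z) * mask)

def depth_layer_hologram(amplitude, depth, layers=8):
    """Sum propagated depth layers of an RGB-D style capture into a complex hologram."""
    z_edges = np.linspace(depth.min(), depth.max(), layers + 1)
    layer_idx = np.clip(np.digitize(depth, z_edges) - 1, 0, layers - 1)
    hologram = np.zeros(amplitude.shape, dtype=complex)
    for i in range(layers):
        layer_field = amplitude * (layer_idx == i)
        z_mid = 0.5 * (z_edges[i] + z_edges[i + 1])
        hologram += propagate(layer_field, z_mid)
    return hologram

# Stand-ins for a Kinect-style intensity image and depth map (metres).
amplitude = np.random.rand(256, 256)
depth = 0.5 + 0.3 * np.random.rand(256, 256)

hologram = depth_layer_hologram(amplitude, depth)
binary_hologram = (hologram.real >= 0).astype(np.uint8)   # crude non-iterative binarization
```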

  10. Fixations on objects in natural scenes: dissociating importance from salience

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    2013-07-01

    The relation of selective attention to the understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region being fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's importance for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify the luminance contrast of either a frequently named (common/important) or a rarely named (rare/unimportant) object, track the observers' eye movements during scene viewing, and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases in contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading.
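
    The kind of object-wise luminance-contrast manipulation described above can be sketched as follows: scale pixel deviations from the local mean inside a binary object mask while leaving the rest of the image untouched. The mask source and the gain values are assumptions for illustration, not the study's actual stimulus-generation code.

```python
import numpy as np

def modify_object_contrast(image, mask, gain):
    """Scale luminance contrast inside a boolean object mask.

    gain > 1 raises the object's contrast, 0 < gain < 1 lowers it;
    pixels outside the mask are left unchanged.
    """
    out = image.astype(float).copy()
    region_mean = out[mask].mean()
    out[mask] = region_mean + gain * (out[mask] - region_mean)
    return np.clip(out, 0.0, 1.0)

# Toy example: a textured square "object" on a uniform mid-grey background.
rng = np.random.default_rng(1)
scene = np.full((200, 200), 0.5)
scene[80:120, 80:120] = 0.5 + 0.2 * rng.random((40, 40))
object_mask = np.zeros(scene.shape, dtype=bool)
object_mask[80:120, 80:120] = True

lowered = modify_object_contrast(scene, object_mask, gain=0.5)   # less salient object
boosted = modify_object_contrast(scene, object_mask, gain=1.5)   # more salient object
```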

  11. A statistical model for radar images of agricultural scenes

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
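
    A small Monte-Carlo sketch of the model's assumptions is given below: each target class gets a true reflectivity drawn from a uniform distribution, every field has the same SNR, all classes cover approximately the same area, and single-look speckle makes pixel intensities exponentially distributed around the field reflectivity. The numeric ranges are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_CLASSES = 5                      # e.g. five crop types in the scene
PIXELS_PER_CLASS = 20000           # classes cover approximately the same area
REFLECTIVITY_RANGE = (0.1, 1.0)    # true reflectivity ~ Uniform(a, b)

def simulate_scene_intensities():
    """Pixel intensities of a simulated agricultural radar scene (single-look speckle)."""
    intensities = []
    for _ in range(N_CLASSES):
        sigma0 = rng.uniform(*REFLECTIVITY_RANGE)           # class reflectivity
        # Fully developed single-look speckle: intensity ~ Exponential(mean = sigma0).
        intensities.append(rng.exponential(scale=sigma0, size=PIXELS_PER_CLASS))
    return np.concatenate(intensities)

scene = simulate_scene_intensities()
hist, edges = np.histogram(scene, bins=100, density=True)   # scene-level intensity PDF
print("mean intensity:", scene.mean())
```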

  12. The neural bases of spatial frequency processing during scene perception

    Science.gov (United States)

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

    Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226

  13. Detection of chromatic and luminance distortions in natural scenes.

    Science.gov (United States)

    Jennings, Ben J; Wang, Karen; Menzies, Samantha; Kingdom, Frederick A A

    2015-09-01

    A number of studies have measured visual thresholds for detecting spatial distortions applied to images of natural scenes. In one study, Bex [J. Vis. 10(2), 1 (2010)] measured sensitivity to sinusoidal spatial modulations of image scale. Here, we measure sensitivity to sinusoidal scale distortions applied to the chromatic, luminance, or both layers of natural scene images. We first established that sensitivity does not depend on whether the undistorted comparison image was of the same or of a different scene. Next, we found that, when the luminance but not chromatic layer was distorted, performance was the same regardless of whether the chromatic layer was present, absent, or phase-scrambled; in other words, the chromatic layer, in whatever form, did not affect sensitivity to the luminance layer distortion. However, when the chromatic layer was distorted, sensitivity was higher when the luminance layer was intact compared to when absent or phase-scrambled. These detection threshold results complement the appearance of periodic distortions of the image scale: when the luminance layer is distorted visibly, the scene appears distorted, but when the chromatic layer is distorted visibly, there is little apparent scene distortion. We conclude that (a) observers have a built-in sense of how a normal image of a natural scene should appear, and (b) the detection of distortion in, as well as the apparent distortion of, natural scene images is mediated predominantly by the luminance layer and not chromatic layer.

  14. Semi-automatic scene generation using the Digital Anatomist Foundational Model.

    Science.gov (United States)

    Wong, B A; Rosse, C; Brinkley, J F

    1999-01-01

    A recent survey shows that a major impediment to more widespread use of computers in anatomy education is the inability to directly manipulate 3-D models, and to relate these to corresponding textual information. In the University of Washington Digital Anatomist Project we have developed a prototype Web-based scene generation program that combines the symbolic Foundational Model of Anatomy with 3-D models. A Web user can browse the Foundational Model (FM), then click to request that a 3-D scene be created of an object and its parts or branches. The scene is rendered by a graphics server, and a snapshot is sent to the Web client. The user can then manipulate the scene, adding new structures, deleting structures, rotating the scene, zooming, and saving the scene as a VRML file. Applications such as this, when fully realized with fast rendering and more anatomical content, have the potential to significantly change the way computers are used in anatomy education.

  15. Visual search for changes in scenes creates long-term, incidental memory traces.

    Science.gov (United States)

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  16. S3-2: Colorfulness Perception Adapting to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Yoko Mizokami

    2012-10-01

    Our visual system has the ability to adapt to the color characteristics of the environment and maintain a stable color appearance. Much research on chromatic adaptation and color constancy has suggested that different levels of visual processing are involved in the adaptation mechanism. In the case of colorfulness perception, it has been shown that the perception changes with adaptation to chromatic contrast modulation and to surrounding chromatic variance. However, it is still not clear how the perception changes in natural scenes and what levels of visual mechanisms contribute to it. Here, I will mainly present our recent work on colorfulness adaptation in natural images. In the experiment, we examined whether the colorfulness perception of an image was influenced by adaptation to natural images with different degrees of saturation. Natural and unnatural (shuffled or phase-scrambled) images were used for adapting and test images, and all combinations of adapting and test images were tested (e.g., the combination of natural adapting images and a shuffled test image). The results show that colorfulness perception was influenced by adaptation to the saturation of images. A test image appeared less colorful after adaptation to saturated images, and vice versa. The effect of colorfulness adaptation was the strongest for the combination of natural adapting and natural test images. The fact that the naturalness of the spatial structure in an image affects the strength of the adaptation effect implies that the recognition of natural scenes plays an important role in the adaptation mechanism.

  17. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Dal Corso, Alessandro; Nielsen, Jannik Boll

    2017-01-01

    Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content…

  18. Dynamic Frames Based Generation of 3D Scenes and Applications

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2015-05-01

    Modern graphics/programming tools like Unity enable the creation of 3D scenes as well as 3D-scene-based applications, including a full physical model, motion, sounds, lighting effects, etc. This paper deals with the use of a dynamic-frames-based generator for the automatic generation of a 3D scene and the related source code. The suggested model makes it possible to specify features of the 3D scene in the form of a textual specification, as well as to export such features from a 3D tool. This approach enables a higher level of code-generation flexibility and the reusability of the main code and scene artifacts in the form of textual templates. An example of a generated application is presented and discussed.

  19. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  20. The auditory scene: an fMRI study on melody and accompaniment in professional pianists.

    Science.gov (United States)

    Spada, Danilo; Verga, Laura; Iadanza, Antonella; Tettamanti, Marco; Perani, Daniela

    2014-11-15

    The auditory scene is a mental representation of individual sounds extracted from the summed sound waveform reaching the ears of the listeners. Musical contexts represent particularly complex cases of auditory scenes. In such a scenario, melody may be seen as the main object moving on a background represented by the accompaniment. Both melody and accompaniment vary in time according to harmonic rules, forming a typical texture with melody in the most prominent, salient voice. In the present sparse acquisition functional magnetic resonance imaging study, we investigated the interplay between melody and accompaniment in trained pianists, by observing the activation responses elicited by processing: (1) melody placed in the upper and lower texture voices, leading to, respectively, a higher and lower auditory salience; (2) harmonic violations occurring in either the melody, the accompaniment, or both. The results indicated that the neural activation elicited by the processing of polyphonic compositions in expert musicians depends upon the upper versus lower position of the melodic line in the texture, and showed an overall greater activation for the harmonic processing of melody over accompaniment. Both these two predominant effects were characterized by the involvement of the posterior cingulate cortex and precuneus, among other associative brain regions. We discuss the prominent role of the posterior medial cortex in the processing of melodic and harmonic information in the auditory stream, and propose to frame this processing in relation to the cognitive construction of complex multimodal sensory imagery scenes. Copyright © 2014 Elsevier Inc. All rights reserved.

  1. Radiative transfer model for heterogeneous 3-D scenes

    Science.gov (United States)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A general mathematical framework for simulating processes in heterogeneous 3-D scenes is presented. Specifically, a model was designed and coded for application to radiative transfers in vegetative scenes. The model is unique in that it predicts (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy, (2) the spectral absorption as a function of location within the scene, and (3) the directional spectral radiance as a function of the sensor's location within the scene. The model was shown to follow known physical principles of radiative transfer. Initial verification of the model as applied to a soybean row crop showed that the simulated directional reflectance data corresponded relatively well in gross trends to the measured data. However, the model can be greatly improved by incorporating more sophisticated and realistic anisotropic scattering algorithms

  2. Hierarchy-associated semantic-rule inference framework for classifying indoor scenes

    Science.gov (United States)

    Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei

    2016-03-01

    Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.
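
    A much-simplified sketch of the "annotate, then infer with hierarchical rules" idea: represent each image by counts of annotated object categories (the low-level annotation output) and train a decision tree whose learned splits act as classification rules. The feature set and the scikit-learn pipeline are assumptions for illustration, not the EBA/MRB implementation.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical low-level annotation output: per-image counts of detected object categories.
object_categories = ["bed", "sofa", "stove", "sink", "desk", "monitor"]
X = np.array([
    [1, 0, 0, 0, 0, 0],   # bedroom
    [0, 1, 0, 0, 0, 1],   # living room
    [0, 0, 1, 1, 0, 0],   # kitchen
    [0, 0, 0, 0, 1, 1],   # office
    [1, 0, 0, 1, 0, 0],   # bedroom with an en-suite sink
    [0, 1, 0, 0, 1, 0],   # living room with a desk
])
y = ["bedroom", "living_room", "kitchen", "office", "bedroom", "living_room"]

# High-level inference: the tree's splits play the role of hierarchical semantic rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=object_categories))

new_scene = np.array([[0, 0, 1, 1, 0, 0]])          # stove + sink detected
print("predicted scene:", tree.predict(new_scene)[0])
```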

  3. Cognitive organization of roadway scenes : an empirical study.

    NARCIS (Netherlands)

    Gundy, C.M.

    1995-01-01

    This report describes six studies investigating the cognitive organization of roadway scenes. These scenes were represented by still photographs taken on a number of roads outside of built-up areas. Seventy-eight drivers, stratified by age and sex to simulate the Dutch driving population,

  4. A view not to be missed: Salient scene content interferes with cognitive restoration

    Science.gov (United States)

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  5. A view not to be missed: Salient scene content interferes with cognitive restoration.

    Directory of Open Access Journals (Sweden)

    Alexander P N Van der Jagt

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration.

  6. Performance Benefits with Scene-Linked HUD Symbology: An Attentional Phenomenon?

    Science.gov (United States)

    Levy, Jonathan L.; Foyle, David C.; McCann, Robert S.; Null, Cynthia H. (Technical Monitor)

    1999-01-01

    Previous research has shown that in a simulated flight task, navigating a path defined by ground markers while maintaining a target altitude is more accurate when an altitude indicator appears in a virtual "scenelinked" format (projected symbology moving as if it were part of the out-the-window environment) compared to the fixed-location, superimposed format found on present-day HUDs (Foyle, McCann & Shelden, 1995). One explanation of the scene-linked performance advantage is that attention can be divided between scene-linked symbology and the outside world more efficiently than between standard (fixed-position) HUD symbology and the outside world. The present study tested two alternative explanations by manipulating the location of the scene-linked HUD symbology relative to the ground path markers. Scene-linked symbology yielded better ground path-following performance than standard fixed-location superimposed symbology regardless of whether the scene-linked symbology appeared directly along the ground path or at various distances off the path. The results support the explanation that the performance benefits found with scene-linked symbology are attentional.

  7. Crime Scene Investigation.

    Science.gov (United States)

    Harris, Barbara; Kohlmeier, Kris; Kiel, Robert D.

    Casting students in grades 5 through 12 in the roles of reporters, lawyers, and detectives at the scene of a crime, this interdisciplinary activity involves participants in the intrigue and drama of crime investigation. Using a hands-on, step-by-step approach, students work in teams to investigate a crime and solve a mystery. Through role-playing…

  8. Perceptual salience affects the contents of working memory during free-recollection of objects from natural scenes

    Directory of Open Access Journals (Sweden)

    Tiziana ePedale

    2015-02-01

    Full Text Available One of the most important issues in the study of cognition is to understand which factors determine the internal representation of the external world. Previous literature has begun to highlight the impact of low-level sensory features (indexed by saliency maps) in driving attention selection, hence increasing the probability that objects presented in complex and natural scenes are successfully encoded into working memory (WM) and then correctly remembered. Here we asked whether the probability of retrieving high-saliency objects modulates the overall contents of WM, by decreasing the probability of retrieving other, lower-saliency objects. We presented pictures of natural scenes for 4 s. After a retention period of 8 s, we asked participants to verbally report as many objects/details as possible from the previous scenes. We then computed how many times the objects located at the peaks of maximal and minimal saliency in the scene (as indexed by a saliency map; Itti et al., 1998) were recollected by participants. Results showed that maximal-saliency objects were recollected more often and earlier in the stream of successfully reported items than minimal-saliency objects. This indicates that bottom-up sensory salience increases the recollection probability and facilitates access to memory representations at retrieval, respectively. Moreover, recollection of the maximal- (but not the minimal-) saliency objects predicted the overall amount of successfully recollected objects: the higher the probability of having successfully reported the most salient object in the scene, the lower the amount of recollected objects. These findings highlight that bottom-up sensory saliency modulates the current contents of WM during recollection of objects from natural scenes, most likely by reducing the resources available to encode and then retrieve other (lower-saliency) objects.
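
    The saliency-peak selection described above can be approximated as follows. This sketch does not implement the full Itti, Koch, and Niebur (1998) model: a simple centre-surround difference of Gaussian-blurred intensities stands in for the saliency map, from which the maximal- and minimal-saliency locations are read off. The image file name is a placeholder.

```python
# Locating the most and least salient points of a scene.  A centre-surround
# difference of Gaussian-blurred intensities stands in for the full
# Itti et al. (1998) saliency model; "scene.jpg" is a placeholder image.
import numpy as np
from scipy import ndimage
from imageio.v2 import imread

img = imread("scene.jpg").astype(float)
if img.ndim == 3:                                   # collapse RGB to intensity
    img = img.mean(axis=2)

center = ndimage.gaussian_filter(img, sigma=2)
surround = ndimage.gaussian_filter(img, sigma=16)
saliency = ndimage.gaussian_filter(np.abs(center - surround), sigma=8)

peak_max = np.unravel_index(np.argmax(saliency), saliency.shape)
peak_min = np.unravel_index(np.argmin(saliency), saliency.shape)
print("most salient location (row, col):", peak_max)
print("least salient location (row, col):", peak_min)
```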

  9. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    Science.gov (United States)

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly-controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which is being discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in kitchen) and inconsistent in the other (e.g., ketchup in bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye-movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.) including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/.

  10. Sensory substitution: the spatial updating of auditory scenes ‘mimics’ the spatial updating of visual scenes

    Directory of Open Access Journals (Sweden)

    Achille ePasqualotto

    2016-04-01

    Full Text Available Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or ‘soundscapes’. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localising sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgement of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices.

  11. a Low-Cost Panoramic Camera for the 3d Documentation of Contaminated Crime Scenes

    Science.gov (United States)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight potentials and limits of this emerging and consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, given as input the 3D point clouds generated before and after e.g. the misplacement of evidence. All the algorithms adopted for panoramas pre-processing, photogrammetric 3D reconstruction, 3D geometry registration and analysis, are presented and currently available in open-source or low-cost software solutions.
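
    The register-then-compare workflow described above can be sketched with Open3D. The paper's automatic 3D feature-based alignment is replaced here by a plain ICP refinement for brevity; file names, the correspondence distance, and the change threshold are placeholder values.

```python
# Register the "after" cloud onto the "before" cloud and flag changed points.
# ICP refinement stands in for the paper's automatic feature-based alignment;
# file names, the correspondence distance and the 2 cm change threshold are
# placeholder values.
import numpy as np
import open3d as o3d

before = o3d.io.read_point_cloud("scene_before.ply")
after = o3d.io.read_point_cloud("scene_after.ply")

result = o3d.pipelines.registration.registration_icp(
    after, before, max_correspondence_distance=0.05, init=np.eye(4),
    estimation_method=o3d.pipelines.registration.TransformationEstimationPointToPoint())
after.transform(result.transformation)

# Cloud-to-cloud distances highlight misplaced, added or removed evidence.
distances = np.asarray(after.compute_point_cloud_distance(before))
changed = distances > 0.02
print(f"{changed.sum()} of {len(distances)} points exceed the change threshold")
```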

  12. Involuntary memories of emotional scenes: The effects of cue discriminability and emotion over time

    DEFF Research Database (Denmark)

    Staugaard, Søren Risløv; Berntsen, Dorthe

    2014-01-01

    a range of clinical disorders, there is no broadly agreed upon explanation of their underlying mechanisms and no successful experimental simulations of their retrieval. In a series of experiments, we experimentally manipulated the activation of involuntary episodic memories for emotional and neutral...... scenes and predicted their activation on the basis of manipulations carried out at encoding and retrieval. Our findings suggest that the interplay between cue discriminability at the time of retrieval and emotional arousal at the time of encoding are crucial for explaining intrusive memories following...... negative events. While cue distinctiveness is important directly following encoding of the scenes, emotional intensity influences retrieval after delays of 24 hr and 1 week. Voluntary remembering follows the same pattern as involuntary remembering. Our results suggest an explanatory model of intrusive...

  13. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    Science.gov (United States)

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  14. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    Science.gov (United States)

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  15. Graphics processing unit (GPU) real-time infrared scene generation

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  16. Scene perception and memory revealed by eye movements and receiver-operating characteristic analyses: does a cultural difference truly exist?

    Science.gov (United States)

    Evans, Kris; Rotello, Caren M; Li, Xingshan; Rayner, Keith

    2009-02-01

    Cultural differences have been observed in scene perception and memory: Chinese participants purportedly attend to the background information more than did American participants. We investigated the influence of culture by recording eye movements during scene perception and while participants made recognition memory judgements. Real-world pictures with a focal object on a background were shown to both American and Chinese participants while their eye movements were recorded. Later, memory for the focal object in each scene was tested, and the relationship between the focal object (studied, new) and the background context (studied, new) was manipulated. Receiver-operating characteristic (ROC) curves show that both sensitivity and response bias were changed when objects were tested in new contexts. However, neither the decrease in accuracy nor the response bias shift differed with culture. The eye movement patterns were also similar across cultural groups. Both groups made longer and more fixations on the focal objects than on the contexts. The similarity of eye movement patterns and recognition memory behaviour suggests that both Americans and Chinese use the same strategies in scene perception and memory.

  17. Picture models for 2-scene comics creating system

    Directory of Open Access Journals (Sweden)

    Miki UENO

    2015-03-01

    Full Text Available Recently, computer understanding of pictures and stories has become one of the most important research topics in computer science. However, there is little research on human-like understanding by computers, because pictures have no fixed format and contain a more lyrical aspect than natural language. For picture understanding, comics are a suitable target because they consist of clear, simple plots and separated scenes. In this paper, we propose two different types of picture models for a 2-scene comics creating system. We also show how the proposed picture models can be applied in the 2-scene comics creating system.

  18. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    Directory of Open Access Journals (Sweden)

    Quentin Lenoble

    2015-01-01

    Full Text Available Aims: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented with pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of visual global properties of scenes.

  19. Defining spatial relations in a specific ontology for automated scene creation

    Directory of Open Access Journals (Sweden)

    D. Contraş

    2013-06-01

    Full Text Available This paper presents an approach to building an ontology for automatic scene generation. Every scene contains various elements (backgrounds, characters, objects) which are spatially interrelated. The article focuses on these spatial and temporal relationships of the elements constituting a scene.

  20. System and method for extracting dominant orientations from a scene

    Science.gov (United States)

    Straub, Julian; Rosman, Guy; Freifeld, Oren; Leonard, John J.; Fisher, III; , John W.

    2017-05-30

    In one embodiment, a method of identifying the dominant orientations of a scene comprises representing a scene as a plurality of directional vectors. The scene may comprise a three-dimensional representation of a scene, and the plurality of directional vectors may comprise a plurality of surface normals. The method further comprises determining, based on the plurality of directional vectors, a plurality of orientations describing the scene. The determined plurality of orientations explains the directionality of the plurality of directional vectors. In certain embodiments, the plurality of orientations may have independent axes of rotation. The plurality of orientations may be determined by representing the plurality of directional vectors as lying on a mathematical representation of a sphere, and inferring the parameters of a statistical model to adapt the plurality of orientations to explain the positioning of the plurality of directional vectors lying on the mathematical representation of the sphere.
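
    A much-simplified stand-in for the orientation inference is to cluster unit surface normals on the sphere and read the dominant orientations off the cluster centres. The sketch below is not the patented statistical-model inference; the synthetic normals, the hemisphere folding, and the number of clusters are illustrative assumptions.

```python
# Simplified dominant-orientation extraction: cluster unit surface normals and
# take the re-normalized cluster centres as dominant directions.  This is a
# stand-in for the statistical sphere model described above; the synthetic
# normals and the number of clusters are illustrative assumptions.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
axes = np.array([[1.0, 0, 0], [0, 1.0, 0], [0, 0, 1.0]])
normals = np.repeat(axes, 200, axis=0) + 0.05 * rng.standard_normal((600, 3))
normals /= np.linalg.norm(normals, axis=1, keepdims=True)
normals[normals[:, 2] < 0] *= -1   # flip negative-z normals so antipodal vectors coincide (toy data)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(normals)
dominant = km.cluster_centers_ / np.linalg.norm(km.cluster_centers_, axis=1, keepdims=True)
print("dominant orientations (unit vectors):\n", dominant.round(3))
```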

  1. AR goggles make crime scene investigation a desk job

    OpenAIRE

    Aron, Jacob; NORTHFIELD, Dean

    2012-01-01

    CRIME scene investigators could one day help solve murders without leaving the office. A pair of augmented reality glasses could allow local police to virtually tag objects in a crime scene, and build a clean record of the scene in 3D video before evidence is removed for processing.\\ud The system, being developed by Oytun Akman and colleagues at the Delft University of Technology in the Netherlands, consists of a head-mounted display receiving 3D video from a pair of attached cameras controll...

  2. Gay and Lesbian Scene in Metelkova

    Directory of Open Access Journals (Sweden)

    Nataša Velikonja

    2013-09-01

    Full Text Available The article deals with the development of the gay and lesbian scene in ACC Metelkova, while specifying the preliminary aspects of establishing and building gay and lesbian activism associated with spatial issues. The struggle for space or occupying public space is vital for the gay and lesbian scene, as it provides not only the necessary socializing opportunities for gays and lesbians, but also does away with the historical hiding of homosexuality in the closet, in seclusion and silence. Because of their autonomy and long-term, continuous existence, homo-clubs at Metelkova contributed to the consolidation of the gay and lesbian scene in Slovenia and significantly improved the opportunities for cultural, social and political expression of gays and lesbians. Such a synthesis of the cultural, social and political, further intensified in Metelkova, and characterizes the gay and lesbian community in Slovenia from the very outset of gay and lesbian activism in 1984. It is this long-term synthesis that keeps this community in Slovenia so vital and politically resilient.

  3. Integration of heterogeneous features for remote sensing scene classification

    Science.gov (United States)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features with different types for RS scene classification. The proposed method is composed of three modules (1) heterogeneous features extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated, (2) heterogeneous features fusion, where the multiple kernel learning (MKL) is utilized to integrate the heterogeneous features, and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
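
    The fusion step can be illustrated with a fixed-weight kernel combination, a simplification of MKL in which the per-feature kernel weights are not learned. The feature arrays below are random placeholders standing in for DS-SURF-LLC, mean-Std-LLC, and MS-CLBP features.

```python
# Heterogeneous-feature fusion sketch: one RBF kernel per feature type,
# combined with fixed equal weights (real MKL would learn the weights),
# then classified with a precomputed-kernel SVM.  Feature arrays are
# placeholders standing in for DS-SURF-LLC, mean-Std-LLC and MS-CLBP.
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_train, n_test = 120, 30
feats_train = [rng.standard_normal((n_train, d)) for d in (200, 64, 128)]
feats_test = [rng.standard_normal((n_test, d)) for d in (200, 64, 128)]
y_train = rng.integers(0, 6, n_train)            # e.g. a 6-class RS dataset

def combined_kernel(A_list, B_list, weights):
    return sum(w * rbf_kernel(A, B) for w, A, B in zip(weights, A_list, B_list))

w = [1 / 3] * 3                                   # fixed, not learned
K_train = combined_kernel(feats_train, feats_train, w)
K_test = combined_kernel(feats_test, feats_train, w)

clf = SVC(kernel="precomputed").fit(K_train, y_train)
print(clf.predict(K_test)[:10])
```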

  4. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.
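
    A minimal version of the frame-classification and texture-change idea is sketched below, assuming librosa and scikit-learn are available; the training features, audio file, window length, and boundary threshold are placeholders rather than the authors' settings.

```python
# Audio-texture sketch: per-frame MFCCs are scored by per-class GMMs, and a
# scene boundary is flagged where the class histogram of adjacent windows
# changes sharply.  Training data, file name, and classes are placeholders.
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["speech", "music", "silence"]
train_feats = {c: rng.standard_normal((500, 13)) for c in classes}  # stand-in MFCCs
gmms = {c: GaussianMixture(n_components=4, random_state=0).fit(X)
        for c, X in train_feats.items()}

y, sr = librosa.load("episode.wav", sr=16000)                 # placeholder file
mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13).T          # (frames, 13)
labels = np.array([[g.score(f[None]) for g in gmms.values()] for f in mfcc]).argmax(1)

win = 200                                                     # frames per window
hists = [np.bincount(labels[i:i + win], minlength=len(classes)) / win
         for i in range(0, len(labels) - win, win)]
change = [np.abs(hists[i + 1] - hists[i]).sum() for i in range(len(hists) - 1)]
boundaries = [i for i, c in enumerate(change) if c > 0.5]     # ad hoc threshold
print("candidate audio-scene boundaries (window index):", boundaries)
```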

  5. Simulator scene display evaluation device

    Science.gov (United States)

    Haines, R. F. (Inventor)

    1986-01-01

    An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. Laser directs beam at double right prism which is attached to pivoting support on base. The pivot point of the prism is located at the design eye point (DEP) of simulator during the aligning and calibrating. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into a position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable about the pivot into and out of position behind the eyepiece.

  6. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) were not, consistent with their having been programmed before the transition and thus with parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  7. [Study on the modeling of earth-atmosphere coupling over rugged scenes for hyperspectral remote sensing].

    Science.gov (United States)

    Zhao, Hui-Jie; Jiang, Cheng; Jia, Guo-Rui

    2014-01-01

    Adjacency effects may introduce errors in the quantitative applications of hyperspectral remote sensing, of which a significant component is the earth-atmosphere coupling radiance. Moreover, the surrounding relief and shadows induce strong changes in hyperspectral images acquired over rugged terrain, so the spectral characteristics are not described accurately, and the radiative coupling process between the earth and the atmosphere is more complex over rugged scenes. In order to meet the requirements of real-time processing in data simulation, an equivalent background reflectance was developed by taking into account the topography and the geometry between surroundings and targets, based on the radiative transfer process. The contributions of the coupling to the signal at sensor level were then evaluated. This approach was integrated into the sensor-level radiance simulation model and then validated by simulating a set of actual radiance data. The results show that the visual effect of simulated images is consistent with that of observed images and that the spectral similarity is improved over rugged scenes. In addition, the model precision is maintained at the same level over flat scenes.

  8. Synchronous contextual irregularities affect early scene processing: replication and extension.

    Science.gov (United States)

    Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y

    2014-04-01

    Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction-time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window, even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Multi-view 3D scene reconstruction using ant colony optimization techniques

    International Nuclear Information System (INIS)

    Chrysostomou, Dimitrios; Gasteratos, Antonios; Nalpantidis, Lazaros; Sirakoulis, Georgios C

    2012-01-01

    This paper presents a new method performing high-quality 3D object reconstruction of complex shapes derived from multiple, calibrated photographs of the same scene. The novelty of this research is found in two basic elements, namely: (i) a novel voxel dissimilarity measure, which accommodates the elimination of the lighting variations of the models and (ii) the use of an ant colony approach for further refinement of the final 3D models. The proposed reconstruction procedure employs a volumetric method based on a novel projection test for the production of a visual hull. While the presented algorithm shares certain aspects with the space carving algorithm, it is, nevertheless, first enhanced with the lightness compensating image comparison method, and then refined using ant colony optimization. The algorithm is fast, computationally simple and results in accurate representations of the input scenes. In addition, compared to previous publications, the particular nature of the proposed algorithm allows accurate 3D volumetric measurements under demanding lighting environmental conditions, due to the fact that it can cope with uneven light scenes, resulting from the characteristics of the voxel dissimilarity measure applied. Besides, the intelligent behavior of the ant colony framework provides the opportunity to formulate the process as a combinatorial optimization problem, which can then be solved by means of a colony of cooperating artificial ants, resulting in very promising results. The method is validated with several real datasets, along with qualitative comparisons with other state-of-the-art 3D reconstruction techniques, following the Middlebury benchmark. (paper)

  10. Motivational Objects in Natural Scenes (MONS: A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

    Full Text Available In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS. The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object” being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral, while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1 Desire to own the object; (2 Approach/Avoid; (3 Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  11. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  12. Do reference surfaces influence exocentric pointing?

    NARCIS (Netherlands)

    Doumen, M. J A; Kappers, A. M L; Koenderink, Jan J.

    All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer's egocentric distance estimation of an object. We tested

  13. Using MODIS spectral information to classify sea ice scenes for CERES radiance-to-flux inversion

    Science.gov (United States)

    Corbett, J.; Su, W.; Liang, L.; Eitzen, Z.

    2013-12-01

    The Clouds and Earth's Radiant Energy System (CERES) instruments on NASA's Terra and Aqua satellites measure the shortwave (SW) radiance reflected from the Earth. In order to provide an estimate of the top-of-atmosphere reflected SW flux we need to know the anisotropy of the radiance reflected from the scene. Sea ice scenes are particularly complex due to the wide range of surface conditions that comprise sea ice. For example, the anisotropy of snow-covered sea ice is quite different to that of sea ice with melt-ponds. To attempt to provide a consistent scene classification we have developed the Sea Ice Brightness Index (SIBI). The SIBI is defined as one minus the normalized difference between reflectances from the 0.469 micron and 0.858 micron bands from the Moderate Resolution Imaging Spectroradiometer (MODIS) instrument. For brighter snow-covered sea ice scenes the SIBI value is close to 1.0. As the surface changes to bare ice, melt ponds, etc. the SIBI decreases. For open water the SIBI value is around 0.2-0.3. The SIBI exhibits no dependence on viewing zenith or solar zenith angle, allowing for consistent scene identification. To use the SIBI we classify clear-sky CERES fields of view over sea ice into three groups (SIBI≥0.935, 0.935>SIBI≥0.85, and SIBI<0.85) and construct SIBI-based ADMs. Using the second metric, we see a reduction in the latitude/longitude binned mean RMS error between the ADM-predicted radiance and the measured radiance from 8% to 7% in May and from 17% to 12% in July. These improvements suggest that using the SIBI to account for changes in the sea ice surface will lead to improved CERES flux retrievals.
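
    The SIBI definition and three-bin grouping above translate directly into code. The reflectance values below are illustrative placeholders for co-located MODIS band reflectances within a CERES field of view.

```python
# SIBI = 1 - (R_0.469 - R_0.858) / (R_0.469 + R_0.858), computed per footprint
# and binned into the three groups listed above.  The reflectances are
# illustrative placeholders for co-located MODIS band values.
import numpy as np

r_469 = np.array([0.85, 0.55, 0.06])     # 0.469 micron (blue) reflectance
r_858 = np.array([0.83, 0.40, 0.02])     # 0.858 micron (near-infrared) reflectance

sibi = 1.0 - (r_469 - r_858) / (r_469 + r_858)

def sibi_group(s):
    if s >= 0.935:
        return "group 1 (SIBI >= 0.935)"
    if s >= 0.85:
        return "group 2 (0.85 <= SIBI < 0.935)"
    return "group 3 (SIBI < 0.85)"

for s in sibi:
    print(f"SIBI = {s:.3f} -> {sibi_group(s)}")
```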

  14. Improved content aware scene retargeting for retinitis pigmentosa patients

    Directory of Open Access Journals (Sweden)

    Al-Atabany Walid I

    2010-09-01

    Full Text Available Abstract Background In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable for implementation on portable devices, and thus has potential for augmented reality systems to provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm on shrinking the visual scene into the remaining field of view for those patients. Methods Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm which removes low-importance information while maintaining the size of the significant features. Previous approaches in this field have included seam carving, which removes low-importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of these two previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach. We then use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time without prior knowledge of future frames. Results We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content preservation perspective and a compression quality perspective. Our technique has also been evaluated and tested in a trial that included 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes up to 50% with minimal distortion. We also demonstrate efficacy in its use for those with simulated tunnel vision of 22 degrees of field of view or less. Conclusions Our approach allows us to perform content aware video
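
    The shared ingredient of seam carving and shrinkability, a per-pixel importance (energy) map, can be sketched as follows. This illustrates only the importance-map stage with a crude per-column shrink allocation, not the authors' real-time hybrid algorithm; the frame path and target width are placeholders.

```python
# Importance-map stage shared by seam carving and shrinkability: gradient
# magnitude as per-pixel importance, summed per column to obtain shrink
# weights.  The crude column re-allocation below only illustrates the idea;
# "frame.png" and the 50% target width are placeholders.
import cv2
import numpy as np

img = cv2.imread("frame.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY).astype(np.float32)
gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0, ksize=3)
gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1, ksize=3)
energy = np.abs(gx) + np.abs(gy)                 # per-pixel importance

col_weight = energy.sum(axis=0)
col_weight /= col_weight.sum()

target_width = img.shape[1] // 2                 # shrink scene to ~50%
keep = np.round(col_weight * target_width).astype(int)   # copies kept per source column
parts = [np.repeat(img[:, i:i + 1], k, axis=1) for i, k in enumerate(keep) if k > 0]
shrunk = np.concatenate(parts, axis=1)           # width is approximately target_width
cv2.imwrite("frame_shrunk.png", shrunk)
```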

  15. Separate and simultaneous adjustment of light qualities in a real scene

    NARCIS (Netherlands)

    Xia, L.; Pont, S.C.; Heynderickx, I.E.J.R.

    2017-01-01

    Humans are able to estimate light field properties in a scene in that they have expectations of the objects' appearance inside it. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted a real scene with regard to its lighting. But how well are observers

  16. Local self-similarity descriptor for point-of-interest reconstruction of real-world scenes

    International Nuclear Information System (INIS)

    Gao, Xianglu; Wan, Weibing; Zhao, Qunfei; Zhang, Xianmin

    2015-01-01

    Scene reconstruction is commonly used in close-range photogrammetry, with diverse applications in fields such as industry, biology, and aerospace. Existing surface or wireframe three-dimensional (3D) model reconstruction applications, however, are either too complex or too inflexible to accommodate various types of real-world scenes. This paper proposes an algorithm for acquiring point-of-interest (referred to throughout the study as POI) coordinates in 3D space, based on multi-view geometry and a local self-similarity descriptor. After reconstructing several POIs specified by a user, a concise and flexible target object measurement method, which obtains the distance between POIs, is described in detail. The proposed technique is able to measure targets with high accuracy even in the presence of obstacles and non-Lambertian surfaces. The method is so flexible that target objects can be measured with a handheld digital camera. Experimental results further demonstrate the effectiveness of the algorithm. (paper)

  17. Recognizing the Stranger: Recognition Scenes in the Gospel of John

    DEFF Research Database (Denmark)

    Larsen, Kasper Bro

    Recognizing the Stranger is the first monographic study of recognition scenes and motifs in the Gospel of John. The recognition type-scene (anagnōrisis) was a common feature in ancient drama and narrative, highly valued by Aristotle as a touching moment of truth, e.g., in Oedipus’ tragic self...... structures of the type-scene in order to show how Jesus’ true identity can be recognized behind the half-mask of his human appearance....

  18. Memory for Items and Relationships among Items Embedded in Realistic Scenes: Disproportionate Relational Memory Impairments in Amnesia

    Science.gov (United States)

    Hannula, Deborah E.; Tranel, Daniel; Allen, John S.; Kirchhoff, Brenda A.; Nickel, Allison E.; Cohen, Neal J.

    2014-01-01

    Objective The objective of this study was to examine the dependence of item memory and relational memory on medial temporal lobe (MTL) structures. Patients with amnesia, who either had extensive MTL damage or damage that was relatively restricted to the hippocampus, were tested, as was a matched comparison group. Disproportionate relational memory impairments were predicted for both patient groups, and those with extensive MTL damage were also expected to have impaired item memory. Method Participants studied scenes, and were tested with interleaved two-alternative forced-choice probe trials. Probe trials were either presented immediately after the corresponding study trial (lag 1), five trials later (lag 5), or nine trials later (lag 9) and consisted of the studied scene along with a manipulated version of that scene in which one item was replaced with a different exemplar (item memory test) or was moved to a new location (relational memory test). Participants were to identify the exact match of the studied scene. Results As predicted, patients were disproportionately impaired on the test of relational memory. Item memory performance was marginally poorer among patients with extensive MTL damage, but both groups were impaired relative to matched comparison participants. Impaired performance was evident at all lags, including the shortest possible lag (lag 1). Conclusions The results are consistent with the proposed role of the hippocampus in relational memory binding and representation, even at short delays, and suggest that the hippocampus may also contribute to successful item memory when items are embedded in complex scenes. PMID:25068665

  19. Eye tracking to evaluate evidence recognition in crime scene investigations.

    Science.gov (United States)

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total
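
    The Needleman-Wunsch comparison of AOI visit sequences can be sketched as below; the match, mismatch, and gap scores, as well as the example AOI labels, are illustrative and are not taken from the study.

```python
# Needleman-Wunsch global alignment score for two AOI visit sequences.
# Match/mismatch/gap scores are illustrative; the study's exact scoring
# parameters are not reproduced here.
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    n, m = len(seq_a), len(seq_b)
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# AOI labels such as "body", "weapon", "window" are hypothetical examples.
expert = ["door", "body", "weapon", "window", "body"]
novice = ["door", "window", "body", "body", "table"]
print("alignment score:", needleman_wunsch(expert, novice))
```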

  20. Do reference surfaces influence exocentric pointing?

    NARCIS (Netherlands)

    Doumen, M.J.A.; Kappers, A.M.L.; Koenderink, J.J.

    2008-01-01

    All elements of the visual field are known to influence the perception of the egocentric distances of objects. Not only the ground surface of a scene, but also the surface at the back or other objects in the scene can affect an observer’s egocentric distance estimation of an object. We tested

  1. The role of scene type and priming in the processing and selection of a spatial frame of reference

    Directory of Open Access Journals (Sweden)

    Katrin eJohannsen

    2013-04-01

    Full Text Available The selection and processing of a spatial frame of reference (FOR) in interpreting verbal scene descriptions is of great interest to psycholinguistics. In this study, we focus on the choice between the relative and the intrinsic FOR, addressing two questions: (a) does the presence or absence of a background in the scene influence the selection of a FOR, and (b) what is the effect of a previously selected FOR on the subsequent processing of a different FOR. Our results show that if a scene includes a realistic background, the selection of the relative FOR becomes more likely. We attribute this effect to the facilitation of mental simulation, which enhances the relation between the viewer and the objects. With respect to response accuracy, we found both a higher accuracy (due to FOR priming) and a lower accuracy (due to a different FOR), while for the response latencies we only found a delay effect.

  2. Scene text recognition in mobile applications by character descriptor and structure configuration.

    Science.gov (United States)

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scene can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene image. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure at each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides us some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  3. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
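
    The voxel-wise modelling procedure, fitting a regularized linear model per voxel and scoring it on withheld data, can be sketched generically. All data below are synthetic placeholders, and ridge regression stands in for whatever regularized regression the authors used.

```python
# Minimal voxel-wise encoding-model sketch: fit a regularized linear model
# from scene features (e.g. Fourier power, distance, or object-category
# features) to each voxel's response, then score variance explained on a
# held-out set.  All data here are synthetic placeholders.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)
n_scenes, n_features, n_voxels = 1386, 50, 200
X = rng.standard_normal((n_scenes, n_features))            # one feature vector per scene
true_w = rng.standard_normal((n_features, n_voxels))
Y = X @ true_w + 2.0 * rng.standard_normal((n_scenes, n_voxels))   # noisy voxel responses

train, test = slice(0, 1200), slice(1200, n_scenes)
model = Ridge(alpha=10.0).fit(X[train], Y[train])
pred = model.predict(X[test])

var_explained = np.array([r2_score(Y[test][:, v], pred[:, v]) for v in range(n_voxels)])
print("median held-out variance explained:", np.round(np.median(var_explained), 3))
```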

  4. Radio Wave Propagation Scene Partitioning for High-Speed Rails

    Directory of Open Access Journals (Sweden)

    Bo Ai

    2012-01-01

    Full Text Available Radio wave propagation scene partitioning is necessary for wireless channel modeling. As far as we know, there are no standards of scene partitioning for high-speed rail (HSR scenarios, and therefore we propose the radio wave propagation scene partitioning scheme for HSR scenarios in this paper. Based on our measurements along the Wuhan-Guangzhou HSR, Zhengzhou-Xian passenger-dedicated line, Shijiazhuang-Taiyuan passenger-dedicated line, and Beijing-Tianjin intercity line in China, whose operation speeds are above 300 km/h, and based on the investigations on Beijing South Railway Station, Zhengzhou Railway Station, Wuhan Railway Station, Changsha Railway Station, Xian North Railway Station, Shijiazhuang North Railway Station, Taiyuan Railway Station, and Tianjin Railway Station, we obtain an overview of HSR propagation channels and record many valuable measurement data for HSR scenarios. On the basis of these measurements and investigations, we partitioned the HSR scene into twelve scenarios. Further work on theoretical analysis based on radio wave propagation mechanisms, such as reflection and diffraction, may lead us to develop the standard of radio wave propagation scene partitioning for HSR. Our work can also be used as a basis for the wireless channel modeling and the selection of some key techniques for HSR systems.

  5. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved).

  6. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed

  7. Global Registration of 3D LiDAR Point Clouds Based on Scene Features: Application to Structured Environments

    Directory of Open Access Journals (Sweden)

    Julia Sanchez

    2017-09-01

    Full Text Available Acquiring 3D data with LiDAR systems involves scanning multiple scenes from different points of view. In actual systems, the ICP algorithm (Iterative Closest Point) is commonly used to register the acquired point clouds together to form a unique one. However, this method faces local minima issues and often needs a coarse initial alignment to converge to the optimum. This paper develops a new method for registration adapted to indoor environments and based on structure priors of such scenes. Our method works without odometric data or physical targets. The rotation and translation of the rigid transformation are computed separately, using, respectively, the Gaussian image of the point clouds and a correlation of histograms. To evaluate our algorithm on challenging registration cases, two datasets were acquired and are available for comparison with other methods online. The evaluation of our algorithm on four datasets against six existing methods shows that the proposed method is more robust against sampling and scene complexity. Moreover, the time performances enable a real-time implementation.
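
    The histogram-correlation idea for the translation component can be illustrated in one dimension, assuming the rotation has already been resolved. The synthetic z-coordinates, bin width, and histogram range below are placeholders, not the paper's settings.

```python
# One-axis illustration of translation recovery by histogram correlation:
# bin the z-coordinates of both clouds on a common grid and take the shift
# that maximizes their cross-correlation.  Synthetic points stand in for
# the rotation-aligned LiDAR clouds.
import numpy as np

rng = np.random.default_rng(0)
cloud_a_z = rng.normal(loc=[0.0, 2.5, 3.0], scale=0.05, size=(500, 3)).ravel()
true_shift = 0.40
cloud_b_z = cloud_a_z + true_shift                     # second scan, shifted along z

bin_width = 0.02
edges = np.arange(-1.0, 5.0, bin_width)
hist_a, _ = np.histogram(cloud_a_z, bins=edges)
hist_b, _ = np.histogram(cloud_b_z, bins=edges)

corr = np.correlate(hist_b.astype(float), hist_a.astype(float), mode="full")
lag = np.argmax(corr) - (len(hist_a) - 1)
print("estimated z-translation:", round(lag * bin_width, 3), " true:", true_shift)
```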

  8. SAMPEG: a scene-adaptive parallel MPEG-2 software encoder

    NARCIS (Netherlands)

    Farin, D.S.; Mache, N.; With, de P.H.N.; Girod, B.; Bouman, C.A.; Steinbach, E.G.

    2001-01-01

    This paper presents a fully software-based MPEG-2 encoder architecture, which uses scene-change detection to optimize the Group-of-Picture (GOP) structure for the actual video sequence. This feature enables easy, lossless edit cuts at scene-change positions and it also improves overall picture
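
    A minimal sketch of the scene-adaptive GOP idea, assuming grayscale uint8 frames: a new GOP (I-frame) is started wherever a histogram-difference test flags a scene change. The bin count and threshold are illustrative assumptions, not values from the paper.

    import numpy as np

    def is_scene_change(prev_frame: np.ndarray, frame: np.ndarray, threshold: float = 0.4) -> bool:
        """Flag a scene change when the histogram total-variation distance exceeds a threshold."""
        h1 = np.histogram(prev_frame, bins=64, range=(0, 255))[0] / prev_frame.size
        h2 = np.histogram(frame, bins=64, range=(0, 255))[0] / frame.size
        return 0.5 * np.abs(h1 - h2).sum() > threshold

    def gop_start_indices(frames) -> list:
        """Frame indices where a new Group-of-Pictures should begin."""
        starts = [0]
        for i in range(1, len(frames)):
            if is_scene_change(frames[i - 1], frames[i]):
                starts.append(i)
        return starts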

  9. Review of On-Scene Management of Mass-Casualty Attacks

    Directory of Open Access Journals (Sweden)

    Annelie Holgersson

    2016-02-01

    Full Text Available Background: The scene of a mass-casualty attack (MCA entails a crime scene, a hazardous space, and a great number of people needing medical assistance. Public transportation has been the target of such attacks and involves a high probability of generating mass casualties. The review aimed to investigate challenges for on-scene responses to MCAs and suggestions made to counter these challenges, with special attention given to attacks on public transportation and associated terminals. Methods: Articles were found through PubMed and Scopus, “relevant articles” as defined by the databases, and a manual search of references. Inclusion criteria were that the article referred to attack(s and/or a public transportation-related incident and issues concerning formal on-scene response. An appraisal of the articles’ scientific quality was conducted based on an evidence hierarchy model developed for the study. Results: One hundred and five articles were reviewed. Challenges for command and coordination on scene included establishing leadership, inter-agency collaboration, multiple incident sites, and logistics. Safety issues entailed knowledge and use of personal protective equipment, risk awareness and expectations, cordons, dynamic risk assessment, defensive versus offensive approaches, and joining forces. Communication concerns were equipment shortfalls, dialoguing, and providing information. Assessment problems were scene layout and interpreting environmental indicators as well as understanding setting-driven needs for specialist skills and resources. Triage and treatment difficulties included differing triage systems, directing casualties, uncommon injuries, field hospitals, level of care, providing psychological and pediatric care. Transportation hardships included scene access, distance to hospitals, and distribution of casualties. Conclusion: Commonly encountered challenges during unintentional incidents were added to during MCAs

  10. Clandestine laboratory scene investigation and processing using portable GC/MS

    Science.gov (United States)

    Matejczyk, Raymond J.

    1997-02-01

    This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for the public and investigator safety and the environment are also important factors for rapid on-scene data generation. Here is described the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With the capability of GC/MS to separate and produce a 'chemical fingerprint' of compounds, it is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist the forensic investigators in collecting only pertinent evidence thereby reducing the amount of evidence to be transported, reducing chain of custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.

  11. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    Science.gov (United States)

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
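
    For reference, the two time-domain HRV measures named in this abstract are straightforward to compute from a series of R-R intervals; the snippet below, with made-up example values, is a generic illustration rather than the study's analysis code.

    import numpy as np

    def rmssd(rr_ms: np.ndarray) -> float:
        """Root mean square of successive R-R interval differences (parasympathetic index)."""
        diffs = np.diff(rr_ms)
        return float(np.sqrt(np.mean(diffs ** 2)))

    def sdrr(rr_ms: np.ndarray) -> float:
        """Standard deviation of R-R intervals (overall variability)."""
        return float(np.std(rr_ms, ddof=1))

    # Example with made-up intervals (milliseconds):
    # rmssd(np.array([812, 790, 845, 830, 805]))  ->  about 33.0 ms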

  12. Typical Toddlers' Participation in "Just-in-Time" Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study.

    Science.gov (United States)

    Holyfield, Christine; Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-08-15

    Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary "just in time" on an AAC application with minimized demands. A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10-22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age.

  13. How Perceived Distractor Distance Influences Reference Production: Effects of Perceptual Grouping in 2D and 3D Scenes

    NARCIS (Netherlands)

    Koolen, R.M.F.; Houben, E.; Huntjens, J.; Krahmer, E.J.; Bello, Paul; Guarini, Marcello; McShane, Marjorie; Scassellati, Brian

    2014-01-01

    This study explored two factors that might have an impact on how participants perceive distance between objects in a visual scene: perceptual grouping and presentation mode (2D versus 3D). More specifically, we examined how these factors affect language production, asking if they cause speakers to

  14. Real-time maritime scene simulation for ladar sensors

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  15. Cybersickness in the presence of scene rotational movements along different axes.

    Science.gov (United States)

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  16. “Getting out of downtown”: a longitudinal study of how street-entrenched youth attempt to exit an inner city drug scene

    Directory of Open Access Journals (Sweden)

    Rod Knight

    2017-05-01

    Full Text Available Abstract Background Urban drug “scenes” have been identified as important risk environments that shape the health of street-entrenched youth. New knowledge is needed to inform policy and programming interventions to help reduce youths’ drug scene involvement and related health risks. The aim of this study was to identify how young people envisioned exiting a local, inner-city drug scene in Vancouver, Canada, as well as the individual, social and structural factors that shaped their experiences. Methods Between 2008 and 2016, we draw on 150 semi-structured interviews with 75 street-entrenched youth. We also draw on data generated through ethnographic fieldwork conducted with a subgroup of 25 of these youth between. Results Youth described that, in order to successfully exit Vancouver’s inner city drug scene, they would need to: (a) secure legitimate employment and/or obtain education or occupational training; (b) distance themselves – both physically and socially – from the urban drug scene; and (c) reduce their drug consumption. As youth attempted to leave the scene, most experienced substantial social and structural barriers (e.g., cycling in and out of jail, the need to access services that are centralized within a place that they are trying to avoid), in addition to managing complex individual health issues (e.g., substance dependence). Factors that increased youth’s capacity to successfully exit the drug scene included access to various forms of social and cultural capital operating outside of the scene, including supportive networks of friends and/or family, as well as engagement with addiction treatment services (e.g., low-threshold access to methadone) to support cessation or reduction of harmful forms of drug consumption. Conclusions Policies and programming interventions that can facilitate young people’s efforts to reduce engagement with Vancouver’s inner-city drug scene are critically needed, including meaningful

  17. INFLUENCE OF COMPLEXITY ON THE MARGINAL LOGISTICAL COSTS

    Directory of Open Access Journals (Sweden)

    Juraj DUBOVEC

    2013-12-01

    Full Text Available The article describes the influence of complexity on logistical operations. An ideal example is the exchange of storage in a "kanban" system, which has a complexity equal to one. Client-directed production increases the time consumed in logistics when handling critical parts, whenever the ratio is not equal to a multiple of the relevant storage capacity. The economic consequence is an increase in costs.

  18. Framework of passive millimeter-wave scene simulation based on material classification

    Science.gov (United States)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful instruments in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night- and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, the sensor needs to be tested in a SoftWare-In-the-Loop (SWIL) setting. PMMW scene simulation is a key component in implementing such a simulator. However, no commercial off-the-shelf tool is available for constructing a PMMW scene simulation; there have only been a few studies on this technology. We have studied a PMMW scene simulation method in order to develop a PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensor modeling. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the relevant PMMW phenomenology: atmospheric propagation and emission, including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed using the surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through this simulation process, PMMW phenomenology and sensor characteristics are reflected in the output scene. We have finished the design of the framework of the simulator and are working on the detailed implementation. As a tentative result, a flight observation was simulated under specific conditions. After completing the implementation, we plan to increase the reliability of the simulation by collecting data
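
    To make the four-part chain concrete, here is a highly simplified sketch under standard radiometric assumptions for opaque surfaces (apparent temperature roughly equals emissivity * physical temperature + reflectivity * sky temperature), followed by aperture blur and receiver noise for the sensor stage. The material table, sky temperature, and sensor parameters are illustrative assumptions, not values from this framework.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    # Illustrative millimeter-wave emissivities per material class (assumed values).
    EMISSIVITY = {"metal": 0.05, "asphalt": 0.90, "grass": 0.95}

    def pmmw_raw_image(material_map, t_phys, t_sky=60.0):
        """Form a raw brightness-temperature image from a material classification map."""
        e = np.vectorize(EMISSIVITY.get)(material_map).astype(float)
        return e * t_phys + (1.0 - e) * t_sky

    def pmmw_sensor_output(raw, blur_sigma=2.0, noise_std=1.0, seed=0):
        """Apply aperture blur and additive receiver noise (the sensor-modeling stage)."""
        rng = np.random.default_rng(seed)
        return gaussian_filter(raw, sigma=blur_sigma) + rng.normal(0.0, noise_std, raw.shape)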

  19. Integration of Trace Images in Three-dimensional Crime Scene Reconstruction

    Directory of Open Access Journals (Sweden)

    Quentin Milliet

    2016-01-01

    Full Text Available Forensic image analysis has greatly developed with the proliferation of photography and video recording devices. Trace images of serious incidents are increasingly captured by first responders, witnesses, bystanders, or surveillance systems. Image perception is exposed with a special emphasis on the influence of the field of view on observation. In response to the pitfalls of the mental eye, a way to systematize the integration of images as traces in three-dimensional crime scene reconstruction is proposed. The systematic approach is based on the application of photogrammetric principles to slightly modify the usual photographic documentation as well as on the early collection and review of available trace images. The integration of images as traces provides valuable contributions to contextualize what happened at a crime scene based on the information that can be obtained from images. In a wider perspective, the systematic analysis of images fosters the use and interpretation of forensic evidence to complement witness statements in the criminal justice system. This article outlines the benefits of integrating trace images into a coherent reconstruction framework in order to improve interpretation of their content. A solution is proposed to integrate perception differences between the field of view of cameras and the human eye.

  20. Learning object-to-class kernels for scene classification.

    Science.gov (United States)

    Zhang, Lei; Zhen, Xiantong; Shao, Ling

    2014-08-01

    High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose the object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented, and with the O2C distances, we can represent the images using the object bank by lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Due to the explicit computation of O2C distances based on the object bank, the obtained representations can possess more semantic meanings. To combine the discriminant ability of the O2C distances to all scene classes, we further propose to kernalize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches can significantly improve the original object bank approach and achieve the state-of-the-art performance.
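
    A hedged sketch of one possible O2C variant (the paper proposes four): each image's object-bank response vector is mapped to a vector of distances to each class (here, the mean Euclidean distance to that class's training responses), and the resulting distance space is kernelized with an RBF kernel for the final classifier. Names and the specific distance are illustrative assumptions.

    import numpy as np

    def o2c_distances(x, train_feats, train_labels, classes):
        """Distance from one object-bank vector x to each class's training responses."""
        return np.array([
            np.linalg.norm(train_feats[train_labels == c] - x, axis=1).mean()
            for c in classes
        ])

    def rbf_kernel(D1, D2, gamma=0.1):
        """Kernel matrix between two sets of O2C distance vectors (rows)."""
        sq = ((D1[:, None, :] - D2[None, :, :]) ** 2).sum(axis=-1)
        return np.exp(-gamma * sq)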

  1. Human matching performance of genuine crime scene latent fingerprints.

    Science.gov (United States)

    Thompson, Matthew B; Tangen, Jason M; McCarthy, Duncan J

    2014-02-01

    There has been very little research into the nature and development of fingerprint matching expertise. Here we present the results of an experiment testing the claimed matching expertise of fingerprint examiners. Expert (n = 37), intermediate trainee (n = 8), new trainee (n = 9), and novice (n = 37) participants performed a fingerprint discrimination task involving genuine crime scene latent fingerprints, their matches, and highly similar distractors, in a signal detection paradigm. Results show that qualified, court-practicing fingerprint experts were exceedingly accurate compared with novices. Experts showed a conservative response bias, tending to err on the side of caution by making more errors of the sort that could allow a guilty person to escape detection than errors of the sort that could falsely incriminate an innocent person. The superior performance of experts was not simply a function of their ability to match prints, per se, but a result of their ability to identify the highly similar, but nonmatching fingerprints as such. Comparing these results with previous experiments, experts were even more conservative in their decision making when dealing with these genuine crime scene prints than when dealing with simulated crime scene prints, and this conservatism made them relatively less accurate overall. Intermediate trainees-despite their lack of qualification and average 3.5 years experience-performed about as accurately as qualified experts who had an average 17.5 years experience. New trainees-despite their 5-week, full-time training course or their 6 months experience-were not any better than novices at discriminating matching and similar nonmatching prints, they were just more conservative. Further research is required to determine the precise nature of fingerprint matching expertise and the factors that influence performance. The findings of this representative, lab-based experiment may have implications for the way fingerprint examiners testify in
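
    For readers unfamiliar with the signal detection quantities behind phrases such as "conservative response bias", the standard indices are easy to compute from hit and false-alarm rates; the snippet below is a generic illustration, not the study's analysis code (rates of exactly 0 or 1 would need a correction before use).

    from scipy.stats import norm

    def sdt_indices(hit_rate: float, false_alarm_rate: float):
        """Return (d', c): sensitivity and criterion; positive c indicates a conservative bias."""
        z_h = norm.ppf(hit_rate)
        z_fa = norm.ppf(false_alarm_rate)
        return z_h - z_fa, -(z_h + z_fa) / 2.0

    # Example: sdt_indices(0.90, 0.05) -> (about 2.93, about 0.18)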

  2. The time course of natural scene perception with reduced attention

    NARCIS (Netherlands)

    Groen, I.I.A.; Ghebreab, S.; Lamme, V.A.F.; Scholte, H.S.

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the

  3. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

    Full Text Available One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the processed aerial image through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: UC-Merced dataset with 21 scene categories, WHU-RS dataset with 19 scene categories, AID dataset with 30 scene categories, and NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over all state-of-the-art references.
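
    A minimal sketch of the fusion and classification stages, assuming the deep features from the RGB stream and the saliency stream have already been extracted as arrays: the two streams are fused by concatenation and fed to a basic extreme learning machine (random hidden layer, closed-form output weights). This illustrates the general technique, not the paper's exact configuration.

    import numpy as np

    def fuse(rgb_feats: np.ndarray, sal_feats: np.ndarray) -> np.ndarray:
        """Concatenation fusion of the two streams' features (one row per image)."""
        return np.concatenate([rgb_feats, sal_feats], axis=1)

    class ELM:
        """Extreme learning machine: random hidden layer, least-squares output weights."""
        def __init__(self, n_hidden: int = 1024, seed: int = 0):
            self.n_hidden = n_hidden
            self.rng = np.random.default_rng(seed)

        def fit(self, X: np.ndarray, y_onehot: np.ndarray):
            self.W = self.rng.standard_normal((X.shape[1], self.n_hidden))
            self.b = self.rng.standard_normal(self.n_hidden)
            H = np.tanh(X @ self.W + self.b)
            self.beta = np.linalg.pinv(H) @ y_onehot
            return self

        def predict(self, X: np.ndarray) -> np.ndarray:
            return (np.tanh(X @ self.W + self.b) @ self.beta).argmax(axis=1)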

  4. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithms. Because of its high spectral resolution, strong band continuity, anti-interference capability, and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in areas such as optoelectronic target detection, military defense, and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment at lower development cost and with a shorter development period. Meanwhile, visual simulation can produce large amounts of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a method for generating digital scenes. By building multiple sensor models for different bands and bandwidths, hyperspectral scenes in the visible, MWIR, and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm, and 0.1 μm, have been simulated. The final dynamic scenes run in real time and are highly realistic, with frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis shows that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis.

  5. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    Science.gov (United States)

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703

  6. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  7. The inverse-trans-influence in tetravalent lanthanide and actinide bis(carbene) complexes

    Science.gov (United States)

    Gregson, Matthew; Lu, Erli; Mills, David P.; Tuna, Floriana; McInnes, Eric J. L.; Hennig, Christoph; Scheinost, Andreas C.; McMaster, Jonathan; Lewis, William; Blake, Alexander J.; Kerridge, Andrew; Liddle, Stephen T.

    2017-02-01

    Across the periodic table the trans-influence operates, whereby tightly bonded ligands selectively lengthen mutually trans metal-ligand bonds. Conversely, in high oxidation state actinide complexes the inverse-trans-influence operates, where normally cis strongly donating ligands instead reside trans and actually reinforce each other. However, because the inverse-trans-influence is restricted to high-valent actinyls and a few uranium(V/VI) complexes, it has had limited scope in an area with few unifying rules. Here we report tetravalent cerium, uranium and thorium bis(carbene) complexes with trans C=M=C cores where experimental and theoretical data suggest the presence of an inverse-trans-influence. Studies of hypothetical praseodymium(IV) and terbium(IV) analogues suggest the inverse-trans-influence may extend to these ions but it also diminishes significantly as the 4f orbitals are populated. This work suggests that the inverse-trans-influence may occur beyond high oxidation state 5f metals and hence could encompass mid-range oxidation state actinides and lanthanides. Thus, the inverse-trans-influence might be a more general f-block principle.

  8. Age-related changes in visual exploratory behavior in a natural scene setting.

    Science.gov (United States)

    Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J; Brandt, Stephan A

    2013-01-01

    Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  9. Age-related changes in visual exploratory behavior in a natural scene setting

    Directory of Open Access Journals (Sweden)

    Johanna eHamel

    2013-06-01

    Full Text Available Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view. To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations, induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  10. Review of infrared scene projector technology-1993

    Science.gov (United States)

    Driggers, Ronald G.; Barnard, Kenneth J.; Burroughs, E. E.; Deep, Raymond G.; Williams, Owen M.

    1994-07-01

    The importance of testing IR imagers and missile seekers with realistic IR scenes warrants a review of the current technologies used in dynamic infrared scene projection. These technologies include resistive arrays, deformable mirror arrays, mirror membrane devices, liquid crystal light valves, laser writers, laser diode arrays, and CRTs. Other methods include frustrated total internal reflection, thermoelectric devices, galvanic cells, Bly cells, and vanadium dioxide. A description of each technology is presented along with a discussion of their relative benefits and disadvantages. The current state of each methodology is also summarized. Finally, the methods are compared and contrasted in terms of their performance parameters.

  11. Use of AFIS for linking scenes of crime.

    Science.gov (United States)

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with over than 20 cases. Then, by forensic intelligence and profile analysis the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically as part of laboratory protocol and not by an examiner's discretion. This approach may link different crime scenes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...

  13. Abnormal brain activation and connectivity to standardized disorder-related visual scenes in social anxiety disorder.

    Science.gov (United States)

    Heitmann, Carina Yvonne; Feldker, Katharina; Neumeister, Paula; Zepp, Britta Maria; Peterburs, Jutta; Zwitserlood, Pienie; Straube, Thomas

    2016-04-01

    Our understanding of altered emotional processing in social anxiety disorder (SAD) is hampered by a heterogeneity of findings, which is probably due to the vastly different methods and materials used so far. This is why the present functional magnetic resonance imaging (fMRI) study investigated immediate disorder-related threat processing in 30 SAD patients and 30 healthy controls (HC) with a novel, standardized set of highly ecologically valid, disorder-related complex visual scenes. SAD patients rated disorder-related as compared with neutral scenes as more unpleasant, arousing and anxiety-inducing than HC. On the neural level, disorder-related as compared with neutral scenes evoked differential responses in SAD patients in a widespread emotion processing network including (para-)limbic structures (e.g. amygdala, insula, thalamus, globus pallidus) and cortical regions (e.g. dorsomedial prefrontal cortex (dmPFC), posterior cingulate cortex (PCC), and precuneus). Functional connectivity analysis yielded an altered interplay between PCC/precuneus and paralimbic (insula) as well as cortical regions (dmPFC, precuneus) in SAD patients, which emphasizes a central role for PCC/precuneus in disorder-related scene processing. Hyperconnectivity of globus pallidus with amygdala, anterior cingulate cortex (ACC) and medial prefrontal cortex (mPFC) additionally underlines the relevance of this region in socially anxious threat processing. Our findings stress the importance of specific disorder-related stimuli for the investigation of altered emotion processing in SAD. Disorder-related threat processing in SAD reveals anomalies at multiple stages of emotion processing which may be linked to increased anxiety and to dysfunctionally elevated levels of self-referential processing reported in previous studies. © 2016 Wiley Periodicals, Inc.

  14. Gordon Craig's Scene Project: a history open to revision

    Directory of Open Access Journals (Sweden)

    Luiz Fernando

    2014-09-01

    Full Text Available The article proposes a review of Gordon Craig’s Scene project, an invention patented in 1910 and developed until 1922. Craig himself remained ambiguous about whether or not it was an unfulfilled project. His son and biographer Edward Craig maintained that Craig’s original aims were never achieved because of technical limitations, and most of the scholars who examined the matter followed this position. Starting from the actual screen models preserved in the Bibliothèque Nationale de France, Craig’s original notebooks, and a short film from 1963, I argue that the patented project and the essay published in 1923 do indeed represent the materialisation of the dreamed-of device of the thousand scenes in one scene

  15. A view not to be missed: Salient scene content interferes with cognitive restoration

    NARCIS (Netherlands)

    van der Jagt, A.P.N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that

  16. Estimating cotton canopy ground cover from remotely sensed scene reflectance

    International Nuclear Information System (INIS)

    Maas, S.J.

    1998-01-01

    Many agricultural applications require spatially distributed information on growth-related crop characteristics that could be supplied through aircraft or satellite remote sensing. A study was conducted to develop and test a methodology for estimating plant canopy ground cover for cotton (Gossypium hirsutum L.) from scene reflectance. Previous studies indicated that a relatively simple relationship between ground cover and scene reflectance could be developed based on linear mixture modeling. Theoretical analysis indicated that the effects of shadows in the scene could be compensated for by averaging the results obtained using scene reflectance in the red and near-infrared wavelengths. The methodology was tested using field data collected over several years from cotton test plots in Texas and California. Results of the study appear to verify the utility of this approach. Since the methodology relies on information that can be obtained solely through remote sensing, it would be particularly useful in applications where other field information, such as plant size, row spacing, and row orientation, is unavailable
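
    The "relatively simple relationship" based on linear mixture modeling can be written as R_scene = f * R_canopy + (1 - f) * R_soil, so ground cover f is recovered by inverting this in each band and, per the abstract, averaging the red and near-infrared estimates to compensate for shadows. The sketch below follows that reasoning; the endmember reflectances are illustrative assumptions, not values from the study.

    def ground_cover(r_scene, r_canopy, r_soil):
        """Fractional ground cover from one band via linear unmixing."""
        return (r_scene - r_soil) / (r_canopy - r_soil)

    def ground_cover_red_nir(red_scene, nir_scene,
                             red_canopy=0.05, red_soil=0.20,
                             nir_canopy=0.50, nir_soil=0.25):
        """Average of the red- and NIR-band estimates, as suggested in the abstract."""
        return 0.5 * (ground_cover(red_scene, red_canopy, red_soil)
                      + ground_cover(nir_scene, nir_canopy, nir_soil))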

  17. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.

  18. Medial Temporal Lobe Contributions to Episodic Future Thinking: Scene Construction or Future Projection?

    Science.gov (United States)

    Palombo, D J; Hayes, S M; Peterson, K M; Keane, M M; Verfaellie, M

    2018-02-01

    Previous research has shown that the medial temporal lobes (MTL) are more strongly engaged when individuals think about the future than about the present, leading to the suggestion that future projection drives MTL engagement. However, future thinking tasks often involve scene processing, leaving open the alternative possibility that scene-construction demands, rather than future projection, are responsible for the MTL differences observed in prior work. This study explores this alternative account. Using functional magnetic resonance imaging, we directly contrasted MTL activity in 1) high scene-construction and low scene-construction imagination conditions matched in future thinking demands and 2) future-oriented and present-oriented imagination conditions matched in scene-construction demands. Consistent with the alternative account, the MTL was more active for the high versus low scene-construction condition. By contrast, MTL differences were not observed when comparing the future versus present conditions. Moreover, the magnitude of MTL activation was associated with the extent to which participants imagined a scene but was not associated with the extent to which participants thought about the future. These findings help disambiguate which component processes of imagination specifically involve the MTL. Published by Oxford University Press 2016.

  19. Text Detection in Natural Scene Images by Stroke Gabor Words.

    Science.gov (United States)

    Yi, Chucai; Tian, Yingli

    2011-01-01

    In this paper, we propose a novel algorithm, based on stroke components and descriptive Gabor filters, to detect text regions in natural scene images. Text characters and strings are constructed by stroke components as basic units. Gabor filters are used to describe and analyze the stroke components in text characters or strings. We define a suitability measurement to analyze the confidence of Gabor filters in describing stroke component and the suitability of Gabor filters on an image window. From the training set, we compute a set of Gabor filters that can describe principle stroke components of text by their parameters. Then a K -means algorithm is applied to cluster the descriptive Gabor filters. The clustering centers are defined as Stroke Gabor Words (SGWs) to provide a universal description of stroke components. By suitability evaluation on positive and negative training samples respectively, each SGW generates a pair of characteristic distributions of suitability measurements. On a testing natural scene image, heuristic layout analysis is applied first to extract candidate image windows. Then we compute the principle SGWs for each image window to describe its principle stroke components. Characteristic distributions generated by principle SGWs are used to classify text or nontext windows. Experimental results on benchmark datasets demonstrate that our algorithm can handle complex backgrounds and variant text patterns (font, color, scale, etc.).
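
    A rough sketch of the dictionary-building step: responses from a small Gabor filter bank are computed for training patches and clustered with K-means, the cluster centres serving as "Stroke Gabor Words". Filter parameters, the per-patch pooling, and the dictionary size are assumptions for illustration, not the paper's trained values.

    import cv2
    import numpy as np
    from sklearn.cluster import KMeans

    def gabor_bank(n_orientations=8, ksize=21, sigma=4.0, lambd=10.0, gamma=0.5):
        """Build a bank of Gabor kernels at evenly spaced orientations."""
        thetas = np.linspace(0.0, np.pi, n_orientations, endpoint=False)
        return [cv2.getGaborKernel((ksize, ksize), sigma, t, lambd, gamma) for t in thetas]

    def stroke_gabor_words(gray_patches, n_words=64):
        """Cluster per-patch Gabor response vectors into a dictionary of SGWs."""
        bank = gabor_bank()
        feats = np.array([
            [np.abs(cv2.filter2D(p.astype(np.float32), -1, k)).mean() for k in bank]
            for p in gray_patches
        ])
        km = KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(feats)
        return km.cluster_centers_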

  20. Environmental influences on neural systems of relational complexity

    Directory of Open Access Journals (Sweden)

    Layne eKalbfleisch

    2013-09-01

    Full Text Available Constructivist learning theory contends that we construct knowledge by experience and that environmental context influences learning. To explore this principle, we examined the cognitive process relational complexity (RC), defined as the number of visual dimensions considered during problem solving on a matrix reasoning task and a well-documented measure of mature reasoning capacity. We sought to determine how the visual environment influences RC by examining the influence of color and visual contrast on RC in a neuroimaging task. To specify the contributions of sensory demand and relational integration to reasoning, our participants performed a non-verbal matrix task comprised of color, no-color line, or black-white visual contrast conditions parametrically varied by complexity (relations 0, 1, 2). The use of matrix reasoning is ecologically valid for its psychometric relevance and for its potential to link the processing of psychophysically specific visual properties with various levels of relational complexity during reasoning. The role of these elements is important because matrix tests assess intellectual aptitude based on these seemingly context-less exercises. This experiment is a first step toward examining the psychophysical underpinnings of performance on these types of problems. The importance of this is increased in light of recent evidence that intelligence can be linked to visual discrimination. We submit three main findings. First, color and black-white visual contrast add demand at a basic sensory level, but contributions from color and from black-white visual contrast are dissociable in cortex such that color engages a reasoning heuristic and black-white visual contrast engages a sensory heuristic. Second, color supports contextual sense-making by boosting salience resulting in faster problem solving. Lastly, when visual complexity reaches 2-relations, color and visual contrast relinquish salience to other dimensions of problem

  1. Implementing complex innovations: factors influencing middle manager support.

    Science.gov (United States)

    Chuang, Emmeline; Jason, Kendra; Morgan, Jennifer Craft

    2011-01-01

    Middle manager resistance is often described as a major challenge for upper-level administrators seeking to implement complex innovations such as evidence-based protocols or new skills training. However, factors influencing middle manager support for innovation implementation are currently understudied in the U.S. health care literature. This article examined the factors that influence middle managers' support for and participation in the implementation of work-based learning, a complex innovation adopted by health care organizations to improve the jobs, educational pathways, skills, and/or credentials of their frontline workers. We conducted semistructured interviews and focus groups with 92 middle managers in 17 health care organizations. Questions focused on understanding middle managers' support for work-based learning as a complex innovation, facilitators and barriers to the implementation process, and the systems changes needed to support the implementation of this innovation. Factors that emerged as influential to middle manager support were similar to those found in broader models of innovation implementation within the health care literature. However, our findings extend previous research by developing an understanding about how middle managers perceived these constructs and by identifying specific strategies for how to influence middle manager support for the innovation implementation process. These findings were generally consistent across different types of health care organizations. Study findings suggest that middle manager support was highest when managers felt the innovation fit their workplace needs and priorities and when they had more discretion and control over how it was implemented. Leaders seeking to implement innovations should consider the interplay between middle managers' control and discretion, their narrow focus on the performance of their own departments or units, and the dedication of staff and other resources for empowering their

  2. Eye movements and attention in reading, scene perception, and visual search.

    Science.gov (United States)

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  3. The influence of advertisements on the conspicuity of routing information

    NARCIS (Netherlands)

    Boersema, T.; Zwaga, H.J.G.

    1985-01-01

    An experiment is described in which the influence of advertisements on the conspicuity of routing information was investigated. Stimulus material consisted of colour slides of 12 railway station scenes. In two of these scenes, number and size of advertisements were systematically varied. Subjects

  4. The Influence of Environmental Spatial Layout on Perceived Lightness

    Science.gov (United States)

    Kanari, Kei; Inagami, Makoto; Kaneko, Hirohiko

    2011-01-01

    It is well established that the perceived lightness of a surface depends on the surrounding luminance distribution in 2D and 3D. These effects are usually explained by mechanisms at relatively low levels of the visual system. However, there seems to be a relation between the illuminance and the spatial layout of a scene regardless of the surrounding luminance distribution. If this is valid, the perceived lightness of a surface in a scene could be influenced by the scene's spatial layout. In this research, we investigated the relation between the perceived lightness of a surface and the spatial layout of the scene. Subjects matched the lightness of a test patch presented on natural pictures with various spatial layouts to that of a comparison stimulus presented on a uniform gray background. The mean luminance of the surround stimuli was the same, and the local contrast between the test patch and the surround was kept constant. Results showed that the perceived lightness of a stimulus depended on the spatial structure presented in the background. This result indicates that the spatial layout of a scene is related to its perceived illuminance and influences perceived lightness.

  5. Robotic Discovery of the Auditory Scene

    National Research Council Canada - National Science Library

    Martinson, E; Schultz, A

    2007-01-01

    .... Motivated by the large negative effect of ambient noise sources on robot audition, the long-term goal is to provide awareness of the auditory scene to a robot, so that it may more effectively act...

  6. Open and scalable analytics of large Earth observation datasets: From scenes to multidimensional arrays using SciDB and GDAL

    Science.gov (United States)

    Appel, Marius; Lahn, Florian; Buytaert, Wouter; Pebesma, Edzer

    2018-04-01

    Earth observation (EO) datasets are commonly provided as collection of scenes, where individual scenes represent a temporal snapshot and cover a particular region on the Earth's surface. Using these data in complex spatiotemporal modeling becomes difficult as soon as data volumes exceed a certain capacity or analyses include many scenes, which may spatially overlap and may have been recorded at different dates. In order to facilitate analytics on large EO datasets, we combine and extend the geospatial data abstraction library (GDAL) and the array-based data management and analytics system SciDB. We present an approach to automatically convert collections of scenes to multidimensional arrays and use SciDB to scale computationally intensive analytics. We evaluate the approach in three study cases on national scale land use change monitoring with Landsat imagery, global empirical orthogonal function analysis of daily precipitation, and combining historical climate model projections with satellite-based observations. Results indicate that the approach can be used to represent various EO datasets and that analyses in SciDB scale well with available computational resources. To simplify analyses of higher-dimensional datasets as from climate model output, however, a generalization of the GDAL data model might be needed. All parts of this work have been implemented as open-source software and we discuss how this may facilitate open and reproducible EO analyses.
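
    The scene-to-array step can be illustrated with GDAL's Python bindings alone: co-registered scenes are read band-by-band and stacked along a time axis, which is conceptually what is then chunked and ingested into SciDB (the ingestion itself is omitted here). File names are hypothetical.

    import numpy as np
    from osgeo import gdal

    def scenes_to_cube(scene_paths):
        """Stack co-registered single-band scenes into a (time, y, x) array."""
        layers = []
        for path in sorted(scene_paths):
            dataset = gdal.Open(path)
            layers.append(dataset.GetRasterBand(1).ReadAsArray())
        return np.stack(layers, axis=0)

    # cube = scenes_to_cube(["landsat_2015.tif", "landsat_2016.tif"])  # hypothetical files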

  7. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    Science.gov (United States)

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally-with perceptual stimulus differences controlled-while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  8. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes

    OpenAIRE

    Fernández-Martín, Andrés (UNIR); Gutiérrez-García, Aida; Capafons, Juan; Calvo, Manuel G

    2017-01-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional neutral images appeared peripherally with perceptual stimulus differences controlled while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emo...

  9. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
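
    The first disambiguation technique, merging classes that the validation confusion matrix shows to be frequently confused into super categories, can be illustrated with a small sketch. The symmetrisation, the threshold, and the union-find grouping below are assumptions for illustration; the paper's exact merging procedure may differ.

```python
# Sketch: derive "super categories" by grouping classes whose validation
# confusion exceeds a threshold. Symmetrisation and the simple union-find
# grouping are illustrative assumptions.
import numpy as np

def super_categories(confusion, threshold=0.2):
    """confusion[i, j]: fraction of class-i validation images predicted as class j."""
    n = confusion.shape[0]
    sym = (confusion + confusion.T) / 2.0      # treat confusion as symmetric
    parent = list(range(n))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            if sym[i, j] >= threshold:         # classes i and j are often confused
                parent[find(i)] = find(j)      # merge them into one super category
    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return list(groups.values())

if __name__ == "__main__":
    C = np.array([[0.70, 0.25, 0.05],
                  [0.30, 0.65, 0.05],
                  [0.02, 0.03, 0.95]])
    print(super_categories(C))   # e.g. [[0, 1], [2]]
```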

  10. The influence of advertisements on the conspicuity of routing information

    OpenAIRE

    Boersema, T.; Zwaga, H.J.G.

    1985-01-01

    An experiment is described in which the influence of advertisements on the conspicuity of routing information was investigated. Stimulus material consisted of colour slides of 12 railway station scenes. In two of these scenes, number and size of advertisements were systematically varied. Subjects were instructed to locate routing signs in the scenes. Performance on the location task was used as a measure of the routing sign conspicuity. The results show that inserting an advertisement lessens...

  11. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  12. A distributed code for colour in natural scenes derived from centre-surround filtered cone signals

    Directory of Open Access Journals (Sweden)

    Christian Johannes Kellner

    2013-09-01

    Full Text Available In the retina of trichromatic primates, chromatic information is encoded in an opponent fashion and transmitted to the lateral geniculate nucleus (LGN) and visual cortex via parallel pathways. Chromatic selectivities of neurons in the LGN form two separate clusters, corresponding to two classes of cone opponency. In the visual cortex, however, the chromatic selectivities are more distributed, which is in accordance with a population code for colour. Previous studies of cone signals in natural scenes typically found opponent codes with chromatic selectivities corresponding to two directions in colour space. Here we investigated how the nonlinear spatiochromatic filtering in the retina influences the encoding of colour signals. Cone signals were derived from hyperspectral images of natural scenes and pre-processed by centre-surround filtering and rectification, resulting in parallel ON and OFF channels. Independent Component Analysis on these signals yielded a highly sparse code with basis functions that showed spatio-chromatic selectivities. In contrast to previous analyses of linear transformations of cone signals, chromatic selectivities were not restricted to two main chromatic axes, but were more continuously distributed in colour space, similar to the population code of colour in the early visual cortex. Our results indicate that spatiochromatic processing in the retina leads to a more distributed and more efficient code for natural scenes.
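
    A minimal sketch of the pre-processing and decomposition pipeline described here, difference-of-Gaussians centre-surround filtering, half-wave rectification into ON/OFF channels, and ICA on image patches, could be assembled with SciPy and scikit-learn roughly as follows. The filter scales, patch size, and the use of FastICA are assumptions, not the study's exact parameters.

```python
# Sketch: centre-surround filtering, ON/OFF rectification, and ICA on image
# patches. Filter scales, patch size and FastICA are illustrative choices.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.decomposition import FastICA

def centre_surround(cone_plane, sigma_c=1.0, sigma_s=3.0):
    """Difference-of-Gaussians approximation of centre-surround filtering."""
    return gaussian_filter(cone_plane, sigma_c) - gaussian_filter(cone_plane, sigma_s)

def on_off_channels(filtered):
    """Half-wave rectification into parallel ON and OFF channels."""
    return np.maximum(filtered, 0.0), np.maximum(-filtered, 0.0)

def sample_patches(channels, patch=8, n_patches=5000, rng=None):
    """Stack all channels and draw random square patches as training vectors."""
    if rng is None:
        rng = np.random.default_rng(0)
    stack = np.stack(channels, axis=-1)            # (rows, cols, n_channels)
    rows, cols, _ = stack.shape
    out = []
    for _ in range(n_patches):
        r = rng.integers(0, rows - patch)
        c = rng.integers(0, cols - patch)
        out.append(stack[r:r + patch, c:c + patch, :].ravel())
    return np.array(out)

if __name__ == "__main__":
    # Hypothetical L, M, S cone images standing in for hyperspectral-derived data.
    rng = np.random.default_rng(0)
    cones = [rng.random((128, 128)) for _ in range(3)]
    channels = []
    for plane in cones:
        on, off = on_off_channels(centre_surround(plane))
        channels.extend([on, off])
    X = sample_patches(channels)
    ica = FastICA(n_components=20, random_state=0, max_iter=500)
    basis = ica.fit(X).mixing_        # columns are spatio-chromatic basis functions
    print(basis.shape)
```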

  13. Contextual Guidance of Eye Movements and Attention in Real-World Scenes: The Role of Global Features in Object Search

    Science.gov (United States)

    Torralba, Antonio; Oliva, Aude; Castelhano, Monica S.; Henderson, John M.

    2006-01-01

    Many experiments have shown that the human visual system makes extensive use of contextual information for facilitating object search in natural scenes. However, the question of how to formally model contextual influences is still open. On the basis of a Bayesian framework, the authors present an original approach of attentional guidance by global…

  14. 'Working behind the scenes'. An ethical view of mental health nursing and first-episode psychosis.

    Science.gov (United States)

    Moe, Cathrine; Kvig, Erling I; Brinchmann, Beate; Brinchmann, Berit S

    2013-08-01

    The aim of this study was to explore and reflect upon mental health nursing and first-episode psychosis. Seven multidisciplinary focus group interviews were conducted, and data analysis was influenced by a grounded theory approach. The core category was found to be a process named 'working behind the scenes'. It is presented along with three subcategories: 'keeping the patient in mind', 'invisible care' and 'invisible network contact'. Findings are illuminated with the ethical principles of respect for autonomy and paternalism. Nursing care is dynamic, and clinical work moves along continuums between autonomy and paternalism and between ethical reflective and non-reflective practice. 'Working behind the scenes' is considered to be in a paternalistic area, containing an ethical reflection. Treating and caring for individuals experiencing first-episode psychosis demands an ethical awareness and great vigilance by nurses. The study is a contribution to reflection upon everyday nursing practice, and the conclusion concerns the importance of making invisible work visible.

  15. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  16. Number of perceptually distinct surface colors in natural scenes.

    Science.gov (United States)

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.

  17. Characteristics of nontrauma scene flights for air medical transport.

    Science.gov (United States)

    Krebs, Margaret G; Fletcher, Erica N; Werman, Howard; McKenzie, Lara B

    2014-01-01

    Little is known about the use of air medical transport for patients with medical, rather than traumatic, emergencies. This study describes the practices of air transport programs, with respect to nontrauma scene responses, in several areas throughout the United States and Canada. A descriptive, retrospective study was conducted of all nontrauma scene flights from 2008 and 2009. Flight information and patient demographic data were collected from 5 air transport programs. Descriptive statistics were used to examine indications for transport, Glasgow Coma Scale scores, and loaded miles traveled. A total of 1,785 nontrauma scene flights were evaluated. The percentage of scene flights contributed by nontraumatic emergencies varied between programs, ranging from 0% to 44.3%. The most common indication for transport was cardiac: non-ST-segment elevation myocardial infarction (22.9%). Cardiac arrest was the indication for transport in 2.5% of flights. One air transport program reported a high percentage (49.4%) of neurologic (stroke) flights. The use of air transport for nontraumatic emergencies varied considerably between various air transport programs and regions. More research is needed to evaluate which nontraumatic emergencies benefit from air transport. National guidelines regarding the use of air transport for nontraumatic emergencies are needed. Copyright © 2014 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  18. Narrative Collage of Image Collections by Scene Graph Recombination.

    Science.gov (United States)

    Fang, Fei; Yi, Miao; Feng, Hui; Hu, Shenghong; Xiao, Chunxia

    2017-10-04

    Narrative collage is an interesting image editing art to summarize the main theme or storyline behind an image collection. We present a novel method to generate narrative images with plausible semantic scene structures. To achieve this goal, we introduce a layer graph and a scene graph to represent the relative depth order and semantic relationship between image objects, respectively. We first cluster the input image collection to select representative images, and then extract a group of semantically salient objects from each representative image. Both layer graphs and scene graphs are constructed and combined according to our specific rules for reorganizing the extracted objects in every image. We design an energy model to appropriately locate every object on the final canvas. Experiment results show that our method can produce competitive narrative collage results and works well on a wide range of image collections.

  19. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

    Stream Processing Algorithms for Dynamic 3D Scene Analysis (contract FA8750-14-2-0072, program element 62788F). The record text consists of garbled report-form and table-of-contents residue; the recoverable fragments describe a 3D processing pipeline of key modules built around structure from motion and bundle adjustment, in which depth masks of the scene obtained from 3D data are fused.

  20. Ecotoxicology of waters under the influence of a petrochemical complex

    Energy Technology Data Exchange (ETDEWEB)

    Noll, R; Zandonai, V; Ries, M A [CORSAN-SITEL, Triunfo, RS (Brazil). Polo Petroquímico do Sul

    1994-12-31

    This work summarizes the regular monitoring and studies conducted by SITEL - The Integrated Wastewater Treatment System of South Petrochemical Complex (South Brazil) - in order to evaluate the full environmental impact on waters in the area of influence of the effluents of the above mentioned Complex. 8 refs., 1 fig., 5 tabs.

  1. Ecotoxicology of waters under the influence of a petrochemical complex

    Energy Technology Data Exchange (ETDEWEB)

    Noll, R.; Zandonai, V.; Ries, M.A. [CORSAN-SITEL, Triunfo, RS (Brazil). Polo Petroquímico do Sul

    1993-12-31

    This work summarizes the regular monitoring and studies conducted by SITEL - The Integrated Wastewater Treatment System of South Petrochemical Complex (South Brazil) - in order to evaluate the full environmental impact on waters in the area of influence of the effluents of the above mentioned Complex. 8 refs., 1 fig., 5 tabs.

  2. Socializing in an open drug scene: the relationship between access to private space and drug-related street disorder.

    Science.gov (United States)

    Debeck, Kora; Wood, Evan; Qi, Jiezhi; Fu, Eric; McArthur, Doug; Montaner, Julio; Kerr, Thomas

    2012-01-01

    Limited attention has been given to the potential role that the structure of housing available to people who are entrenched in street-based drug scenes may play in influencing the amount of time injection drug users (IDU) spend on public streets. We sought to examine the relationship between time spent socializing in Vancouver's drug scene and access to private space. Using multivariate logistic regression we evaluated factors associated with socializing (three+ hours each day) in Vancouver's open drug scene among a prospective cohort of IDU. We also assessed attitudes towards relocating socializing activities if greater access to private indoor space was provided. Among our sample of 1114 IDU, 43% fit our criteria for socializing in the open drug scene. In multivariate analysis, having limited access to private space was independently associated with socializing (adjusted odds ratio: 1.80, 95% confidence interval: 1.28-2.55). In further analysis, 65% of 'socializers' reported positive attitudes towards relocating socializing if they had greater access to private space. These findings suggest that providing IDU with greater access to private indoor space may reduce one component of drug-related street disorder. Low-threshold supportive housing based on the 'housing first' model that include safeguards to manage behaviors associated with illicit drug use appear to offer important opportunities to create the types of private spaces that could support a reduction in street disorder. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.

  3. Short report: the effect of expertise in hiking on recognition memory for mountain scenes.

    Science.gov (United States)

    Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori

    2007-10-01

    The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using more ecologically valid stimuli in the context of a more natural activity than previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree of functional aspects that implied some action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. The memory performance for the scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.

  4. OpenSceneGraph 3 Cookbook

    CERN Document Server

    Wang, Rui

    2012-01-01

    This is a cookbook full of recipes with practical examples enriched with code and the required screenshots for easy and quick comprehension. You should be familiar with the basic concepts of the OpenSceneGraph API and should be able to write simple programs. Some OpenGL and math knowledge will help a lot, too.

  5. Facial Mimicry and Emotion Consistency: Influences of Memory and Context.

    Science.gov (United States)

    Kirkham, Alexander J; Hayes, Amy E; Pawling, Ralph; Tipper, Steven P

    2015-01-01

    This study investigates whether mimicry of facial emotions is a stable response or can instead be modulated and influenced by memory of the context in which the emotion was initially observed, and therefore the meaning of the expression. The study manipulated emotion consistency implicitly, where a face expressing smiles or frowns was irrelevant and to be ignored while participants categorised target scenes. Some face identities always expressed emotions consistent with the scene (e.g., smiling with a positive scene), whilst others were always inconsistent (e.g., frowning with a positive scene). During this implicit learning of face identity and emotion consistency there was evidence for encoding of face-scene emotion consistency, with slower RTs, a reduction in trust, and inhibited facial EMG for faces expressing incompatible emotions. However, in a later task where the faces were subsequently viewed expressing emotions with no additional context, there was no evidence for retrieval of prior emotion consistency, as mimicry of emotion was similar for consistent and inconsistent individuals. We conclude that facial mimicry can be influenced by current emotion context, but there is little evidence of learning, as subsequent mimicry of emotionally consistent and inconsistent faces is similar.

  6. Facial Mimicry and Emotion Consistency: Influences of Memory and Context.

    Directory of Open Access Journals (Sweden)

    Alexander J Kirkham

    Full Text Available This study investigates whether mimicry of facial emotions is a stable response or can instead be modulated and influenced by memory of the context in which the emotion was initially observed, and therefore the meaning of the expression. The study manipulated emotion consistency implicitly, where a face expressing smiles or frowns was irrelevant and to be ignored while participants categorised target scenes. Some face identities always expressed emotions consistent with the scene (e.g., smiling with a positive scene), whilst others were always inconsistent (e.g., frowning with a positive scene). During this implicit learning of face identity and emotion consistency there was evidence for encoding of face-scene emotion consistency, with slower RTs, a reduction in trust, and inhibited facial EMG for faces expressing incompatible emotions. However, in a later task where the faces were subsequently viewed expressing emotions with no additional context, there was no evidence for retrieval of prior emotion consistency, as mimicry of emotion was similar for consistent and inconsistent individuals. We conclude that facial mimicry can be influenced by current emotion context, but there is little evidence of learning, as subsequent mimicry of emotionally consistent and inconsistent faces is similar.

  7. The Anthropo-scene: A guide for the perplexed.

    Science.gov (United States)

    Lorimer, Jamie

    2017-02-01

    The scientific proposal that the Earth has entered a new epoch as a result of human activities - the Anthropocene - has catalysed a flurry of intellectual activity. I introduce and review the rich, inchoate and multi-disciplinary diversity of this Anthropo-scene. I identify five ways in which the concept of the Anthropocene has been mobilized: scientific question, intellectual zeitgeist, ideological provocation, new ontologies and science fiction. This typology offers an analytical framework for parsing this diversity, for understanding the interactions between different ways of thinking in the Anthropo-scene, and thus for comprehending elements of its particular and peculiar sociabilities. Here I deploy this framework to situate Earth Systems Science within the Anthropo-scene, exploring both the status afforded science in discussions of this new epoch, and the various ways in which the other means of engaging with the concept come to shape the conduct, content and politics of this scientific enquiry. In conclusion the paper reflects on the potential of the Anthropocene for new modes of academic praxis.

  8. A Virtual Environments Editor for Driving Scenes

    Directory of Open Access Journals (Sweden)

    Ronald R. Mourant

    2003-12-01

    Full Text Available The goal of this project was to enable the rapid creation of three-dimensional virtual driving environments. We designed and implemented a high-level scene editor that allows a user to construct a driving environment by pasting icons that represent (1) road segments, (2) road signs, (3) trees, and (4) buildings. These icons represent two- and three-dimensional objects that have been predesigned. Icons can be placed in the scene at specific locations (x, y, and z coordinates). The editor includes the capability for a user to "drive" a vehicle using a computer mouse for steering, accelerating, and braking. At any time during the process of building a virtual environment, a user may switch to "Run Mode" and inspect the three-dimensional scene by "driving" through it using the mouse. Adjustments and additions can be made to the virtual environment by going back to "Build Mode". Once a user is satisfied with the three-dimensional virtual environment, it can be saved in a file. The file can be used with Java3D software that enables the traversing of three-dimensional environments. The process of building virtual environments from predesigned icons can be applied to many other application areas. It will enable novice computer users to rapidly construct and use three-dimensional virtual environments.

  9. Preliminary Investigation of Visual Attention to Human Figures in Photographs: Potential Considerations for the Design of Aided AAC Visual Scene Displays

    Science.gov (United States)

    Wilkinson, Krista M.; Light, Janice

    2011-01-01

    Purpose: Many individuals with complex communication needs may benefit from visual aided augmentative and alternative communication systems. In visual scene displays (VSDs), language concepts are embedded into a photograph of a naturalistic event. Humans play a central role in communication development and might be important elements in VSDs.…

  10. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normal, contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction objectives simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  11. Typical Toddlers' Participation in “Just-in-Time” Programming of Vocabulary for Visual Scene Display Augmentative and Alternative Communication Apps on Mobile Technology: A Descriptive Study

    Science.gov (United States)

    Drager, Kathryn; Light, Janice; Caron, Jessica Gosnell

    2017-01-01

    Purpose Augmentative and alternative communication (AAC) promotes communicative participation and language development for young children with complex communication needs. However, the motor, linguistic, and cognitive demands of many AAC technologies restrict young children's operational use of and influence over these technologies. The purpose of the current study is to better understand young children's participation in programming vocabulary “just in time” on an AAC application with minimized demands. Method A descriptive study was implemented to highlight the participation of 10 typically developing toddlers (M age: 16 months, range: 10–22 months) in just-in-time vocabulary programming in an AAC app with visual scene displays. Results All 10 toddlers participated in some capacity in adding new visual scene displays and vocabulary to the app just in time. Differences in participation across steps were observed, suggesting variation in the developmental demands of controls involved in vocabulary programming. Conclusions Results from the current study provide clinical insights toward involving young children in AAC programming just in time and steps that may allow for more independent participation or require more scaffolding. Technology designed to minimize motor, cognitive, and linguistic demands may allow children to participate in programming devices at a younger age. PMID:28586825

  12. Overt attention in natural scenes: objects dominate features.

    Science.gov (United States)

    Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang

    2015-02-01

    Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models again are at par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection--and by inference object-based attentional guidance--in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  13. Neural Correlates of Divided Attention in Natural Scenes.

    Science.gov (United States)

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.

  14. Gist in time: Scene semantics and structure enhance recall of searched objects.

    Science.gov (United States)

    Josephs, Emilie L; Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L-H

    2016-09-01

    Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization. Copyright © 2016 Elsevier B.V. All rights reserved.

  15. The development of brain systems associated with successful memory retrieval of scenes.

    Science.gov (United States)

    Ofen, Noa; Chai, Xiaoqian J; Schuil, Karen D I; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-07-18

    Neuroanatomical and psychological evidence suggests prolonged maturation of declarative memory systems in the human brain from childhood into young adulthood. Here, we examine functional brain development during successful memory retrieval of scenes in children, adolescents, and young adults ages 8-21 via functional magnetic resonance imaging. Recognition memory improved with age, specifically for accurate identification of studied scenes (hits). Successful retrieval (correct old-new decisions for studied vs unstudied scenes) was associated with activations in frontal, parietal, and medial temporal lobe (MTL) regions. Activations associated with successful retrieval increased with age in left parietal cortex (BA7), bilateral prefrontal, and bilateral caudate regions. In contrast, activations associated with successful retrieval did not change with age in the MTL. Psychophysiological interaction analysis revealed that there were, however, age-related changes in differential connectivity for successful retrieval between MTL and prefrontal regions. These results suggest that neocortical regions related to attentional or strategic control show the greatest developmental changes for memory retrieval of scenes. Furthermore, these results suggest that functional interactions between MTL and prefrontal regions during memory retrieval also develop into young adulthood. The developmental increase of memory-related activations in frontal and parietal regions for retrieval of scenes and the absence of such an increase in MTL regions parallels what has been observed for memory encoding of scenes.

  16. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    Science.gov (United States)

    1987-04-01

    Image Chunking: Defining Spatial Building Blocks for Scene Analysis. James V. Mahoney, MIT Artificial Intelligence Laboratory, Technical Report 980. The record text is OCR-garbled report-form residue; only the title, author, laboratory, and report number are reliably recoverable.

  17. Spherical photography and virtual tours for presenting crime scenes and forensic evidence in new zealand courtrooms.

    Science.gov (United States)

    Tung, Nicole D; Barr, Jason; Sheppard, Dion J; Elliot, Douglas A; Tottey, Leah S; Walsh, Kevan A J

    2015-05-01

    The delivery of forensic science evidence in a clear and understandable manner is an important aspect of a forensic scientist's role during expert witness delivery in a courtroom trial. This article describes an Integrated Evidence Platform (IEP) system based on spherical photography which allows the audience to view the crime scene via a virtual tour and view the forensic scientist's evidence and results in context. Equipment and software programmes used in the creation of the IEP include a Nikon DSLR camera, a Seitz Roundshot VR Drive, PTGui Pro, and Tourweaver Professional Edition. The IEP enables a clear visualization of the crime scene, with embedded information such as photographs of items of interest, complex forensic evidence, the results of laboratory analyses, and scientific opinion evidence presented in context. The IEP has resulted in significant improvements to the pretrial disclosure of forensic results, enhanced the delivery of evidence in court, and improved the jury's understanding of the spatial relationship between results. © 2015 American Academy of Forensic Sciences.

  18. Factors influencing efficient structure of fuel and energy complex

    Science.gov (United States)

    Sidorova, N. G.; Novikova, S. A.

    2017-10-01

    The development of the Russian fuel-energy complex is a priority of national economic policy, and the Far East is a link between Russia and the Asia-Pacific region. Large-scale development of the Far East's numerous resources will drive industrial development, raise living standards, and strengthen Russia's position in the global energy market. Identifying the factors that influence a rational structure of the fuel-energy complex is therefore highly relevant. Using an in-depth analysis of the complex's development tendencies and problems, the authors show ways of improving its efficiency.

  19. Ontology of a scene based on Java 3D architecture.

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2009-12-01

    Full Text Available The present article approaches the class hierarchy of a scene built with the Java 3D architecture in order to develop an ontology of a scene from the essential semantic components needed for the semantic structuring of Web3D. Java was selected because the language recommended by the W3C Consortium for developing Web3D-oriented applications from the X3D standard is Xj3D, whose schemas are based on the Java3D architecture. The work first identifies the domain and scope of the ontology, defining the classes and subclasses that make up the Java3D architecture and the essential elements of a scene, such as its point of origin, its fields of rotation and translation, the bounds of the scene, and the definition of shaders. It then defines the slots, declared in RDF as a framework for describing the properties of the classes established from the identified domain and range of each class, and develops the composition of the OWL ontology in SWOOP. Finally, instantiations of the ontology are built for an Iconosphere object from the defined class expressions.

  20. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    Science.gov (United States)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fire is a hazardous event that can lead to disaster and massive destruction. The management and disposal of building fires has always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decisions, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES), and the indoor crowd) were implemented, and the relationships between the elements were also discussed. Finally, with the theory and framework of VGE, the technology of a building fire scene system with VGE was designed across the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  1. Influence of Herbal Complexes Containing Licorice on Potassium Levels: A Retrospective Study

    Directory of Open Access Journals (Sweden)

    WooSang Jung

    2014-01-01

    Full Text Available To observe the influence of such complexes on potassium levels in a clinical setting, we investigated the influence of herbal complexes containing licorice on potassium levels. We retrospectively examined the medical records of patients treated with herbal complexes containing licorice from January 1, 2010, to December 31, 2010. We recorded the changes in the levels of potassium, creatinine, and blood urea nitrogen and examined the differences between before and after herbal complex intake using a paired t-test. In addition, we investigated the prevalence of hypokalemia among these patients and reviewed such patients. We identified 360 patients who did not show significant changes in the levels of potassium and creatinine (P = 0.815 and 0.289, respectively). We observed hypokalemia in 6 patients. However, in 5 patients, the hypokalemia did not appear to be related to the licorice. Thus, we could suggest that herbal complexes containing licorice do not significantly influence potassium levels in routine clinical herbal therapies. However, we propose that follow-up examination of potassium levels is required to prevent any unpredictable side effects of administering licorice in routine herbal medicine care.

  2. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    Science.gov (United States)

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  3. Microdevelopment of Complex Featural and Spatial Integration with Contextual Support

    Directory of Open Access Journals (Sweden)

    Pamela L. Hirsch

    2015-01-01

    Full Text Available Complex spatial decisions involve the ability to combine featural and spatial information in a scene. In the present work, 4- through 9-year-old children completed a complex map-scene correspondence task under baseline and supported conditions. Children compared a photographed scene with a correct map and with map-foils that made salient an object feature or spatial property. Map-scene matches were analyzed for the effects of age and featural-spatial information on children’s selections. In both conditions children significantly favored maps that highlighted object detail and object perspective rather than color, landmark, and metric elements. Children’s correct performance did not differ by age and was suboptimal, but their ability to choose correct maps improved significantly when contextual support was provided. Strategy variability was prominent for all age groups, but at age 9 with support children were more likely to give up their focus on features and transition to the use of spatial strategies. These findings suggest the possibility of a U-shaped curve for children’s development of geometric knowledge: geometric coding is predominant early on, diminishes for a time in middle childhood in favor of a preference for features, and then reemerges along with the more advanced abilities to combine featural and spatial information.

  4. Semantic memory for contextual regularities within and across scene categories: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Le-Hoa Võ, Melissa

    2010-10-01

    When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

  5. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    Full Text Available In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.
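
    The abstract does not spell out the feature computation, but the general idea, a multiscale decomposition feeding fuzzy class memberships, can be sketched roughly as below. The wavelet-energy features, the Gaussian membership function, the class prototypes, and the use of PyWavelets are assumptions, not FC-VAF's actual formulation.

```python
# Sketch: multiscale wavelet "attention" features plus fuzzy memberships
# computed from distances to class prototypes. All parameter choices here
# are illustrative assumptions.
import numpy as np
import pywt

def multiscale_features(image, wavelet="db2", levels=3):
    """Energy of the detail coefficients at each decomposition level."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    feats = []
    for cH, cV, cD in coeffs[1:]:
        feats.append(np.mean(cH ** 2) + np.mean(cV ** 2) + np.mean(cD ** 2))
    return np.array(feats)

def fuzzy_memberships(feature, prototypes, sigma=1.0):
    """Gaussian membership of a feature vector to each class prototype, normalised to sum to 1."""
    d2 = np.array([np.sum((feature - p) ** 2) for p in prototypes])
    m = np.exp(-d2 / (2.0 * sigma ** 2))
    return m / m.sum()

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    scene = rng.random((256, 256))                 # stand-in for a remote sensing scene
    f = multiscale_features(scene)
    # Hypothetical class prototypes, here small perturbations of the feature vector.
    prototypes = [f + 0.1 * rng.standard_normal(f.shape) for _ in range(4)]
    print(fuzzy_memberships(f, prototypes))
```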

  6. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    Science.gov (United States)

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  7. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    Science.gov (United States)

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  8. Technicolor/INRIA team at the MediaEval 2013 Violent Scenes Detection Task

    OpenAIRE

    Penet, Cédric; Demarty, Claire-Hélène; Gravier, Guillaume; Gros, Patrick

    2013-01-01

    This paper presents the work done at Technicolor and INRIA regarding the MediaEval 2013 Violent Scenes Detection task, which aims at detecting violent scenes in movies. We participated in both the objective and the subjective subtasks.

  9. Multiple vehicle routing and dispatching to an emergency scene

    OpenAIRE

    M S Daskin; A Haghani

    1984-01-01

    A model of the distribution of arrival time at the scene of an emergency for the first of many vehicles is developed for the case in which travel times on the links of the network are normally distributed and the path travel times of different vehicles are correlated. The model suggests that the probability that the first vehicle arrives at the scene within a given time may be increased by reducing the path time correlations, even if doing so necessitates increasing the mean path travel time ...
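
    The abstract's central quantity, the probability that the first of several vehicles with correlated, normally distributed path travel times arrives within a deadline, can be approximated with a small Monte Carlo sketch. The means, standard deviations, correlations, and deadline below are made-up numbers for illustration, not values from the paper.

```python
# Sketch: Monte Carlo estimate of P(first arrival <= t) when the path travel
# times of the dispatched vehicles are jointly normal and correlated.
# All numbers below are made up for illustration.
import numpy as np

def p_first_arrival_within(means, cov, deadline, n_samples=200_000, seed=0):
    rng = np.random.default_rng(seed)
    times = rng.multivariate_normal(means, cov, size=n_samples)
    first = times.min(axis=1)                 # arrival time of the earliest vehicle
    return float(np.mean(first <= deadline))

if __name__ == "__main__":
    means = np.array([8.0, 9.0, 10.0])        # mean path travel times (minutes)
    sd = np.array([2.0, 2.0, 2.5])
    for rho in (0.0, 0.5, 0.9):               # correlation between path travel times
        cov = np.outer(sd, sd) * (rho * np.ones((3, 3)) + (1 - rho) * np.eye(3))
        print(rho, p_first_arrival_within(means, cov, deadline=7.0))
```

    With these made-up numbers the estimated probability shrinks as the correlation grows, which is consistent with the abstract's observation that reducing path-time correlations can increase the chance that the first vehicle arrives within a given time.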

  10. Influence of Introduced Substituents on the Anion-selectivity of [14]Tetraazaannulene Complexes.

    Science.gov (United States)

    Moriuchi-Kawakami, Takayo; Obita, Minako; Tsujinaka, Toshiki; Shibutani, Yasuhiko

    2015-01-01

    Nickel(II) complexes of [14]tetraazaannulene derivatives incorporating aromatic rings into their azaannulene framework were synthesized, and the anion-selectivity of the [14]tetraazaannulene nickel complexes 1 - 4 was evaluated by potentiometric measurements with solvent polymeric membrane electrodes. All of the [14]tetraazaannulene nickel complexes, except 3, were found to exhibit high selectivity for the I(-) ion over the SCN(-) ion, although considerable interference of the ClO4(-) ion was observed in all 1 - 4 complexes. Concerning the anion-selectivities of 1 and 4, the incorporation of naphthalene rings into the azaannulene framework decreased not only the interference of the ClO4(-) ion but also the I(-) ion-selectivity over the SCN(-) ion. Comparison studies between the dibenzotetraaza[14]annulene nickel complexes 1 - 3 indicated that differences in the attached substituents of the [14]tetraazaannulene nickel complexes greatly influenced the ion-selectivity as ionophores. According to our computational results, the ionophoric properties of [14]tetraazaannulene nickel complexes 1 - 4 were influenced by their electrostatic properties rather than their topological properties.

  11. Complexity Control of Fast Motion Estimation in H.264/MPEG-4 AVC with Rate-Distortion-Complexity optimization

    DEFF Research Database (Denmark)

    Wu, Mo; Forchhammer, Søren; Aghito, Shankar Manuel

    2007-01-01

    A complexity control algorithm for H.264 advanced video coding is proposed. The algorithm can control the complexity of integer inter motion estimation for a given target complexity. The Rate-Distortion-Complexity performance is improved by a complexity prediction model, simple analysis of the past statistics, and a control scheme. The algorithm also works well under scene change conditions. Test results for coding interlaced video (720x576 PAL) are reported.

  12. Negotiating place and gendered violence in Canada's largest open drug scene.

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug

  13. Influence maximization in complex networks through optimal percolation

    Science.gov (United States)

    Morone, Flaviano; Makse, Hernan; CUNY Collaboration

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. Reference: F. Morone, H. A. Makse, Nature 524,65-68 (2015)
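
    A greatly simplified sketch of the adaptive "collective influence" style heuristic associated with this line of work, scoring each node by (k_i - 1) times the summed (k_j - 1) over the frontier of a ball of radius l and repeatedly removing the top-scoring node, can be written with NetworkX as below. The radius, the example graph, and the fixed number of removals are illustrative assumptions, and the sketch omits the non-backtracking-matrix machinery described in the abstract.

```python
# Simplified sketch of a collective-influence style heuristic for picking
# influencers: score nodes by (k_i - 1) * sum of (k_j - 1) over the frontier
# of a ball of radius ell, remove the best node, and repeat. The radius,
# example graph and stopping rule are illustrative assumptions.
import networkx as nx

def collective_influence(g, node, ell=2):
    lengths = nx.single_source_shortest_path_length(g, node, cutoff=ell)
    frontier = [n for n, d in lengths.items() if d == ell]
    return (g.degree(node) - 1) * sum(g.degree(n) - 1 for n in frontier)

def top_influencers(g, n_influencers, ell=2):
    g = g.copy()
    chosen = []
    for _ in range(n_influencers):
        scores = {n: collective_influence(g, n, ell) for n in g.nodes}
        best = max(scores, key=scores.get)
        chosen.append(best)
        g.remove_node(best)                  # adaptive: rescore after each removal
    return chosen

if __name__ == "__main__":
    g = nx.barabasi_albert_graph(500, 3, seed=42)
    print(top_influencers(g, 5))
```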

  14. From Theatre Improvisation To Video Scenes

    DEFF Research Database (Denmark)

    Larsen, Henry; Hvidt, Niels Christian; Friis, Preben

    2018-01-01

    At Sygehus Lillebaelt, a Danish hospital, there has been a focus for several years on patient communication. This paper reflects on a course focusing on engaging with the patient's existential themes, in particular the negotiations around the creation of video scenes. In the initial workshops, w...

  15. Scene independent real-time indirect illumination

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    A novel method for real-time simulation of indirect illumination is presented in this paper. The method, which we call Direct Radiance Mapping (DRM), is based on basal radiance calculations and does not impose any restrictions on scene geometry or dynamics. This makes the method tractable for rea...

  16. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.
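
    One way to read the pipeline sketched in this abstract is as classifiers whose class probabilities are treated as fuzzy membership degrees over emotional categories. The snippet below illustrates that reading with scikit-learn stand-ins for the AdaBoost and BP (multilayer perceptron) components; the feature vectors, the labels, and the averaging of probabilities are assumptions, not the paper's implementation.

```python
# Sketch: treat classifier probabilities as fuzzy membership degrees over
# emotional categories, using AdaBoost and an MLP (a stand-in for a BP
# network). Features, labels and the averaging rule are illustrative.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X_train = rng.random((200, 64))              # hypothetical image feature vectors
y_train = rng.integers(0, 4, size=200)       # four basic emotional categories
X_new = rng.random((5, 64))                  # new scene images to annotate

ada = AdaBoostClassifier(n_estimators=50, random_state=0).fit(X_train, y_train)
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0).fit(X_train, y_train)

# Fuzzy membership degree of each new scene in each emotional category,
# taken here as the average of the two classifiers' predicted probabilities.
membership = (ada.predict_proba(X_new) + mlp.predict_proba(X_new)) / 2.0
annotation = membership.argmax(axis=1)       # crisp label if a single tag is needed
print(np.round(membership, 2), annotation)
```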

  17. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  18. Unplanned Complex Suicide-A Consideration of Multiple Methods.

    Science.gov (United States)

    Ateriya, Navneet; Kanchan, Tanuj; Shekhawat, Raghvendra Singh; Setia, Puneet; Saraf, Ashish

    2018-05-01

    Detailed death investigations are mandatory to establish the exact cause and manner of death in non-natural deaths. In this regard, the use of multiple methods in suicide poses a challenge for investigators, especially when the choice of methods used to cause death is unplanned. There is an increased likelihood that doubts of homicide are raised in cases of unplanned complex suicides. A case of complex suicide is reported in which the victim resorted to multiple methods to end his life, in what appeared to be an unplanned variant based on the death scene investigation. A meticulous crime scene examination, interviews of the victim's relatives and other witnesses, and a thorough autopsy are warranted to conclude on the cause and manner of death in all such cases. © 2017 American Academy of Forensic Sciences.

  19. Scene recognition based on integrating active learning with dictionary learning

    Science.gov (United States)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfying recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion in active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
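
    The abstract does not spell out how uncertainty and representativeness are combined when choosing samples to label; the sketch below is a generic illustration of such a dual-criterion sampler (not the authors' IALDL code), assuming entropy-based uncertainty, cosine-similarity representativeness and a hypothetical trade-off weight alpha.

    ```python
    import numpy as np

    def select_queries(probs, unlabeled_feats, n_queries=10, alpha=0.5):
        """Rank unlabeled samples by a combined uncertainty/representativeness score.

        probs           : (N, C) class-probability estimates from the current classifier
        unlabeled_feats : (N, D) feature vectors of the unlabeled pool
        alpha           : assumed trade-off between the two criteria
        """
        eps = 1e-12
        # Uncertainty: normalized entropy of the predicted class distribution.
        entropy = -np.sum(probs * np.log(probs + eps), axis=1)
        uncertainty = entropy / np.log(probs.shape[1])

        # Representativeness: mean cosine similarity to the rest of the pool,
        # so samples lying in dense regions of feature space score higher.
        normed = unlabeled_feats / (np.linalg.norm(unlabeled_feats, axis=1, keepdims=True) + eps)
        sims = normed @ normed.T
        representativeness = (sims.sum(axis=1) - 1.0) / (len(sims) - 1)

        score = alpha * uncertainty + (1.0 - alpha) * representativeness
        return np.argsort(score)[::-1][:n_queries]   # indices of samples to label next
    ```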

  20. Mining Very High Resolution INSAR Data Based On Complex-GMRF Cues And Relevance Feedback

    Science.gov (United States)

    Singh, Jagmal; Popescu, Anca; Soccorsi, Matteo; Datcu, Mihai

    2012-01-01

    With the increase in the number of remote sensing satellites, the number of image-data scenes in our repositories is also increasing, and a large quantity of these scenes are never retrieved and used. Thus automatic retrieval of desired image data using query by image content, to fully utilize the huge repository volume, is becoming of great interest. Generally, different users are interested in scenes containing different kinds of objects and structures, so it is important to analyze image information mining (IIM) methods so that it is easier for a user to select a method depending upon his/her requirements. We concentrate our study only on high-resolution SAR images and propose to use InSAR observations instead of single look complex (SLC) images alone for mining scenes containing coherent objects such as high-rise buildings. However, in the case of objects with low coherence, such as areas with vegetation cover, SLC images exhibit better performance. We demonstrate an IIM performance comparison using complex Gauss-Markov Random Fields as texture descriptors for image patches and SVM relevance feedback.

  1. Developing Scene Understanding Neural Software for Realistic Autonomous Outdoor Missions

    Science.gov (United States)

    2017-09-01

    ...computer using a single graphics processing unit (GPU). To the best of our knowledge, an implementation of the open-source Python-based AlexNet CNN on... Neurons in the brain enable us to understand scenes by assessing the spatial, temporal, and feature relations of objects in the... effort to use computer neural networks to augment human neural intelligence to improve our scene understanding (Krizhevsky et al. 2012; Zhou et al...

  2. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    Science.gov (United States)

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.

  3. The influence of advertisements on the conspicuity of routing information.

    Science.gov (United States)

    Boersema, T; Zwaga, H J

    1985-12-01

    An experiment is described in which the influence of advertisements on the conspicuity of routing information was investigated. Stimulus material consisted of colour slides of 12 railway station scenes. In two of these scenes, number and size of advertisements were systematically varied. Subjects were instructed to locate routing signs in the scenes. Performance on the location task was used as a measure of the routing sign conspicuity. The results show that inserting an advertisement lessens the conspicuity of the routing information. This effect becomes stronger if more or larger advertisements are added.

  4. Automatic decomposition of a complex hologram based on the virtual diffraction plane framework

    International Nuclear Information System (INIS)

    Jiao, A S M; Tsang, P W M; Lam, Y K; Poon, T-C; Liu, J-P; Lee, C-C

    2014-01-01

    Holography is a technique for capturing the hologram of a three-dimensional scene. In many applications, it is often pertinent to retain specific items of interest in the hologram, rather than retaining the full information, which may cause distraction in the analytical process that follows. For a real optical image that is captured with a camera or scanner, this process can be realized by applying image segmentation algorithms to decompose an image into its constituent entities. However, because it is different from an optical image, classic image segmentation methods cannot be applied directly to a hologram, as each pixel in the hologram carries holistic, rather than local, information of the object scene. In this paper, we propose a method to perform automatic decomposition of a complex hologram based on a recently proposed technique called the virtual diffraction plane (VDP) framework. Briefly, a complex hologram is back-propagated to a hypothetical plane known as the VDP. Next, the image on the VDP is automatically decomposed, through the use of the segmentation on the magnitude of the VDP image, into multiple sub-VDP images, each representing the diffracted waves of an isolated entity in the scene. Finally, each sub-VDP image is reverted back to a hologram. As such, a complex hologram can be decomposed into a plurality of subholograms, each representing a discrete object in the scene. We have demonstrated the successful performance of our proposed method by decomposing a complex hologram that is captured through the optical scanning holography (OSH) technique. (papers)
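
    As a rough illustration of the VDP idea (not the authors' implementation), the sketch below back-propagates a complex hologram with a standard angular-spectrum kernel, segments the magnitude of the resulting virtual-diffraction-plane image with a simple threshold-and-label step, and re-propagates each labeled region into its own subhologram; the parameter names and the thresholding rule are assumptions.

    ```python
    import numpy as np
    from scipy import ndimage

    def angular_spectrum(field, dist, wavelength, pitch):
        """Propagate a complex field over distance `dist` (m) with the angular-spectrum method."""
        n, m = field.shape
        fx = np.fft.fftfreq(m, d=pitch)
        fy = np.fft.fftfreq(n, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        arg = 1.0 / wavelength**2 - FX**2 - FY**2
        kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))   # evanescent components suppressed
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dist))

    def decompose_hologram(hologram, z_vdp, wavelength, pitch, thresh=0.1):
        """Back-propagate to a hypothetical diffraction plane, split by magnitude, re-propagate."""
        vdp = angular_spectrum(hologram, -z_vdp, wavelength, pitch)
        mask = np.abs(vdp) > thresh * np.abs(vdp).max()   # crude magnitude segmentation
        labels, n_obj = ndimage.label(mask)
        sub_holograms = []
        for k in range(1, n_obj + 1):
            sub_vdp = np.where(labels == k, vdp, 0)       # keep one isolated entity
            sub_holograms.append(angular_spectrum(sub_vdp, z_vdp, wavelength, pitch))
        return sub_holograms
    ```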

  5. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  6. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem; Lahoud, Jean; Asmar, Daniel; Ghanem, Bernard

    2015-01-01

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding

  7. Higher-order scene statistics of breast images

    Science.gov (United States)

    Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.

    2009-02-01

    Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in responses of octave bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm with kurtosis of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
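
    A minimal sketch of the kind of measurement described: convolve an image with octave-bandwidth Gabor kernels at several center frequencies and compute the kurtosis of the filter responses. The kernel construction and the bandwidth-to-sigma relation used here are standard textbook choices, not the authors' exact pipeline.

    ```python
    import numpy as np
    from scipy.signal import fftconvolve
    from scipy.stats import kurtosis

    def gabor_kernel(freq, theta=0.0, octaves=1.0, size=64):
        """Gabor kernel with the given octave bandwidth (freq in cycles/pixel)."""
        sigma = (np.sqrt(np.log(2) / 2) / (np.pi * freq)) * (2**octaves + 1) / (2**octaves - 1)
        half = size // 2
        y, x = np.mgrid[-half:half, -half:half]
        xr = x * np.cos(theta) + y * np.sin(theta)
        kern = np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * freq * xr)
        return kern - kern.mean()   # remove DC response

    def response_kurtosis(image, freqs, theta=0.0):
        """Excess kurtosis of Gabor responses as a function of center frequency."""
        return [kurtosis(fftconvolve(image, gabor_kernel(f, theta), mode='valid').ravel())
                for f in freqs]
    ```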

  8. The singular nature of auditory and visual scene analysis in autism.

    Science.gov (United States)

    Lin, I-Fan; Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2017-02-19

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  9. Study on general design of dual-DMD based infrared two-band scene simulation system

    Science.gov (United States)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    The mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation system is a kind of testing equipment used for infrared two-band imaging seekers. It must not only cover the working wavebands but also meet the essential requirement that the simulated infrared radiation characteristics correspond to the real scene. Past single digital micromirror device (DMD) based infrared scene simulation systems do not take the huge difference between target and background radiation into account and cannot modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately reproduce the thermal scene model built by the upper computer and is not very practical. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact-structure infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed as well. We also derive an equation for the signal-to-noise ratio of the infrared detector in the seeker, which directs the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Based on the working principle and overall design, we summarize the key techniques of the system.

  10. A spectral image processing algorithm for evaluating the influence of the illuminants on the reconstructed reflectance

    Science.gov (United States)

    Toadere, Florin

    2017-12-01

    A spectral image processing algorithm is presented that allows the illumination of a scene with different illuminants together with the reconstruction of the scene's reflectance. A color checker spectral image and the CIE A (warm light, 2700 K), D65 (cold light, 6500 K) and Cree TW Series LED T8 (4000 K) illuminants are employed for scene illumination. The illuminants used in the simulations have different spectra and, as a result of their illumination, the colors of the scene change. The influence of the illuminants on the reconstruction of the scene's reflectance is estimated. Demonstrative images and reflectances illustrating the operation of the algorithm are shown.
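
    A simplified sketch of how the influence of an illuminant on a spectral scene can be simulated: per-pixel reflectance is multiplied by the illuminant's spectral power distribution and integrated against the CIE color-matching functions. The array shapes and the white-point normalization are assumptions, not the paper's algorithm.

    ```python
    import numpy as np

    def render_under_illuminant(reflectance, illuminant, cmfs):
        """Render a spectral reflectance image under a given illuminant.

        reflectance : (H, W, B) per-pixel reflectance sampled at B wavelengths
        illuminant  : (B,) spectral power distribution (e.g., CIE A, D65, or an LED)
        cmfs        : (B, 3) CIE xbar, ybar, zbar color-matching functions at the same samples
        """
        radiance = reflectance * illuminant                 # per-pixel spectral radiance
        xyz = np.tensordot(radiance, cmfs, axes=([2], [0])) # integrate over wavelength
        xyz /= (illuminant @ cmfs[:, 1])                    # normalize so a perfect white has Y = 1
        return xyz
    ```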

  11. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
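
    Of the classifiers compared, the minimum-distance (nearest-centroid) classifier is the simplest; a minimal sketch follows, assuming auditory-scene-analysis features have already been extracted per analysis frame and class labels are integers.

    ```python
    import numpy as np

    CLASSES = ("clean speech", "speech in noise", "noise", "music")

    def train_centroids(features, labels):
        """Mean feature vector per sound class (features: (N, D), labels: (N,) class indices)."""
        return np.stack([features[labels == k].mean(axis=0) for k in range(len(CLASSES))])

    def classify(frame_features, centroids):
        """Assign each analysis frame to the class with the nearest centroid."""
        dists = np.linalg.norm(frame_features[:, None, :] - centroids[None, :, :], axis=2)
        return dists.argmin(axis=1)
    ```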

  12. Disentangling the effects of spatial inconsistency of targets and distractors when searching in realistic scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2015-02-10

    Previous research has suggested that correctly placed objects facilitate eye guidance, but also that objects violating spatial associations within scenes may be prioritized for selection and subsequent inspection. We analyzed the respective eye guidance of spatial expectations and target template (precise picture or verbal label) in visual search, while taking into account any impact of object spatial inconsistency on extrafoveal or foveal processing. Moreover, we isolated search disruption due to misleading spatial expectations about the target from the influence of spatial inconsistency within the scene upon search behavior. Reliable spatial expectations and precise target template improved oculomotor efficiency across all search phases. Spatial inconsistency resulted in preferential saccadic selection when guidance by template was insufficient to ensure effective search from the outset and the misplaced object was bigger than the objects consistently placed in the same scene region. This prioritization emerged principally during early inspection of the region, but the inconsistent object also tended to be preferentially fixated overall across region viewing. These results suggest that objects are first selected covertly on the basis of their relative size and that subsequent overt selection is made considering object-context associations processed in extrafoveal vision. Once the object was fixated, inconsistency resulted in longer first fixation duration and longer total dwell time. As a whole, our findings indicate that observed impairment of oculomotor behavior when searching for an implausibly placed target is the combined product of disruption due to unreliable spatial expectations and prioritization of inconsistent objects before and during object fixation. © 2015 ARVO.

  13. Wall grid structure for interior scene synthesis

    KAUST Repository

    Xu, Wenzhuo; Wang, Bin; Yan, Dongming

    2015-01-01

    We present a system for automatically synthesizing a diverse set of semantically valid, and well-arranged 3D interior scenes for a given empty room shape. Unlike existing work on layout synthesis, that typically knows potentially needed 3D models

  14. Popular music scenes and aging bodies.

    Science.gov (United States)

    Bennett, Andy

    2018-06-01

    During the last two decades there has been increasing interest in the phenomenon of the aging popular music audience (Bennett & Hodkinson, 2012). Although the specter of the aging fan is by no means new, the notion of, for example, the aging rocker or the aging punk has attracted significant sociological attention, not least of all because of what this says about the shifting socio-cultural significance of rock and punk and similar genres - which at the time of their emergence were inextricably tied to youth and vociferously marketed as "youth musics". As such, initial interpretations of aging music fans tended to paint a somewhat negative picture, suggesting a sense in which such fans were cultural misfits (Ross, 1994). In more recent times, however, work informed by cultural aging perspectives has begun to consider how so-called "youth cultural" identities may in fact provide the basis of more stable and evolving identities over the life course (Bennett, 2013). Starting from this position, the purpose of this article is to critically examine how aging members of popular music scenes might be recast as a salient example of the more pluralistic fashion in which aging is anticipated, managed and articulated in contemporary social settings. The article then branches out to consider two ways that aging members of music scenes continue their scene involvement. The first focuses on evolving a series of discourses that legitimately position them as aging bodies in cultural spaces that also continue to be inhabited by significant numbers of people in their teens, twenties and thirties. The second sees aging fans taking advantage of new opportunities for consuming live music including winery concerts and dinner and show events. Copyright © 2018 Elsevier Inc. All rights reserved.

  15. Eye Movements when Looking at Unusual/Weird Scenes: Are There Cultural Differences?

    Science.gov (United States)

    Rayner, Keith; Castelhano, Monica S.; Yang, Jinmian

    2009-01-01

    Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how…

  16. The elephant in the room: inconsistency in scene viewing and representation

    OpenAIRE

    Spotorno, Sara; Tatler, Benjamin W.

    2017-01-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1–2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene’s depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativene...

  17. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    Science.gov (United States)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.

  18. Special effects used in creating 3D animated scenes-part 1

    Science.gov (United States)

    Avramescu, A. M.

    2015-11-01

    At present, with the help of computers, we can create special effects that look so real that we barely perceive them as being different; such effects are hard to distinguish from the real elements on the screen. With the increasingly accessible 3D field, which has more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of big global corporations, for special effects and even for virtual actors in movies. Technology, as part of the art of film, is considered a prerequisite, but cinematography is the first art that had to wait for the correct intersection of technological development, innovation and human vision in order to attain full achievement. Increasingly often, most industries use three-dimensional (3D) sequences: graphics, commercials and special effects in movies are all designed in 3D. The key to attaining realistic visual effects is to successfully combine various distinct elements (characters, objects, images and video scenes) so that they form a whole that works in perfect harmony. This article aims to present a contemporary game design. Considering the advanced technology and futuristic vision of designers, nowadays we have many and varied game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene and are essential for transmitting its emotional state. Creating special effects is a work of finesse in order to achieve high-quality scenes, and they can be used to draw the onlooker's attention to an object in a scene. According to the conducted study, the best-selling game of the year 2010 was Call of Duty: Modern Warfare 2. Accordingly, the article aims for the presented scene to be similar to many locations from this type of game, more

  19. Route complexity and simulated physical ageing negatively influence wayfinding

    NARCIS (Netherlands)

    Zijlstra, Emma; Hagedoorn, Mariet; Krijnen, Wim P.; Schans, van der Cornelis; Mobach, Mark P.

    The aim of this age-simulation field experiment was to assess the influence of route complexity and physical ageing on wayfinding. Seventy-five people (aged 18-28) performed a total of 108 wayfinding tasks (i.e., 42 participants performed two wayfinding tasks and 33 performed one wayfinding task),

  20. Non-uniform crosstalk reduction for dynamic scenes

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.; Fröhlich, B.

    2007-01-01

    Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focussed on the preprocessing of static scenes which are viewed from a fixed viewpoint. However, in virtual environments

  1. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

    Science.gov (United States)

    Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-03-07

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
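
    A bare-bones sketch of the variance-partitioning logic: the unique variance of each feature model is estimated as the drop in R-squared when that model is left out of the full set. The paper's actual analysis is more involved (representational similarity with preselected stimuli); the predictor shapes and the plain OLS estimator here are assumptions.

    ```python
    import numpy as np

    def unique_variance(y, models):
        """Unique variance explained by each feature model via hierarchical OLS R^2.

        y      : (N,) behavioural or neural (dis)similarity values
        models : dict mapping model name -> (N, D) predictor matrix for that feature model
        """
        def r_squared(mats):
            X = np.column_stack([np.ones(len(y))] + list(mats))
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            resid = y - X @ beta
            return 1.0 - resid.var() / y.var()

        names = list(models)
        full = r_squared([models[n] for n in names])
        # Unique contribution = drop in R^2 when one model is removed from the full set
        return {n: full - r_squared([models[m] for m in names if m != n]) for n in names}
    ```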

  2. Anticipatory Scene Representation in Preschool Children's Recall and Recognition Memory

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-01-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an…

  3. Object Attention Patches for Text Detection and Recognition in Scene Images using SIFT

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; De Marsico, Maria; Figueiredo, Mário; Fred, Ana

    2015-01-01

    Natural urban scene images contain many problems for character recognition such as luminance noise, varying font styles or cluttered backgrounds. Detecting and recognizing text in a natural scene is a difficult problem. Several techniques have been proposed to overcome these problems. These are,

  4. Oxytocin increases amygdala reactivity to threatening scenes in females.

    Science.gov (United States)

    Lischke, Alexander; Gamer, Matthias; Berger, Christoph; Grossmann, Annette; Hauenstein, Karlheinz; Heinrichs, Markus; Herpertz, Sabine C; Domes, Gregor

    2012-09-01

    The neuropeptide oxytocin (OT) is well known for its profound effects on social behavior, which appear to be mediated by an OT-dependent modulation of amygdala activity in the context of social stimuli. In humans, OT decreases amygdala reactivity to threatening faces in males, but enhances amygdala reactivity to similar faces in females, suggesting sex-specific differences in OT-dependent threat-processing. To further explore whether OT generally enhances amygdala-dependent threat-processing in females, we used functional magnetic resonance imaging (fMRI) in a randomized within-subject crossover design to measure amygdala activity in response to threatening and non-threatening scenes in 14 females following intranasal administration of OT or placebo. Participants' eye movements were recorded to investigate whether an OT-dependent modulation of amygdala activity is accompanied by enhanced exploration of salient scene features. Although OT had no effect on participants' gazing behavior, it increased amygdala reactivity to scenes depicting social and non-social threat. In females, OT may, thus, enhance the detection of threatening stimuli in the environment, potentially by interacting with gonadal steroids, such as progesterone and estrogen. Copyright © 2012 Elsevier Ltd. All rights reserved.

  5. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    David Vázquez

    2017-01-01

    Full Text Available Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reducing CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform a visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that aim to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  6. Influence of visual clutter on the effect of navigated safety inspection: a case study on elevator installation.

    Science.gov (United States)

    Liao, Pin-Chao; Sun, Xinlu; Liu, Mei; Shih, Yu-Nien

    2018-01-11

    Navigated safety inspection based on task-specific checklists can increase the hazard detection rate, theoretically with interference from scene complexity. Visual clutter, a proxy of scene complexity, can theoretically impair visual search performance, but its impact on safety inspection performance remains to be explored for the optimization of navigated inspection. This research aims to explore whether the relationship between working memory and hazard detection rate is moderated by visual clutter. Based on a perceptive model of hazard detection, we: (a) developed a mathematical influence model for construction hazard detection; (b) designed an experiment to observe hazard detection rates with adjusted working memory under different levels of visual clutter, while using an eye-tracking device to observe participants' visual search processes; and (c) utilized logistic regression to analyze the developed model under various levels of visual clutter. The effect of a strengthened working memory on the detection rate through increased search efficiency is more apparent under high visual clutter. This study confirms the role of visual clutter in construction-navigated inspections, thus serving as a foundation for the optimization of inspection planning.
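
    The moderation question described (does visual clutter moderate the effect of working memory on hazard detection?) maps naturally onto a logistic regression with an interaction term. A minimal sketch, assuming statsmodels and hypothetical trial-level variables, not the study's exact model:

    ```python
    import numpy as np
    import statsmodels.api as sm

    def fit_moderation_model(detected, working_memory, clutter):
        """Logistic regression of detection on working memory, clutter, and their interaction.

        detected       : (N,) binary outcome, hazard detected (1) or missed (0)
        working_memory : (N,) working-memory score per trial
        clutter        : (N,) visual-clutter measure of the inspected scene
        """
        X = np.column_stack([working_memory, clutter, working_memory * clutter])
        X = sm.add_constant(X)
        model = sm.Logit(detected, X).fit(disp=False)
        return model   # a significant interaction coefficient indicates moderation by clutter
    ```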

  7. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    Science.gov (United States)

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  8. Influence of complexing on physicochemical properties of polymer-salt solutions

    International Nuclear Information System (INIS)

    Ostroushko, A.A.; Yushkova, S.M.; Koridze, N.V.; Skobkoreva, N.V.; Zhuravleva, L.I.; Palitskaya, T.A.; Antropova, S.V.; Ostroushko, I.P.; AN SSSR, Moscow

    1993-01-01

    Using the methods of spectrophotometry, viscosimetry and conductometry, the influence of salt-polymer complexing processes on the physicochemical properties of aqueous solutions of yttrium, barium and copper nitrates and formates with polyvinyl alcohol was studied. Changes in the dynamic viscosity and specific electric conductivity of the solutions during complexing were shown. The thermal effects of the salt-polymer interaction were measured. It is shown that the decrease in the transition temperature of the polymer to the plastic state in films, and in the temperature and effective activation energy of salt decomposition, is also connected with complexing. Effective values of the surface tension at the boundary with air were measured, and coefficients of cation diffusion in the polymer-salt solutions were estimated.

  9. Virtual Relighting of a Virtualized Scene by Estimating Surface Reflectance Properties

    OpenAIRE

    福富, 弘敦; 町田, 貴史; 横矢, 直和

    2011-01-01

    In mixed reality, which merges real and virtual worlds, it is necessary to interactively manipulate the illumination conditions in a virtualized space. In general, specular reflections in a scene make it difficult to interactively manipulate the illumination conditions. Our goal is to provide a way to simulate the original scene, including diffuse and specular reflections, with novel viewpoints and illumination conditions. Thus, we propose a new method for estimating diffuse and specula...

  10. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped. In order to further increase the accuracy of the resulting maps, a new method is presented that allows detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms. Also, towards dev...

  11. 3D terrestrial lidar data classification of complex natural scenes using a multi-scale dimensionality criterion: Applications in geomorphology

    Science.gov (United States)

    Brodu, N.; Lague, D.

    2012-03-01

    3D point clouds of natural environments relevant to problems in geomorphology (rivers, coastal environments, cliffs, …) often require classification of the data into elementary relevant classes. A typical example is the separation of riparian vegetation from ground in fluvial environments, the distinction between fresh surfaces and rockfall in cliff environments, or more generally the classification of surfaces according to their morphology (e.g. the presence of bedforms or by grain size). Natural surfaces are heterogeneous and their distinctive properties are seldom defined at a unique scale, prompting the use of multi-scale criteria to achieve a high degree of classification success. We have thus defined a multi-scale measure of the point cloud dimensionality around each point. The dimensionality characterizes the local 3D organization of the point cloud within spheres centered on the measured points and varies from being 1D (points set along a line), 2D (points forming a plane) to the full 3D volume. By varying the diameter of the sphere, we can thus monitor how the local cloud geometry behaves across scales. We present the technique and illustrate its efficiency in separating riparian vegetation from ground and classifying a mountain stream as vegetation, rock, gravel or water surface. In these two cases, separating the vegetation from ground or other classes achieves an accuracy greater than 98%. Comparison with a single scale approach shows the superiority of the multi-scale analysis in enhancing class separability and spatial resolution of the classification. Scenes of between 10 and 100 million points can be classified on a common laptop in a reasonable time. The technique is robust to missing data, shadow zones and changes in point density within the scene. The classification is fast and accurate and can account for some degree of intra-class morphological variability such as different vegetation types. A probabilistic confidence in the classification
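
    The core measurement is the local dimensionality of the cloud at several scales, derived from the eigenvalues of the neighbourhood covariance. A simplified sketch follows, assuming a k-d tree neighbourhood search and one common eigenvalue-ratio definition of the 1D/2D/3D proportions; the published method differs in detail.

    ```python
    import numpy as np
    from scipy.spatial import cKDTree

    def dimensionality_features(points, query, radii):
        """Multi-scale 1D/2D/3D proportions around each query point.

        points : (N, 3) full point cloud
        query  : (M, 3) points at which the descriptor is computed
        radii  : iterable of neighbourhood radii (the scales)
        """
        tree = cKDTree(points)
        feats = np.zeros((len(query), len(radii), 3))
        for j, r in enumerate(radii):
            for i, p in enumerate(query):
                nbrs = points[tree.query_ball_point(p, r)]
                if len(nbrs) < 3:
                    continue
                evals = np.linalg.eigvalsh(np.cov(nbrs.T))[::-1]   # l1 >= l2 >= l3
                if evals[0] <= 0:
                    continue
                # One common eigenvalue-ratio definition of linearity/planarity/scattering
                feats[i, j] = [(evals[0] - evals[1]) / evals[0],
                               (evals[1] - evals[2]) / evals[0],
                               evals[2] / evals[0]]
        return feats   # per-point, per-scale features for a classifier (e.g. SVM or LDA)
    ```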

  12. Effect of Smoking Scenes in Films on Immediate Smoking

    Science.gov (United States)

    Shmueli, Dikla; Prochaska, Judith J.; Glantz, Stanton A.

    2010-01-01

    Background: The National Cancer Institute has concluded that exposure to smoking in movies causes adolescent smoking, and there are similar results for young adults. Purpose: This study investigated whether exposure of young adult smokers to images of smoking in films stimulated smoking behavior. Methods: 100 cigarette smokers aged 18–25 years were randomly assigned to watch a movie montage composed with or without smoking scenes and paraphernalia, followed by a 10-minute recess. The outcome was whether or not participants smoked during the recess. Data were collected and analyzed in 2008 and 2009. Results: Smokers who watched the smoking scenes were more likely to smoke during the break (OR 3.06, 95% CI=1.01, 9.29). In addition to this acute effect of exposure, smokers who had seen more smoking in movies before the day of the experiment were also more likely to smoke during the break (OR 6.73; 1.00–45.25 comparing the top to bottom percentiles of exposure). Level of nicotine dependence (OR 1.71; 1.27–2.32 per point on the FTND scale), “contemplation” (OR 9.07; 1.71–47.99) and “precontemplation” (OR 7.30; 1.39–38.36) stages of change, and impulsivity (OR 1.21; 1.03–1.43) were also associated with smoking during the break. Participants who watched the montage with smoking scenes and those with a higher level of nicotine dependence were also more likely to have smoked within 30 minutes after the study. Conclusions: There is a direct link between viewing smoking scenes and immediate subsequent smoking behavior. This finding suggests that individuals attempting to limit or quit smoking should be advised to refrain from or reduce their exposure to movies that contain smoking. PMID:20307802

  13. Sensor-Topology Based Simplicial Complex Reconstruction from Mobile Laser Scanning

    Science.gov (United States)

    Guinard, S.; Vallet, B.

    2018-05-01

    We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds acquired by Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and the collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

  14. Rheological properties of wheat starch influenced by amylose-lysophosphatidylcholine complexation at different gelation phases

    NARCIS (Netherlands)

    Ahmadiabhari, Salomeh; Woortman, Albert; Hamer, Rob; Loos, Katja

    2015-01-01

    Amylose is able to form helical inclusion complexes with lysophosphatidylcholine (LPC). This complexation influences the functional and rheological properties of wheat starch; however, it is well known that the formation of these complexes leads the starchy systems to slower enzymatic hydrolysis.

  15. Rheological properties of wheat starch influenced by amylose-lysophosphatidylcholine complexation at different gelation phases

    NARCIS (Netherlands)

    Ahmadi-Abhari, S.; Woortman, A.J.J.; Hamer, R.J.; Loos, K.

    2015-01-01

    Amylose is able to form helical inclusion complexes with lysophosphatidylcholine (LPC). This complexation influences the functional and rheological properties of wheat starch; however, it is well known that the formation of these complexes leads the starchy systems to slower enzymatic hydrolysis.

  16. Effects of varying presentation time on long-term recognition memory for scenes: Verbatim and gist representations.

    Science.gov (United States)

    Ahmad, Fahad N; Moscovitch, Morris; Hockley, William E

    2017-04-01

    Konkle, Brady, Alvarez and Oliva (Psychological Science, 21, 1551-1556, 2010) showed that participants have an exceptional long-term memory (LTM) for photographs of scenes. We examined to what extent participants' exceptional LTM for scenes is determined by presentation time during encoding. In addition, at retrieval, we varied the nature of the lures in a forced-choice recognition task so that they resembled the target in gist (i.e., global or categorical) information, but were distinct in verbatim information (e.g., an "old" beach scene and a similar "new" beach scene; exemplar condition) or vice versa (e.g., a beach scene and a new scene from a novel category; novel condition). In Experiment 1, half of the list of scenes was presented for 1 s, whereas the other half was presented for 4 s. We found lower performance for shorter study presentation time in the exemplar test condition and similar performance for both study presentation times in the novel test condition. In Experiment 2, participants showed similar performance in an exemplar test for which the lure was of a different category but a category that was used at study. In Experiment 3, when presentation time was lowered to 500 ms, recognition accuracy was reduced in both novel and exemplar test conditions. A less detailed memorial representation of the studied scene containing more gist (i.e., meaning) than verbatim (i.e., surface or perceptual details) information is retrieved from LTM after a short compared to a long study presentation time. We conclude that our findings support fuzzy-trace theory.

  17. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  18. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
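
    For reference, the texture code that such mapped-coding schemes start from is the classic 8-neighbour Local Binary Pattern. A minimal NumPy sketch of computing an LBP map is shown below; the paper's exact coded-image mapping may differ.

    ```python
    import numpy as np

    def lbp_map(gray):
        """8-neighbour Local Binary Pattern codes for a grayscale image (uint8 in, uint8 out)."""
        g = gray.astype(np.int16)
        center = g[1:-1, 1:-1]
        # Offsets of the 8 neighbours, clockwise from the top-left pixel.
        offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
        code = np.zeros_like(center, dtype=np.int32)
        for bit, (dy, dx) in enumerate(offsets):
            neighbour = g[1 + dy:g.shape[0] - 1 + dy, 1 + dx:g.shape[1] - 1 + dx]
            code += (neighbour >= center).astype(np.int32) << bit
        return code.astype(np.uint8)   # can be replicated to 3 channels and fed to a CNN alongside RGB
    ```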

  19. The effects of alcohol intoxication on attention and memory for visual scenes.

    Science.gov (United States)

    Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C

    2013-01-01

    This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.

  20. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

    Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about the scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other hand, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  1. Relationship between Childhood Meal Scenes at Home Remembered by University Students and their Current Personality

    OpenAIRE

    恩村, 咲希; Onmura, Saki

    2013-01-01

    This study examines the relationship between childhood meal scenes at home that are remembered by university students and their current personality. The meal scenes are analyzed in terms of companions, conversation content, conversation frequency, atmosphere, and consideration of meals. The scale of the conversation content in childhood meal scenes was prepared on the basis of the results of a preliminary survey. The result showed that a relationship was found between personality traits and c...

  2. The Influence of Information Acquisition on the Complex Dynamics of Market Competition

    Science.gov (United States)

    Guo, Zhanbing; Ma, Junhai

    In this paper, we build a dynamical game model with three bounded rational players (firms) to study the influence of information on the complex dynamics of market competition, where the useful information concerns a rival's real decision. In this dynamical game model, one information-sharing team is composed of two firms; they acquire and share information about their common competitor but make their own decisions separately, and the amount of information acquired by this information-sharing team determines the estimation accuracy of the rival's real decision. Based on this dynamical game model and some creative 3D diagrams, the influence of the amount of information on the complex dynamics of market competition, such as local dynamics, global dynamics and profits, is studied. These results have significant theoretical and practical value for understanding the influence of information.

  3. Automatic Ship Detection in Remote Sensing Images from Google Earth of Complex Scenes Based on Multiscale Rotation Dense Feature Pyramid Networks

    Directory of Open Access Journals (Sweden)

    Xue Yang

    2018-01-01

    Full Text Available Ship detection has been playing a significant role in the field of remote sensing for a long time, but it is still full of challenges. The main limitations of traditional ship detection methods usually lie in the complexity of application scenarios, the difficulty of intensive object detection, and the redundancy of the detection region. In order to solve these problems, we propose a framework called Rotation Dense Feature Pyramid Networks (R-DFPN), which can effectively detect ships in different scenes, including ocean and port. Specifically, we put forward the Dense Feature Pyramid Network (DFPN), which is aimed at solving problems resulting from the narrow width of the ship. Compared with previous multiscale detectors such as the Feature Pyramid Network (FPN), DFPN builds high-level semantic feature maps for all scales by means of dense connections, through which feature propagation is enhanced and feature reuse is encouraged. Additionally, in the case of ship rotation and dense arrangement, we design a rotation anchor strategy to predict the minimum circumscribed rectangle of the object so as to reduce the redundant detection region and improve the recall. Furthermore, we also propose multiscale region of interest (ROI) Align for the purpose of maintaining the completeness of the semantic and spatial information. Experiments based on remote sensing images from Google Earth for ship detection show that our detection method based on the R-DFPN representation has state-of-the-art performance.

  4. Influence maximization in complex networks through optimal percolation

    Science.gov (United States)

    Morone, Flaviano; Makse, Hernán A.

    2015-08-01

    The whole frame of interconnections in complex networks hinges on a specific set of structural nodes, much smaller than the total size, which, if activated, would cause the spread of information to the whole network, or, if immunized, would prevent the diffusion of a large scale epidemic. Localizing this optimal, that is, minimal, set of structural nodes, called influencers, is one of the most important problems in network science. Despite the vast use of heuristic strategies to identify influential spreaders, the problem remains unsolved. Here we map the problem onto optimal percolation in random networks to identify the minimal set of influencers, which arises by minimizing the energy of a many-body system, where the form of the interactions is fixed by the non-backtracking matrix of the network. Big data analyses reveal that the set of optimal influencers is much smaller than the one predicted by previous heuristic centralities. Remarkably, a large number of previously neglected weakly connected nodes emerges among the optimal influencers. These are topologically tagged as low-degree nodes surrounded by hierarchical coronas of hubs, and are uncovered only through the optimal collective interplay of all the influencers in the network. The present theoretical framework may hold a larger degree of universality, being applicable to other hard optimization problems exhibiting a continuous transition from a known phase.
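
    A hedged sketch of the collective-influence style heuristic associated with this line of work is given below: the node with the largest product of reduced degrees over a ball of radius l is removed adaptively. The radius, the toy graph and the function names are illustrative choices, not the paper's exact procedure.

    # Illustrative sketch: collective-influence heuristic for locating influencers.
    import networkx as nx

    def collective_influence(G, i, radius=2):
        # CI_l(i) = (k_i - 1) * sum over the frontier at distance l of (k_j - 1)
        dist = nx.single_source_shortest_path_length(G, i, cutoff=radius)
        frontier = [j for j, d in dist.items() if d == radius]
        return (G.degree(i) - 1) * sum(G.degree(j) - 1 for j in frontier)

    def top_influencers(G, n_remove, radius=2):
        # adaptively remove the highest-CI node, recomputing CI after each removal
        H = G.copy()
        removed = []
        for _ in range(n_remove):
            best = max(H.nodes, key=lambda v: collective_influence(H, v, radius))
            removed.append(best)
            H.remove_node(best)
        return removed

    # toy usage on a scale-free-like graph
    G = nx.barabasi_albert_graph(300, 2, seed=1)
    print(top_influencers(G, 5))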

  5. Binocular contrast-gain control for natural scenes: Image structure and phase alignment.

    Science.gov (United States)

    Huang, Pi-Chun; Dai, Yu-Ming

    2018-05-01

    In the context of natural scenes, we applied the pattern-masking paradigm to investigate how image structure and phase alignment affect contrast-gain control in binocular vision. We measured the discrimination thresholds of bandpass-filtered natural-scene images (targets) under various types of pedestals. Our first experiment had four pedestal types: bandpass-filtered pedestals, unfiltered pedestals, notch-filtered pedestals (which enabled removal of the spatial frequency), and misaligned pedestals (which involved rotation of unfiltered pedestals). Our second experiment featured six types of pedestals: bandpass-filtered, unfiltered, and notch-filtered pedestals, and the corresponding phase-scrambled pedestals. The thresholds were compared for monocular, binocular, and dichoptic viewing configurations. The bandpass-filtered pedestal and unfiltered pedestals showed classic dipper shapes; the dipper shapes of the notch-filtered, misaligned, and phase-scrambled pedestals were weak. We adopted a two-stage binocular contrast-gain control model to describe our results. We deduced that the phase-alignment information influenced the contrast-gain control mechanism before the binocular summation stage and that the phase-alignment information and structural misalignment information caused relatively strong divisive inhibition in the monocular and interocular suppression stages. When the pedestals were phase-scrambled, the elimination of the interocular suppression processing was the most convincing explanation of the results. Thus, our results indicated that both phase-alignment information and similar image structures cause strong interocular suppression. Copyright © 2018 Elsevier Ltd. All rights reserved.
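
    For orientation only, a generic two-stage binocular contrast-gain-control model of the kind referred to above can be written as follows; the exponents, constants and exact placement of the suppressive terms are illustrative assumptions rather than the fitted model of this study. Stage 1 applies monocular excitation with interocular suppression to the left- and right-eye contrasts \(C_L, C_R\),
    \[
    E_L = \frac{C_L^{\,m}}{S_1 + C_L + \omega C_R}, \qquad
    E_R = \frac{C_R^{\,m}}{S_1 + C_R + \omega C_L},
    \]
    and Stage 2 applies a further gain control to the binocular sum \(B = E_L + E_R\),
    \[
    R = \frac{B^{\,p}}{S_2 + B^{\,q}},
    \]
    with the discrimination threshold given by the pedestal increment \(\Delta C\) for which \(R(C+\Delta C) - R(C)\) reaches a criterion \(k\).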

  6. Integration of an open interface PC scene generator using COTS DVI converter hardware

    Science.gov (United States)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.

  7. History of Reading Struggles Linked to Enhanced Learning in Low Spatial Frequency Scenes

    Science.gov (United States)

    Schneps, Matthew H.; Brockmole, James R.; Sonnert, Gerhard; Pomplun, Marc

    2012-01-01

    People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school. PMID:22558210
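
    How low-pass filtered natural-scene stimuli of this kind are typically produced can be sketched as follows; the hard radial cutoff and its value in cycles per image are illustrative assumptions, not the parameters used by the authors.

    # Illustrative sketch: low-pass filter a grayscale scene in the Fourier
    # domain with a hard radial cutoff expressed in cycles per image.
    import numpy as np

    def low_pass(image, cutoff_cycles=8):
        # image: 2-D float array; keep only spatial frequencies below the cutoff
        h, w = image.shape
        fy = np.fft.fftfreq(h)[:, None] * h      # cycles per image, vertical
        fx = np.fft.fftfreq(w)[None, :] * w      # cycles per image, horizontal
        keep = np.sqrt(fx ** 2 + fy ** 2) <= cutoff_cycles
        spectrum = np.fft.fft2(image) * keep
        return np.real(np.fft.ifft2(spectrum))

    # toy usage with a random stand-in "scene"
    scene = np.random.default_rng(0).random((256, 256))
    blurred = low_pass(scene, cutoff_cycles=8)
    print(blurred.shape, blurred.std() < scene.std())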

  8. History of reading struggles linked to enhanced learning in low spatial frequency scenes.

    Directory of Open Access Journals (Sweden)

    Matthew H Schneps

    Full Text Available People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.

  9. The Hip-Hop club scene: Gender, grinding and sex.

    Science.gov (United States)

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  10. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), high-spatial-resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  11. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection

  12. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of ("depth" minus "peak") over ("depth" plus "peak") within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
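
    The first ("index") method can be sketched as a normalized contrast between a reflectance peak and a neighbouring absorption depth inside a selected wavelength window; the window below and the identification of "peak" with the maximum and "depth" with the minimum reflectance are illustrative assumptions, not the authors' chosen band or feature definitions.

    # Illustrative sketch: normalized peak/depth index over a chosen window.
    import numpy as np

    def peak_depth_index(wavelengths, reflectance, window=(500.0, 700.0)):
        # index = (depth - peak) / (depth + peak) within the window,
        # taking "peak" as the maximum and "depth" as the minimum reflectance
        w = np.asarray(wavelengths, float)
        r = np.asarray(reflectance, float)
        sel = (w >= window[0]) & (w <= window[1])
        peak, depth = r[sel].max(), r[sel].min()
        return (depth - peak) / (depth + peak)

    # toy spectra: a flat background vs. one with a pronounced absorption dip
    w = np.linspace(350, 2500, 1000)
    flat = np.full_like(w, 0.4)
    dipped = 0.4 - 0.3 * np.exp(-((w - 560) / 20.0) ** 2)
    print(peak_depth_index(w, flat), peak_depth_index(w, dipped))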

  13. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

    OpenAIRE

    Li, Yuhong; Zhang, Xiaofan; Chen, Deming

    2018-01-01

    We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven and deep learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger reception fields and to replace po...
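
    A minimal sketch of the dilated-convolution idea (larger receptive fields without extra pooling) is given below in PyTorch; the layer sizes are illustrative and this is not the full CSRNet architecture.

    # Illustrative sketch: a small dilated-CNN back-end producing a density map.
    import torch
    import torch.nn as nn

    class DilatedBackEnd(nn.Module):
        def __init__(self, in_channels=512):
            super().__init__()
            self.layers = nn.Sequential(
                # dilation=2 with padding=2 keeps spatial size for 3x3 kernels
                nn.Conv2d(in_channels, 256, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(256, 128, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(128, 64, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
                nn.Conv2d(64, 1, 1),  # 1x1 conv -> single-channel density map
            )

        def forward(self, x):
            return self.layers(x)

    # toy usage: features from a front-end CNN (batch, 512, H/8, W/8)
    feats = torch.randn(1, 512, 48, 64)
    density = DilatedBackEnd()(feats)
    print(density.shape, float(density.sum()))  # crowd count ~ sum of the density map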

  14. Effect of pictorial depth cues, binocular disparity cues and motion parallax depth cues on lightness perception in three-dimensional virtual scenes.

    Directory of Open Access Journals (Sweden)

    Michiteru Kitazaki

    2008-09-01

    Full Text Available Surface lightness perception is affected by scene interpretation. There is some experimental evidence that perceived lightness under bi-ocular viewing conditions is different from perceived lightness in actual scenes but there are also reports that viewing conditions have little or no effect on perceived color. We investigated how mixes of depth cues affect perception of lightness in three-dimensional rendered scenes containing strong gradients of illumination in depth. Observers viewed a virtual room (4 m width x 5 m height x 17.5 m depth) with checkerboard walls and floor. In four conditions, the room was presented with or without binocular disparity (BD) depth cues and with or without motion parallax (MP) depth cues. In all conditions, observers were asked to adjust the luminance of a comparison surface to match the lightness of test surfaces placed at seven different depths (8.5-17.5 m) in the scene. We estimated lightness versus depth profiles in all four depth cue conditions. Even when observers had only pictorial depth cues (no MP, no BD), they partially but significantly discounted the illumination gradient in judging lightness. Adding either MP or BD led to significantly greater discounting and both cues together produced the greatest discounting. The effects of MP and BD were approximately additive. BD had greater influence at near distances than far. These results suggest the surface lightness perception is modulated by three-dimensional perception/interpretation using pictorial, binocular-disparity, and motion-parallax cues additively. We propose a two-stage (2D and 3D) processing model for lightness perception.

  15. Behind the scenes at the LHC inauguration

    CERN Document Server

    2008-01-01

    On 21 October the LHC inauguration ceremony will take place and people from all over CERN have been busy preparing. With delegations from 38 countries attending, including ministers and heads of state, the Bulletin has gone behind the scenes to see what it takes to put together an event of this scale.

  16. Desirable and undesirable future thoughts call for different scene construction processes.

    Science.gov (United States)

    de Vito, S; Neroni, M A; Gamboz, N; Della Sala, S; Brandimonte, M A

    2015-01-01

    Despite the growing interest in the ability of foreseeing (episodic future thinking), it is still unclear how healthy people construct possible future scenarios. We suggest that different future thoughts require different processes of scene construction. Thirty-five participants were asked to imagine desirable and less desirable future events. Imagining desirable events increased the ease of scene construction, the frequency of life scripts, the number of internal details, and the clarity of sensorial and spatial temporal information. The initial description of general personal knowledge lasted longer in undesirable than in desirable anticipations. Finally, participants were more prone to explicitly indicate autobiographical memory as the main source of their simulations of undesirable episodes, whereas they equally related the simulations of desirable events to autobiographical events or semantic knowledge. These findings show that desirable and undesirable scenarios call for different mechanisms of scene construction. The present study emphasizes that future thinking cannot be considered as a monolithic entity.

  17. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Science.gov (United States)

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
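
    The planar rectification at the heart of this kind of accident-scene photogrammetry can be sketched with a homography estimated from control points on the calibration object; the point coordinates, the output scale and the use of OpenCV's linear estimator (rather than the improved DLT with nonlinear refinement described above) are illustrative simplifications.

    # Illustrative sketch: estimate a plane-to-image homography from control
    # points and warp the photograph to a metrically rectified top-down view.
    import numpy as np
    import cv2

    # image coordinates (pixels) of four calibration points, and their known
    # positions on the ground plane (metres) -- illustrative values only
    img_pts = np.array([[412, 605], [988, 570], [1105, 842], [330, 880]], np.float32)
    world_pts_m = np.array([[0, 0], [2, 0], [2, 1], [0, 1]], np.float32)

    scale = 200.0                                  # pixels per metre in the output
    H, _ = cv2.findHomography(img_pts, world_pts_m * scale, method=0)

    photo = np.zeros((1080, 1920, 3), np.uint8)    # placeholder for the scene photograph
    rectified = cv2.warpPerspective(photo, H, (int(3 * scale), int(2 * scale)))
    print(rectified.shape)
    # distances measured in 'rectified' are proportional to ground distances,
    # so lengths such as skid marks can be read off as pixels / scale = metres.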

  18. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    Full Text Available In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.

  19. Number 13 / Part I. Music. 3. Mad Scenes: A Warning against Overwhelming Passions

    Directory of Open Access Journals (Sweden)

    Marisi Rossella

    2017-03-01

    Full Text Available This study focuses on mad scenes in poetry and musical theatre, stressing that, according to Aristotle’s theory on catharsis and the Affektenlehre, they had a pedagogical role on the audience. Some mad scenes by J.S. Bach, Handel and Mozart are briefly analyzed, highlighting their most relevant textual and musical characteristics.

  20. Noise-robust cortical tracking of attended speech in real-world acoustic scenes

    DEFF Research Database (Denmark)

    Fuglsang, Søren; Dau, Torsten; Hjortkjær, Jens

    2017-01-01

    Selectively attending to one speaker in a multi-speaker scenario is thought to synchronize low-frequency cortical activity to the attended speech signal. In recent studies, reconstruction of speech from single-trial electroencephalogram (EEG) data has been used to decode which talker a listener...... is attending to in a two-talker situation. It is currently unclear how this generalizes to more complex sound environments. Behaviorally, speech perception is robust to the acoustic distortions that listeners typically encounter in everyday life, but it is unknown whether this is mirrored by a noise......-robust neural tracking of attended speech. Here we used advanced acoustic simulations to recreate real-world acoustic scenes in the laboratory. In virtual acoustic realities with varying amounts of reverberation and number of interfering talkers, listeners selectively attended to the speech stream...

  1. Improving Remote Sensing Scene Classification by Integrating Global-Context and Local-Object Features

    Directory of Open Access Journals (Sweden)

    Dan Zeng

    2018-05-01

    Full Text Available Recently, many researchers have been dedicated to using convolutional neural networks (CNNs) to extract global-context features (GCFs) for remote-sensing scene classification. Commonly, accurate classification of scenes requires knowledge about both the global context and local objects. However, unlike the natural images in which the objects cover most of the image, objects in remote-sensing images are generally small and decentralized. Thus, it is hard for vanilla CNNs to focus on both global context and small local objects. To address this issue, this paper proposes a novel end-to-end CNN by integrating the GCFs and local-object-level features (LOFs). The proposed network includes two branches, the local object branch (LOB) and global semantic branch (GSB), which are used to generate the LOFs and GCFs, respectively. Then, the concatenation of features extracted from the two branches allows our method to be more discriminative in scene classification. Three challenging benchmark remote-sensing datasets were extensively experimented on; the proposed approach outperformed the existing scene classification methods and achieved state-of-the-art results for all three datasets.
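
    The two-branch idea (global-context plus local-object features concatenated before classification) can be sketched in PyTorch as below; the tiny stand-in backbones and feature dimensions are illustrative placeholders rather than the authors' LOB/GSB design.

    # Illustrative sketch: concatenate features from a global branch and a
    # local-object branch before a shared classifier.
    import torch
    import torch.nn as nn

    class TwoBranchClassifier(nn.Module):
        def __init__(self, num_classes=21, feat_dim=256):
            super().__init__()
            def branch():  # tiny stand-in backbone; a real system would use a deep CNN
                return nn.Sequential(
                    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(inplace=True),
                    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, feat_dim))
            self.global_branch = branch()   # whole scene -> global-context features
            self.local_branch = branch()    # object-attentive input -> local-object features
            self.classifier = nn.Linear(2 * feat_dim, num_classes)

        def forward(self, scene, local_crop):
            gcf = self.global_branch(scene)
            lof = self.local_branch(local_crop)
            return self.classifier(torch.cat([gcf, lof], dim=1))

    logits = TwoBranchClassifier()(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
    print(logits.shape)  # (2, 21)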

  2. Ligand influences on properties of uranium coordination complexes. Structure, reactivity, and spectroscopy

    International Nuclear Information System (INIS)

    Kosog, Boris

    2012-01-01

    In this thesis several different aspects of uranium chemistry are presented. It was shown that the terminal uranium(V) oxo and imido complexes [((RArO)3tacn)U(V)(O)] and [((RArO)3tacn)U(V)(NSiMe3)] (R = tBu, Ad) can be oxidized by silver(I) hexafluoroantimonate to the uranium(VI) oxo and imido complexes [((RArO)3tacn)U(VI)(O)]SbF6 and [((RArO)3tacn)U(VI)(NSiMe3)]SbF6. While equatorial coordination is observed for the tBu derivative of the oxo complex, due to stabilization by the inverse trans-influence, normal axial coordination is observed for the Ad derivative and both imido complexes. The inverse trans-influence was thus proven to be a key factor for the coordination mode of a terminal ligand on high-valent uranium complexes. L_III XANES was shown to be a powerful tool for the determination of oxidation states of uranium complexes. Therefore, a series of uranium complexes in all stable oxidation states of uranium, +III to +VI, was prepared and their spectra analyzed. All compounds bear only O-donor ligands in addition to the chelating trisaryloxide-tacn ligand. A separation of 1.5 to 3 eV in the white-line energy is observed between the different oxidation states. This series can be used as a reference for compounds where the oxidation-state assignment is not obvious, such as the ketyl radical complex [((t-BuArO)3tacn)U(O-C(t-BuPh)2·-)]. For this complex, an oxidation state of +IV could be assigned. Moreover, a series of isostructural uranium(IV) complexes was prepared. The influence of different ligands, ordered according to the spectrochemical series, on the electronic and magnetic properties could be shown using UV/vis/NIR spectroscopy and variable-temperature SQUID measurements. Calculations of uranium L_III XANES spectra show a variation in the shape of the spectra, and thus high-resolution PFY-XANES would be a powerful tool to determine the electronic influence of these different axial ligands. Using the single N-anchored ligand system, the first

  3. Plastic influence functions for calculating J-integral of complex-cracks in pipe

    International Nuclear Information System (INIS)

    Jeong, Jae-Uk; Choi, Jae-Boong; Kim, Moon-Ki; Huh, Nam-Su; Kim, Yun-Jae

    2016-01-01

    In this study, new plastic influence functions, h_1, for estimating the J-integral of a pipe with a complex crack were proposed based on systematic 3-dimensional (3-D) elastic-plastic finite element (FE) analyses using the Ramberg-Osgood (R-O) relation, in which global bending moment, axial tension and internal pressure were considered as loading conditions. Based on the present plastic influence functions, a GE/EPRI-type J-estimation scheme for complex-cracked pipes is suggested, and the results from the proposed J-estimation were compared with FE results using both R-O fit parameters and actual tensile data for SA376 TP304 stainless steel. The comparison demonstrates that, although the proposed scheme is sensitive to the fitting range of the R-O parameters, it shows overall good agreement with the FE results using the R-O relation. Thus, the proposed engineering J prediction method can be utilized to assess the instability of a complex crack in pipes for R-O materials. - Highlights: • New h_1 values of the GE/EPRI method for complex-cracked pipes are proposed. • The plastic limit loads of complex-cracked pipes using the Mises yield criterion are provided. • New J estimates for complex-cracked pipes are proposed based on the GE/EPRI concept. • The proposed J estimates are validated against 3-D finite element results.
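
    For readers unfamiliar with the GE/EPRI format, the generic shape of such a J-estimation scheme with the Ramberg-Osgood law \( \epsilon/\epsilon_0 = \sigma/\sigma_0 + \alpha\,(\sigma/\sigma_0)^n \) is
    \[
    J = J_e(a_\mathrm{eff}) + J_p, \qquad
    J_p = \alpha\, \sigma_0\, \epsilon_0\, c\; h_1(\text{geometry}, n) \left( \frac{P}{P_0} \right)^{n+1},
    \]
    where \(P\) is the applied load (moment, tension or pressure), \(P_0\) the corresponding plastic limit load and \(c\) a characteristic length of the cracked section. The tabulated h_1 values and their geometry dependence are what the paper provides; the symbols here follow common usage rather than the paper's exact notation.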

  4. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    Science.gov (United States)

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  5. Tachistoscopic illumination and masking of real scenes.

    Science.gov (United States)

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.

  6. Range and intensity vision for rock-scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, SG

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  7. Evaluating Color Descriptors for Object and Scene Recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2010-01-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been

  8. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Science.gov (United States)

    Levy-Gigi, Einat; Kéri, Szabolcs

    2012-01-01

    Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  9. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD.

    Directory of Open Access Journals (Sweden)

    Einat Levy-Gigi

    Full Text Available Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  10. Repfinder: Finding approximately repeated scene elements for image editing

    KAUST Repository

    Cheng, Ming-Ming

    2010-07-26

    Repeated elements are ubiquitous and abundant in both manmade and natural scenes. Editing such images while preserving the repetitions and their relations is nontrivial due to overlap, missing parts, deformation across instances, illumination variation, etc. Manually enforcing such relations is laborious and error-prone. We propose a novel framework where user scribbles are used to guide detection and extraction of such repeated elements. Our detection process, which is based on a novel boundary band method, robustly extracts the repetitions along with their deformations. The algorithm only considers the shape of the elements, and ignores similarity based on color, texture, etc. We then use topological sorting to establish a partial depth ordering of overlapping repeated instances. Missing parts on occluded instances are completed using information from other instances. The extracted repeated instances can then be seamlessly edited and manipulated for a variety of high level tasks that are otherwise difficult to perform. We demonstrate the versatility of our framework on a large set of inputs of varying complexity, showing applications to image rearrangement, edit transfer, deformation propagation, and instance replacement. © 2010 ACM.
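
    The partial depth ordering of overlapping instances can be sketched with an ordinary topological sort over pairwise "in-front-of" relations; the occlusion edges below are illustrative inputs, not the output of the boundary-band detector.

    # Illustrative sketch: derive a partial depth order from pairwise
    # occlusion judgements ("a occludes b" means a is in front of b).
    from graphlib import TopologicalSorter

    occludes = {           # instance -> set of instances it occludes
        "leaf_2": {"leaf_1"},
        "leaf_3": {"leaf_1", "leaf_4"},
        "leaf_4": set(),
        "leaf_1": set(),
    }
    # static_order() places occluded instances before their occluders,
    # so reversing it yields a front-to-back ordering
    front_to_back = list(TopologicalSorter(occludes).static_order())[::-1]
    print(front_to_back)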

  11. SENSOR-TOPOLOGY BASED SIMPLICIAL COMPLEX RECONSTRUCTION FROM MOBILE LASER SCANNING

    Directory of Open Access Journals (Sweden)

    S. Guinard

    2018-05-01

    Full Text Available We propose a new method for the reconstruction of simplicial complexes (combining points, edges and triangles) from 3D point clouds from Mobile Laser Scanning (MLS). Our main goal is to produce a reconstruction of a scene that is adapted to the local geometry of objects. Our method uses the inherent topology of the MLS sensor to define a spatial adjacency relationship between points. We then investigate each possible connection between adjacent points and filter them by searching for collinear structures in the scene, or structures perpendicular to the laser beams. Next, we create triangles for each triplet of self-connected edges. Last, we improve this method with a regularization based on the co-planarity of triangles and collinearity of remaining edges. We compare our results to a naive simplicial complex reconstruction based on edge length.

  12. Making a scene: exploring the dimensions of place through Dutch popular music, 1960-2010

    NARCIS (Netherlands)

    Brandellero, A.; Pfeffer, K.

    2015-01-01

    This paper applies a multi-layered conceptualisation of place to the analysis of particular music scenes in the Netherlands, 1960-2010. We focus on: the clustering of music-related activities in locations; the delineation of spatially tied music scenes, based on a shared identity, reproduced over

  13. Functional LH1 antenna complexes influence electron transfer in bacterial photosynthetic reaction centers

    NARCIS (Netherlands)

    Visschers, R.W.; Vulto, S.I.E.; Jones, M.R.; van Grondelle, R.; Kraayenhof, R.

    1999-01-01

    The effect of the light harvesting 1 (LH1) antenna complex on the driving force for light-driven electron transfer in the Rhodobacter sphaeroides reaction center has been examined. Equilibrium redox titrations show that the presence of the LH1 antenna complex influences the free energy change for

  15. Cortical networks dynamically emerge with the interplay of slow and fast oscillations for memory of a natural scene.

    Science.gov (United States)

    Mizuhara, Hiroaki; Sato, Naoyuki; Yamaguchi, Yoko

    2015-05-01

    Neural oscillations are crucial for revealing dynamic cortical networks and for serving as a possible mechanism of inter-cortical communication, especially in association with mnemonic function. The interplay of the slow and fast oscillations might dynamically coordinate the mnemonic cortical circuits to rehearse stored items during working memory retention. We recorded simultaneous EEG-fMRI during a working memory task involving a natural scene to verify whether the cortical networks emerge with the neural oscillations for memory of the natural scene. The slow EEG power was enhanced in association with the better accuracy of working memory retention, and accompanied cortical activities in the mnemonic circuits for the natural scene. Fast oscillation showed a phase-amplitude coupling to the slow oscillation, and its power was tightly coupled with the cortical activities for representing the visual images of natural scenes. The mnemonic cortical circuit with the slow neural oscillations would rehearse the distributed natural scene representations with the fast oscillation for working memory retention. The coincidence of the natural scene representations could be obtained by the slow oscillation phase to create a coherent whole of the natural scene in the working memory. Copyright © 2015 Elsevier Inc. All rights reserved.
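
    Phase-amplitude coupling of the kind reported here is often quantified with a mean-vector-length measure; the sketch below, including the band edges and the coupling measure itself, reflects common practice rather than the exact analysis of this study.

    # Illustrative sketch: mean-vector-length phase-amplitude coupling between
    # a slow band (phase) and a fast band (amplitude) of one EEG channel.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    def bandpass(x, lo, hi, fs, order=4):
        b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    def pac_mvl(x, fs, slow=(4, 8), fast=(30, 60)):
        phase = np.angle(hilbert(bandpass(x, *slow, fs)))     # phase of the slow band
        amp = np.abs(hilbert(bandpass(x, *fast, fs)))         # envelope of the fast band
        return np.abs(np.mean(amp * np.exp(1j * phase)))

    # toy signal: gamma bursts locked to the theta peak -> non-zero coupling
    fs, t = 250, np.arange(0, 20, 1 / 250)
    theta = np.sin(2 * np.pi * 6 * t)
    gamma = (1 + theta) * 0.2 * np.sin(2 * np.pi * 40 * t)
    eeg = theta + gamma + 0.1 * np.random.default_rng(0).standard_normal(t.size)
    print(pac_mvl(eeg, fs))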

  16. How children remember neutral and emotional pictures: boundary extension in children's scene memories.

    Science.gov (United States)

    Candel, Ingrid; Merckelbach, Harald; Houben, Katrijn; Vandyck, Inne

    2004-01-01

    Boundary extension is the tendency to remember more of a scene than was actually shown. The dominant interpretation of this memory illusion is that it originates from schemata that people construct when viewing a scene. Evidence of boundary extension has been obtained primarily with adult participants who remember neutral pictures. The current study addressed the developmental stability of this phenomenon. Therefore, we investigated whether children aged 10-12 years display boundary extension for neutral pictures. Moreover, we examined emotional scene memory. Eighty-seven children drew pictures from memory after they had seen either neutral or emotional pictures. Both their neutral and emotional drawings revealed boundary extension. Apparently, the schema construction that underlies boundary extension is a robust and ubiquitous process.

  17. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education.

    Science.gov (United States)

    Castaldelli-Maia, João Mauricio; Oliveira, Hercílio Pereira; Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2012-01-01

    Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). Qualitative study at two universities in the state of São Paulo. Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests (P < 0.05), and scenes graded higher than 8 by all three judges were defined as "quality scenes". Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  18. Social dimension and complexity differentially influence brain responses during feedback processing.

    Science.gov (United States)

    Pfabigan, Daniela M; Gittenberger, Marianne; Lamm, Claus

    2017-10-30

    Recent research emphasizes the importance of social factors during performance monitoring. Thus, the current study investigated the impact of social stimuli, such as communicative gestures, on feedback processing. Moreover, it addressed a shortcoming of previous studies, which failed to consider stimulus complexity as a potential confounding factor. Twenty-four volunteers performed a time estimation task while their electroencephalogram was recorded. Either social complex, social non-complex, non-social complex, or non-social non-complex stimuli were used to provide performance feedback. No effects of social dimension or complexity were found for task performance. In contrast, Feedback-Related Negativity (FRN) and P300 amplitudes were sensitive to both factors, with larger FRN and P300 amplitudes after social compared to non-social stimuli, and larger FRN amplitudes after complex positive than non-complex positive stimuli. P2 amplitudes were solely sensitive to feedback valence and social dimension. Subjectively, social complex stimuli were rated as more motivating than non-social complex ones. Independently of each other, social dimension and visual complexity influenced amplitude variation during performance monitoring. Social stimuli seem to be perceived as more salient, which is corroborated by the P2, FRN and P300 results, as well as by the subjective ratings. This could be explained by their relevance during everyday social interactions.

  19. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  20. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of focal plane array electronics and unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is then removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
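
    The core of the registration step can be sketched with whole-pixel phase correlation (the sub-pixel refinement used by the authors is omitted); the function and variable names are illustrative.

    # Illustrative sketch: whole-pixel phase correlation between two frames.
    import numpy as np

    def phase_correlation_shift(ref, mov, eps=1e-12):
        # return the (dy, dx) translation by which 'mov' is displaced relative to 'ref'
        cross_power = np.fft.fft2(mov) * np.conj(np.fft.fft2(ref))
        cross_power /= np.abs(cross_power) + eps
        corr = np.abs(np.fft.ifft2(cross_power))
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        # wrap shifts larger than half the frame into negative displacements
        if dy > ref.shape[0] // 2: dy -= ref.shape[0]
        if dx > ref.shape[1] // 2: dx -= ref.shape[1]
        return dy, dx

    # toy usage: a random scene shifted by (5, -3) pixels
    rng = np.random.default_rng(0)
    scene = rng.random((128, 128))
    shifted = np.roll(scene, (5, -3), axis=(0, 1))
    print(phase_correlation_shift(scene, shifted))  # -> (5, -3)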

  1. The perception of naturalness correlates with low-level visual features of environmental scenes.

    Directory of Open Access Journals (Sweden)

    Marc G Berman

    Full Text Available Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
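
    A hedged sketch of this feature-then-classifier pipeline is given below; the concrete feature definitions (edge density, straight-line density, mean saturation, hue-histogram entropy), the thresholds, the classifier and the toy data are illustrative stand-ins for the features and analysis described in the abstract.

    # Illustrative sketch: low-level features related to perceived naturalness,
    # fed to a linear classifier (natural vs. built).
    import numpy as np
    import cv2
    from sklearn.svm import LinearSVC

    def scene_features(bgr):
        gray = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
        hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
        edges = cv2.Canny(gray, 100, 200)
        edge_density = float(np.count_nonzero(edges)) / edges.size
        lines = cv2.HoughLinesP(edges, 1, np.pi / 180, 60, minLineLength=30, maxLineGap=5)
        line_density = 0.0 if lines is None else len(lines) / edges.size
        saturation_mean = float(hsv[..., 1].mean())
        hue_hist = np.bincount(hsv[..., 0].ravel(), minlength=180) / hsv[..., 0].size
        nonzero = hue_hist[hue_hist > 0]
        hue_diversity = float(-(nonzero * np.log(nonzero)).sum())  # hue-histogram entropy
        return [edge_density, line_density, saturation_mean, hue_diversity]

    # toy data: random stand-ins for labelled natural (1) / built (0) scenes
    rng = np.random.default_rng(0)
    images = [rng.integers(0, 256, (128, 128, 3), dtype=np.uint8) for _ in range(20)]
    labels = rng.integers(0, 2, 20)
    X = np.array([scene_features(im) for im in images])
    clf = LinearSVC(max_iter=10000).fit(X, labels)
    print(clf.score(X, labels))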

  2. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    Science.gov (United States)

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including every-day objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  3. ELECTRIC FACTORS INFLUENCING THE COMPLEX EROSION PROCESSING BY INTRODUCING THE ELECTROLYTE THROUGH THE TRANSFER OBJECT

    Directory of Open Access Journals (Sweden)

    Alin Nioata

    2014-05-01

    Full Text Available Electrical and electrochemical complex erosion processing is influenced by a great number of factors that act in tight interdependence and mutually influence one another, determining the stability of the process and the final technological characteristics. The quantities that take part in the fundamental phenomena of the complex erosion mechanism and that contribute to defining the technological characteristics are its factors. The paper presents the potential difference U and the current intensity I as determining factors of the complex erosion process, as well as other factors deriving from them: the current density and the power of the supply source.

  4. Effect of Viewing Smoking Scenes in Motion Pictures on Subsequent Smoking Desire in Audiences in South Korea.

    Science.gov (United States)

    Sohn, Minsung; Jung, Minsoo

    2017-07-17

    In the modern era of heightened awareness of public health, smoking scenes in movies remain relatively free from public monitoring. The effect of smoking scenes in movies on the promotion of viewers' smoking desire remains unknown. The study aimed to explore whether exposure of adolescent smokers to images of smoking in films could stimulate smoking behavior. Data were derived from a national Web-based sample survey of 748 Korean high-school students. Participants aged 16-18 years were randomly assigned to watch three short video clips with or without smoking scenes. After adjusting covariates using propensity score matching, paired sample t test and logistic regression analyses compared the difference in smoking desire before and after exposure of participants to smoking scenes. For male adolescents, cigarette craving was significantly higher in those who watched movies with smoking scenes than in the control group who did not view smoking scenes (t(307.96) = 2.066, P < .05). Regulating smoking scenes in films and assigning a smoking-related screening grade to films is warranted. ©Minsung Sohn, Minsoo Jung. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 17.07.2017.

  5. Position-Invariant Robust Features for Long-Term Recognition of Dynamic Outdoor Scenes

    Science.gov (United States)

    Kawewong, Aram; Tangruamsub, Sirinart; Hasegawa, Osamu

    A novel Position-Invariant Robust Feature, designated as PIRF, is presented to address the problem of highly dynamic scene recognition. The PIRF is obtained by identifying existing local features (i.e. SIFT) that have a wide baseline visibility within a place (one place contains more than one sequential images). These wide-baseline visible features are then represented as a single PIRF, which is computed as an average of all descriptors associated with the PIRF. Particularly, PIRFs are robust against highly dynamical changes in scene: a single PIRF can be matched correctly against many features from many dynamical images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching an individual PIRF to a set of features from test images, with subsequent majority voting to identify a place with the highest matched PIRF. The PIRF system is trained and tested on 2000+ outdoor omnidirectional images and on COLD datasets. Despite its simplicity, PIRF offers a markedly better rate of recognition for dynamic outdoor scenes (ca. 90%) than the use of other features. Additionally, a robot navigation system based on PIRF (PIRF-Nav) can outperform other incremental topological mapping methods in terms of time (70% less) and memory. The number of PIRFs can be reduced further to reduce the time while retaining high accuracy, which makes it suitable for long-term recognition and localization.
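
    The core PIRF computation (keep local features that remain matched across a window of consecutive images of a place and represent each by the average of its descriptors) can be sketched as follows; the matcher settings and the simple frame-to-frame chaining are illustrative assumptions rather than the authors' exact procedure.

    # Illustrative sketch: average SIFT descriptors that stay matched through a
    # window of consecutive images, one averaged descriptor per tracked feature.
    import numpy as np
    import cv2

    def pirf_descriptors(gray_images):
        sift = cv2.SIFT_create()
        matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)
        descs = [sift.detectAndCompute(im, None)[1] for im in gray_images]
        if any(d is None for d in descs):
            return np.empty((0, 128), np.float32)
        # chain matches from the first image through every later image
        tracks = {i: [descs[0][i]] for i in range(len(descs[0]))}
        index = {i: i for i in tracks}                 # row of each track in the current frame
        for t in range(1, len(descs)):
            matches = {m.queryIdx: m.trainIdx for m in matcher.match(descs[t - 1], descs[t])}
            new_index = {}
            for track, row in index.items():
                if row in matches:                     # feature still visible in frame t
                    new_index[track] = matches[row]
                    tracks[track].append(descs[t][matches[row]])
            index = new_index
        surviving = [np.mean(tracks[k], axis=0) for k in index]   # wide-baseline features
        return np.array(surviving, np.float32)

    # usage (assuming 'frames' is a list of grayscale uint8 images of one place):
    # pirfs = pirf_descriptors(frames)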

  6. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  7. The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

    Science.gov (United States)

    Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel

    2013-01-01

    This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…

  8. The complexity of narrative interferes in the use of conjunctions in children with specific language impairment.

    Science.gov (United States)

    Gonzalez, Deborah Oliveira; Cáceres, Ana Manhani; Bento-Gaz, Ana Carolina Paiva; Befi-Lopes, Debora Maria

    2012-01-01

    To verify the use of conjunctions in narratives, and to investigate the influence of stimulus complexity on the type of conjunctions used by children with specific language impairment (SLI) and children with typical language development. Participants were 40 children (20 with typical language development and 20 with SLI) with ages between 7 and 10 years, paired by age range. Fifteen stories of increasing complexity were used to obtain the narratives; stories were classified as mechanical, behavioral or intentional, and each of them was represented by four scenes. Narratives were analyzed according to the occurrence and classification of conjunctions. Both groups used more coordinative than subordinate conjunctions, with a significant decrease in the use of conjunctions in the discourse of SLI children. The use of conjunctions varied according to the type of narrative: for coordinative conjunctions, both groups differed only between intentional and behavioral narratives, with higher occurrence in behavioral ones; for subordinate conjunctions, typically developing children's performance did not differ between narrative types, while SLI children produced fewer occurrences in intentional narratives than in the other narrative types. Both groups used more coordinative than subordinate conjunctions; however, typically developing children produced more conjunctions than SLI children. The production of children with SLI was influenced by the stimulus, since more complex narratives elicited less use of subordinate conjunctions.

  9. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under quality variations (e.g., varying scales and noise). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores high-level features for RS scene classification under different scale and noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which applies a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs in producing discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of RS scenes by directly propagating the label information to fully-connected layers. A joint loss is used to train the anti-noise network by integrating an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. Experiments conducted on three RS scene datasets of different scales show that the DSFATN method achieves excellent performance and great robustness under different scale and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.
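
    As a rough illustration of how a classification loss and an anti-noise constraint can be combined into a single objective, the PyTorch sketch below pairs a cross-entropy term with a feature-consistency term between a degraded scene and its clean counterpart; the weighting, the feature choice and the function name are assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(logits, noisy_feats, clean_feats, labels, alpha=0.5):
    """Softmax classification loss on the (noisy) input plus an anti-noise
    constraint that pulls noisy features toward the clean-scene features.
    `alpha` balances the two terms and is a tunable assumption."""
    classification = F.cross_entropy(logits, labels)
    anti_noise = F.mse_loss(noisy_feats, clean_feats)
    return classification + alpha * anti_noise
```

    In practice the clean features would come from the same network run on the undegraded patch, with gradients optionally detached from that branch.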

  10. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education

    Directory of Open Access Journals (Sweden)

    João Mauricio Castaldelli-Maia

    Full Text Available CONTEXT AND OBJECTIVES: Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). DESIGN AND SETTING: Qualitative study at two universities in the state of São Paulo. METHODS: Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests (P < 0.05); scenes graded > 8 were defined as "quality scenes". RESULTS: Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. CONCLUSIONS: We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  11. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbit mechanisms, weather conditions and minimum antenna area, synthetic aperture radar (SAR equipped on near-space platform is more suitable for sustained large-scene imaging compared with the spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS, which is a novel wide-swath imaging mode and allows the beam of SAR to scan along the azimuth, can reduce the time of echo acquisition for large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, firstly, two-step processing (TSP is adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in two-dimensional frequency domain (FD based on Stolt interpolation. Finally, a modified TSP (MTSP is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging application.

  12. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  13. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  14. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    Science.gov (United States)

    Falomir, Zoe; Kluth, Thomas

    2018-05-01

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey test was carried out with human participants who openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed, and, taking this into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.

  15. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes.

    Science.gov (United States)

    Pasqualotto, Achille; Finucane, Ciara M; Newell, Fiona N

    2013-09-01

    We investigated the effects of indirect, ambient visual information on haptic spatial memory. Using touch only, participants first learned an array of objects arranged in a scene and were subsequently tested on their recognition of that scene which was always hidden from view. During haptic scene exploration, participants could either see the surrounding room or were blindfolded. We found a benefit in haptic memory performance only when ambient visual information was available in the early stages of the task but not when participants were initially blindfolded. Specifically, when ambient visual information was available a benefit on performance was found in a subsequent block of trials during which the participant was blindfolded (Experiment 1), and persisted over a delay of one week (Experiment 2). However, we found that the benefit for ambient visual information did not transfer to a novel environment (Experiment 3). In Experiment 4 we further investigated the nature of the visual information that improved haptic memory and found that geometric information about a surrounding (virtual) room rather than isolated object landmarks, facilitated haptic scene memory. Our results suggest that vision improves haptic memory for scenes by providing an environment-centred, allocentric reference frame for representing object location through touch. Copyright © 2013 Elsevier B.V. All rights reserved.

  16. Panoramic Search: The Interaction of Memory and Vision in Search through a Familiar Scene

    Science.gov (United States)

    Oliva, Aude; Wolfe, Jeremy M.; Arsenio, Helga C.

    2004-01-01

    How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or…

  17. Combination of Morphological Operations with Structure based Partitioning and grouping for Text String detection from Natural Scenes

    OpenAIRE

    Vyankatesh V. Rampurkar; Gyankamal J. Chhajed

    2014-01-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene perception, content-based image retrieval, assistive direction-finding and automatic geocoding. Nowadays, different approaches such as contour-based, image binarization and enhancement based, gradient-based and colour-reduction-based techniques can be used for text detection in natural scenes. In this paper the combination of morphological operations with structure based part...

  18. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

    In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods learning features from single frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to ...

  19. How context information and target information guide the eyes from the first epoch of search in real-world scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2014-02-11

    This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

  20. Detecting text in natural scenes with multi-level MSER and SWT

    Science.gov (United States)

    Lu, Tongwei; Liu, Renjun

    2018-04-01

    The detection of characters in natural scenes is susceptible to factors such as complex backgrounds, variable viewing angles and diverse forms of language, which lead to poor detection results. Aiming at these problems, a new text detection method is proposed, consisting of two main stages: candidate region extraction and text region detection. In the first stage, the method uses multiple scale transformations of the original image and multiple thresholds of maximally stable extremal regions (MSER) to detect candidate text regions, which allows character regions to be detected comprehensively. In the second stage, SWT maps are obtained by applying the stroke width transform (SWT) algorithm to the candidate regions, and cascaded classifiers are then used to prune non-text regions. The proposed method was evaluated on the standard benchmark dataset of ICDAR2011 and on datasets of our own. The experimental results show that the proposed method improves greatly over other text detection methods.
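
    The candidate-region stage, multi-scale and multi-threshold MSER, can be approximated with OpenCV as in the hedged sketch below; the scale set and the default MSER parameters are illustrative, and the SWT computation and cascaded classifiers of the second stage are not reproduced here.

```python
import cv2
import numpy as np

def multiscale_mser_regions(image_bgr, scales=(1.0, 0.75, 0.5)):
    """Run MSER at several image scales and map the detected regions back
    to the original resolution, producing candidate text regions."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    mser = cv2.MSER_create()
    candidates = []
    for s in scales:
        resized = cv2.resize(gray, None, fx=s, fy=s,
                             interpolation=cv2.INTER_AREA)
        regions, _ = mser.detectRegions(resized)
        for pts in regions:
            candidates.append((pts / s).astype(np.int32))  # back to full scale
    return candidates
```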

  1. The lifesaving potential of specialized on-scene medical support for urban tactical operations.

    Science.gov (United States)

    Metzger, Jeffery C; Eastman, Alexander L; Benitez, Fernando L; Pepe, Paul E

    2009-01-01

    Since the 1980s, the specialized field of tactical medicine has evolved with growing support from numerous law-enforcement and medical organizations. On-scene backup from tactical emergency medical support (TEMS) providers has not only permitted more immediate advanced medical aid to injured officers, victims, bystanders, and suspects, but also allows for rapid after-incident medical screening or minor treatments that can obviate an unnecessary transport to an emergency department. The purpose of this report is to document one very explicit benefit of TEMS deployment, namely, a situation in which a police officer's life was saved by the routine on-scene presence of specialized TEMS physicians. In this specific case, a police officer was shot in the anterior neck during a law-enforcement operation and became moribund with massive hemorrhage and compromised airway. Two TEMS physicians, who had been integrated into the tactical law-enforcement team, were on scene, controlled the hemorrhage, and provided a surgical airway. By the time of arrival at the hospital, the patient had begun purposeful movements and, within 12 hours, was alert and oriented. Considering the rapid decline in the patient's condition, it was later deemed by quality assurance reviewers that the on-scene presence of these TEMS providers was lifesaving.

  2. Influence of step complexity and presentation style on step performance of computerized emergency operating procedures

    Energy Technology Data Exchange (ETDEWEB)

    Xu Song [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China); Li Zhizhong [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)], E-mail: zzli@tsinghua.edu.cn; Song Fei; Luo Wei; Zhao Qianyi; Salvendy, Gavriel [Department of Industrial Engineering, Tsinghua University, Beijing 100084 (China)

    2009-02-15

    With the development of information technology, computerized emergency operating procedures (EOPs) are taking the place of paper-based ones. However, the ergonomics issues of computerized EOPs have not been studied adequately, since industrial practice is still quite limited. This study examined the influence of step complexity and presentation style of EOPs on step performance. A simulated computerized EOP system was developed in two presentation styles. Style A: a combination of one- and two-dimensional flowcharts; Style B: a combination of a two-dimensional flowchart and a success logic tree. Step complexity was quantified by a complexity measure model based on an entropy concept. Forty subjects participated in an experiment of EOP execution using the simulated system. Analysis of the experimental data indicates that step complexity and presentation style significantly influence step performance (both step error rate and operation time). Regression models were also developed. The regression analysis results imply that the operation time of a step can be well predicted by step complexity, while step error rate can only be partly predicted by it. The results of a questionnaire investigation imply that step error rate was influenced not only by the operation task itself but also by other human factors. These findings may be useful for the design and assessment of computerized EOPs.
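
    The abstract does not spell out the entropy-based complexity model, but the general idea of scoring a step by the Shannon entropy of its constituent elements can be sketched as follows; the element encoding and the measure itself are illustrative assumptions, not the authors' model.

```python
import math
from collections import Counter

def step_entropy(step_elements):
    """Shannon entropy of a procedure step treated as a bag of element
    types (checks, control actions, logic operators, ...)."""
    counts = Counter(step_elements)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Example: a step with two parameter checks, one control action and one AND
print(step_entropy(["check", "check", "control_action", "and"]))
```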

  3. Environmental impact to multimedia systems on the example of fingerprint aging behavior at crime scenes

    Science.gov (United States)

    Merkel, Ronny; Breuhan, Andy; Hildebrandt, Mario; Vielhauer, Claus; Bräutigam, Anja

    2012-06-01

    In the field of crime scene forensics, current methods of evidence collection, such as the acquisition of shoe-marks, tire-impressions, palm-prints or fingerprints, are in most cases still performed in an analogue way. For example, fingerprints are captured by powdering and sticky tape lifting, ninhydrine bathing or cyanoacrylate fuming and subsequent photographing. Images of the evidence are then further processed by forensic experts. With the upcoming use of new multimedia systems for the digital capturing and processing of crime scene traces in forensics, higher resolutions can be achieved, leading to much better quality of forensic images. Furthermore, the fast and mostly automated preprocessing of such data using digital signal processing techniques is an emerging field. Also, by the optical and non-destructive lifting of forensic evidence, traces are not destroyed and therefore can be re-captured, e.g. by creating time series of a trace, to extract its aging behavior and maybe determine the time the trace was left. However, such new methods and tools face different challenges, which need to be addressed before practical application in the field. Based on the example of fingerprint age determination, which has been an unresolved research challenge for forensic experts for decades, we evaluate the influence of different environmental conditions as well as different types of sweat and their implications for the capturing sensors, preprocessing methods and feature extraction. We use a Chromatic White Light (CWL) sensor as an example of such a new optical and contactless measurement device and investigate the influence of 16 different environmental conditions, 8 different sweat types and 11 different preprocessing methods on the aging behavior of 48 fingerprint time series (2592 fingerprint scans in total). We show the challenges that arise for such new multimedia systems capturing and processing forensic evidence.

  4. The effects of scene characteristics, resolution, and compression on the ability to recognize objects in video

    Science.gov (United States)

    Dumke, Joel; Ford, Carolyn G.; Stange, Irena W.

    2011-03-01

    Public safety practitioners increasingly use video for object recognition tasks. These end users need guidance regarding how to identify the level of video quality necessary for their application. The quality of video used in public safety applications must be evaluated in terms of its usability for specific tasks performed by the end user. The Public Safety Communication Research (PSCR) project performed a subjective test as one of the first in a series to explore visual intelligibility in video: a user's ability to recognize an object in a video stream given various conditions. The test sought to measure the effects on visual intelligibility of three scene parameters (target size, scene motion, scene lighting), several compression rates, and two resolutions (VGA (640x480) and CIF (352x288)). Seven similarly sized objects were used as targets in nine sets of near-identical source scenes, where each set was created using a different combination of the parameters under study. Viewers were asked to identify the objects via multiple choice questions. Objective measurements were performed on each of the scenes, and the ability of the measurement to predict visual intelligibility was studied.

  5. Polycation-sodium lauryl ether sulfate-type surfactant complexes: influence of ethylene oxide length.

    Science.gov (United States)

    Vleugels, Leo F W; Pollet, Jennifer; Tuinier, Remco

    2015-05-21

    Polyelectrolyte-surfactant complexes (PESC) are a class of materials which form spontaneously by self-assembly driven by electrostatic and hydrophobic interactions. PESC containing sodium lauryl ether sulfates (SLES) have found wide application in hair care products like shampoo. Typically, SLES with only one or two ethylene oxide (EO) groups are used for this application. We have studied the influence of the size of the EO block (ranging from 0 to 30 EO groups) on complexation with two model polycations: linear polyDADMAC and branched PEI. PESC size and electrostatic properties were determined during stepwise titration of buffered polycation solutions. The critical aggregation concentration (CAC) of PESC was determined by surface tension measurements and fluorescence spectroscopy. For polyDADMAC, there is no influence of the size of the EO block on the complexation behavior; the stiff polycation governs the structure formation. For PEI, it was seen that the EO block size does affect the structure of the complexes. The CAC value of the investigated complexes turns out to be rather independent of the EO block size; however, the CMC/CAC ratio decreases with increasing size of the EO block. This latter observation explains why the Lochhead-Goddard effect is most effective for small EO blocks.

  6. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    Science.gov (United States)

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  7. Assembling a game development scene? Uncovering Finland’s largest demo party

    Directory of Open Access Journals (Sweden)

    Heikki Tyni

    2014-03-01

    Full Text Available The study takes a look at Assembly, a large-scale LAN and demo party founded in 1992 and organized annually in Helsinki, Finland. Assembly is used as a case study to explore the relationship between computer hobbyism – including gaming, the demoscene and other related activities – and professional game development. Drawing from expert interviews, a visitor query and news coverage, we ask what kind of functions Assembly has played for the scene in general, and for the formation and fostering of the Finnish game industry in particular. The conceptual contribution of the paper is constructed around the interrelated concepts of scene, technicity and gaming capital.

  8. Image policy, subjectivation and argument scenes

    Directory of Open Access Journals (Sweden)

    Ângela Cristina Salgueiro Marques

    2014-12-01

    Full Text Available This paper aims to discuss, with a focus on Jacques Rancière, how an image policy can be observed in the creative production of scenes of dissent from which the political agent emerges, appears and constitutes himself in a process of subjectivation. The political and critical power of the image is linked to acts of survival: operations and attempts that make it possible to resist the captures, silences and excesses committed by media discourses, by social institutions and by the State.

  9. John Lennon, autograph hound: The fan-musician community in Hamburg's early rock-and-roll scene, 1960–65

    Directory of Open Access Journals (Sweden)

    Julia Sneeringer

    2011-03-01

    Full Text Available This article explores the Beat music scene in Hamburg, West Germany, in the early 1960s. This scene became famous for its role in incubating the Beatles, who played over 250 nights there in 1960–62, but this article focuses on the prominent role of fans in this scene. Here fans were welcomed by bands and club owners as cocreators of a scene that offered respite from the prevailing conformism of West Germany during the Economic Miracle. This scene, born at the confluence of commercial and subcultural impulses, was also instrumental in transforming rock and roll from a working-class niche product to a cross-class lingua franca for youth. It was also a key element in West Germany's broader processes of democratization during the 1960s, opening up social space in which the meanings of authority, respectability, and democracy itself could be questioned and reworked.

  10. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and runs in real time. The analysis system provides high accuracy and has also been evaluated on a public website that, on average, archives more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  11. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

    Full Text Available Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs perform a highly accurate, but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information by images with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain a full scene coverage and therefore, typically, multiple observations are collected from different locations. This can be achieved by either placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated by using information extracted from the captured data and then, a limited field of view may lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple and coupled sensor devices offers the advantages of an extended field of view which results in a stabilized pose estimation, an improved registration of the recorded point clouds and an improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potentials of such multi-view range imaging systems is presented which consists of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring in low altitudes and it is suitable for getting dynamic observations which might arise from moving cars or from moving pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of captured point cloud data is presented which is essential for a high quality of all subsequent tasks. The approach involves using

  12. Memory-guided attention during active viewing of edited dynamic scenes.

    Science.gov (United States)

    Valuch, Christian; König, Peter; Ansorge, Ulrich

    2017-01-01

    Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested to which degree memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In Experiments 1 and 2, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect is (top-down) dependent on the viewer's active matching of scene content across cuts.

  13. Planarity constrained multi-view depth map reconstruction for urban scenes

    Science.gov (United States)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.

  14. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    Science.gov (United States)

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  15. Sex differences in the brain response to affective scenes with or without humans.

    Science.gov (United States)

    Proverbio, Alice Mado; Adorni, Roberta; Zani, Alberto; Trestianu, Laura

    2009-10-01

    Recent findings have demonstrated that women might be more reactive than men to viewing painful stimuli (vicarious response to pain), and therefore more empathic [Han, S., Fan, Y., & Mao, L. (2008). Gender difference in empathy for pain: An electrophysiological investigation. Brain Research, 1196, 85-93]. We investigated whether the two sexes differed in their cerebral responses to affective pictures portraying humans in different positive or negative contexts compared to natural or urban scenarios. 440 IAPS slides were presented to 24 Italian students (12 women and 12 men). Half the pictures displayed humans while the remaining scenes lacked visible persons. ERPs were recorded from 128 electrodes and swLORETA (standardized weighted Low-Resolution Electromagnetic Tomography) source reconstruction was performed. Occipital P115 was greater in response to persons than to scenes and was affected by the emotional valence of the human pictures. This suggests that processing of biologically relevant stimuli is prioritized. Orbitofrontal N2 was greater in response to positive than negative human pictures in women but not in men, and not to scenes. A late positivity (LP) to suffering humans far exceeded the response to negative scenes in women but not in men. In both sexes, the contrast suffering-minus-happy humans revealed a difference in the activation of the occipito/temporal, right occipital (BA19), bilateral parahippocampal, left dorsal prefrontal cortex (DPFC) and left amygdala. However, increased right amygdala and right frontal area activities were observed only in women. The humans-minus-scenes contrast revealed a difference in the activation of the middle occipital gyrus (MOG) in men, and of the left inferior parietal (BA40), left superior temporal gyrus (STG, BA38) and right cingulate (BA31) in women (270-290 ms). These data indicate a sex-related difference in the brain response to humans, possibly supporting human empathy.

  16. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development has, in turn, prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to describe accurately the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to a real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
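
    One concrete piece of such a pipeline, recovering the camera pose from correspondences between known layout coordinate points and their detected positions in the current frame, can be sketched with a standard PnP solver. The function name and the undistorted-camera assumption are mine; the layout extraction and target tracking steps described above are not reproduced.

```python
import cv2
import numpy as np

def estimate_camera_pose(layout_points_3d, image_points_2d, camera_matrix):
    """Recover camera rotation and translation from 3D layout coordinate
    points of the recognised scene and their 2D locations in the frame."""
    dist_coeffs = np.zeros(5)  # sketch assumes an undistorted camera
    ok, rvec, tvec = cv2.solvePnP(
        np.asarray(layout_points_3d, dtype=np.float64),
        np.asarray(image_points_2d, dtype=np.float64),
        camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else None
```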

  17. Context modulates attention to social scenes in toddlers with autism

    Science.gov (United States)

    Chawarska, Katarzyna; Macari, Suzanne; Shic, Frederick

    2013-01-01

    Background In typical development, the unfolding of social and communicative skills hinges upon the ability to allocate and sustain attention towards people, a skill present moments after birth. Deficits in social attention have been well documented in autism, though the underlying mechanisms are poorly understood. Methods In order to parse the factors that are responsible for limited social attention in toddlers with autism, we manipulated the context in which a person appeared in their visual field with regard to the presence of salient social (child-directed speech and eye contact) and nonsocial (distractor toys) cues for attention. Participants included 13- to 25-month-old toddlers with autism (AUT; n=54), developmental delay (DD; n=22), and typical development (TD; n=48). Their visual responses were recorded with an eye-tracker. Results In conditions devoid of eye contact and speech, the distribution of attention between key features of the social scene in toddlers with autism was comparable to that in DD and TD controls. However, when explicit dyadic cues were introduced, toddlers with autism showed decreased attention to the entire scene and, when they looked at the scene, they spent less time looking at the speaker’s face and monitoring her lip movements than the control groups. In toddlers with autism, decreased time spent exploring the entire scene was associated with increased symptom severity and lower nonverbal functioning; atypical language profiles were associated with decreased monitoring of the speaker’s face and her mouth. Conclusions While in certain contexts toddlers with autism attend to people and objects in a typical manner, they show decreased attentional response to dyadic cues for attention. Given that mechanisms supporting responsivity to dyadic cues are present shortly after birth and are highly consequential for development of social cognition and communication, these findings have important implications for the understanding of the

  18. Far-infrared pedestrian detection for advanced driver assistance systems using scene context

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong; Wu, Qingyao

    2016-04-01

    Pedestrian detection is one of the most critical but challenging components in advanced driver assistance systems. Far-infrared (FIR) images are well-suited for pedestrian detection even in a dark environment. However, most current detection approaches just focus on pedestrian patterns themselves, where robust and real-time detection cannot be well achieved. We propose a fast FIR pedestrian detection approach, called MAP-HOGLBP-T, to explicitly exploit the scene context for the driver assistance system. In MAP-HOGLBP-T, three algorithms are developed to exploit the scene contextual information from roads, vehicles, and background objects of high homogeneity, and we employ the Bayesian approach to build a classifier learner which respects the scene contextual information. We also develop a multiframe approval scheme to enhance the detection performance based on spatiotemporal continuity of pedestrians. Our empirical study on real-world datasets has demonstrated the efficiency and effectiveness of the proposed method. The performance is shown to be better than that of state-of-the-art low-level feature-based approaches.
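
    The method's name suggests HOG and LBP appearance features computed over candidate windows. A minimal sketch of such a combined descriptor is given below, assuming a fixed-size grayscale FIR window (e.g., 64x128 pixels); the scene-context models, the Bayesian learner and the multiframe approval scheme are not reproduced.

```python
import numpy as np
from skimage.feature import hog, local_binary_pattern

def hog_lbp_descriptor(window_gray):
    """Concatenate a HOG descriptor with a uniform-LBP histogram for one
    candidate pedestrian window (a 2D grayscale array)."""
    hog_vec = hog(window_gray, orientations=9, pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2), feature_vector=True)
    lbp = local_binary_pattern(window_gray, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])
```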

  19. Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation

    Directory of Open Access Journals (Sweden)

    Guanzhou Chen

    2018-05-01

    Full Text Available Scene classification, aiming to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in the remote sensing image analysis field. Deep-learning-model-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-level methods are computationally expensive and time-consuming. Consequently, in this paper, we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of a large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: the AID dataset, the UCMerced dataset, the NWPU-RESISC dataset, and the EuroSAT dataset. Results show that our proposed training method was effective and increased overall accuracy (3% in the AID experiments, 5% in the UCMerced experiments, 1% in the NWPU-RESISC and EuroSAT experiments) for small and shallow models. We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower spatial resolution images, numerous categories, and fewer training samples.
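
    The temperature-softened matching described here corresponds to the standard distillation objective. A minimal PyTorch sketch is shown below; the temperature and mixing weight are tunable assumptions rather than the values used in the paper.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=4.0, alpha=0.7):
    """Mix a KL term between temperature-softened teacher and student
    outputs with the usual cross-entropy on the ground-truth labels."""
    soft_teacher = F.softmax(teacher_logits / temperature, dim=1)
    log_soft_student = F.log_softmax(student_logits / temperature, dim=1)
    kd = F.kl_div(log_soft_student, soft_teacher, reduction="batchmean")
    kd = kd * (temperature ** 2)  # rescale so gradients match the hard loss
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1.0 - alpha) * ce
```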

  20. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    Science.gov (United States)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.

  1. Eye Movement Control in Scene Viewing and Reading: Evidence from the Stimulus Onset Delay Paradigm

    Science.gov (United States)

    Luke, Steven G.; Nuthmann, Antje; Henderson, John M.

    2013-01-01

    The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…

  2. The Role of Forensic Botany in Solving a Case: Scientific Evidence on the Falsification of a Crime Scene.

    Science.gov (United States)

    Aquila, Isabella; Gratteri, Santo; Sacco, Matteo A; Ricci, Pietrantonio

    2018-05-01

    Forensic botany can provide useful information for pathologists, particularly on crime scene investigation. We report the case of a man who arrived at the hospital and died shortly afterward. The body showed widespread electrical lesions. The statements of his brother and wife about the incident aroused a large amount of suspicion in the investigators. A crime scene investigation was carried out, along with a botanical morphological survey on small vegetations found on the corpse. An autopsy was also performed. Botanical analysis showed some samples of Xanthium spinosum, thus leading to the discovery of the falsification of the crime scene although the location of the true crime scene remained a mystery. The botanical analysis, along with circumstantial data and autopsy findings, led to the discovery of the real crime scene and became crucial as part of the legal evidence regarding the falsity of the statements made to investigators. © 2017 American Academy of Forensic Sciences.

  3. Research on 3D virtual campus scene modeling based on 3ds Max and VRML

    Science.gov (United States)

    Kang, Chuanli; Zhou, Yanliu; Liang, Xianyue

    2015-12-01

    With the rapid development of modem technology, the digital information management and the virtual reality simulation technology has become a research hotspot. Virtual campus 3D model can not only express the real world objects of natural, real and vivid, and can expand the campus of the reality of time and space dimension, the combination of school environment and information. This paper mainly uses 3ds Max technology to create three-dimensional model of building and on campus buildings, special land etc. And then, the dynamic interactive function is realized by programming the object model in 3ds Max by VRML .This research focus on virtual campus scene modeling technology and VRML Scene Design, and the scene design process in a variety of real-time processing technology optimization strategy. This paper guarantees texture map image quality and improve the running speed of image texture mapping. According to the features and architecture of Guilin University of Technology, 3ds Max, AutoCAD and VRML were used to model the different objects of the virtual campus. Finally, the result of virtual campus scene is summarized.

  4. Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Science.gov (United States)

    Ai, Dan-Ni; Han, Xian-Hua; Ruan, Xiang; Chen, Yen-Wei

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.
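
    A rough sketch of the pipeline, learning an ICA colour transform and then computing SIFT on each independent component, is given below. The paper learns the transform per category from a database, whereas this illustration fits it to a single image, and the library calls (scikit-learn's FastICA, OpenCV's SIFT) are stand-ins rather than the authors' implementation.

```python
import cv2
import numpy as np
from sklearn.decomposition import FastICA

def cic_sift_descriptors(image_bgr):
    """Map an image into three colour independent components via ICA and
    compute SIFT descriptors on each component image."""
    pixels = image_bgr.reshape(-1, 3).astype(np.float64)
    ica = FastICA(n_components=3, random_state=0)
    components = ica.fit_transform(pixels).reshape(image_bgr.shape)

    sift = cv2.SIFT_create()
    descriptors = []
    for c in range(3):
        channel = cv2.normalize(components[..., c], None, 0, 255,
                                cv2.NORM_MINMAX).astype(np.uint8)
        _, desc = sift.detectAndCompute(channel, None)
        if desc is not None:
            descriptors.append(desc)
    return np.vstack(descriptors) if descriptors else None
```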

  5. The ART of CSI: An augmented reality tool (ART) to annotate crime scenes in forensic investigation

    NARCIS (Netherlands)

    Streefkerk, J.W.; Houben, M.; Amerongen, P. van; Haar, F. ter; Dijk, J.

    2013-01-01

    Forensic professionals have to collect evidence at crime scenes quickly and without contamination. A handheld Augmented Reality (AR) annotation tool allows these users to virtually tag evidence traces at crime scenes and to review, share and export evidence lists. In a user walkthrough with this

  6. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  7. Exploiting current-generation graphics hardware for synthetic-scene generation

    Science.gov (United States)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide many computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. The paper also covers language tradeoffs among conventional shader programming, the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), including performance trades and possible pathways for future tool development.
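
    The point about structuring algorithms to minimize CPU-GPU transfers can be shown with a toy example. The sketch below uses CuPy purely as an illustration (the paper itself discusses shader, CUDA and OpenCL implementations): the scene description is uploaded once, intermediate arrays stay resident on the device, and only finished frames are copied back to the host.

```python
import cupy as cp  # assumes a CUDA-capable GPU with CuPy installed

def render_frames_on_gpu(scene_params, n_frames):
    """Toy frame loop illustrating data residency: one host-to-device
    upload, all intermediate math on the GPU, one download per frame."""
    params = cp.asarray(scene_params)        # single host-to-device transfer
    frames = []
    for i in range(n_frames):
        phase = params * (i + 1)             # stays on the device
        frame = cp.sin(phase).sum(axis=-1)   # stays on the device
        frames.append(cp.asnumpy(frame))     # one device-to-host copy per frame
    return frames
```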

  8. The Effects of Linguistic Labels Related to Abstract Scenes on Memory

    Directory of Open Access Journals (Sweden)

    Kentaro Inomata

    2011-10-01

    Full Text Available Boundary extension is the false memory of scene content beyond the actual boundary of a picture. Gagnier (2011) suggested that a linguistic label has no effect on the magnitude of boundary extension. Although she controlled the timing of the presentation and the information in the linguistic label, the information in the stimulus was not changed. In the present study, the depiction of the main object was controlled in order to change the contextual information of a scene. In the experiment, 68 participants were shown 12 pictures. The stimulus set consisted of pictures that either depicted or did not depict the main object, and half of them were presented with a linguistic description. Participants rated the object-less pictures as closer than the original pictures when the former were presented with linguistic labels. However, when they were presented without linguistic labels, boundary extension did not occur. There was no effect of labels on the pictures that depicted the main objects. On the basis of these results, the linguistic label enhances the representation of an abstract scene such as a homogeneous field or a wall. This finding suggests that boundary extension may be affected not only by visual information but also by other sensory information mediated by linguistic representation.

  9. Research on spatial features of streets under the influence of immersion communication technology brought by new media

    Science.gov (United States)

    Xu, Hua-wei; Feng, Chen

    2017-04-01

    The rapid development of new media has exacerbated the complexity of information interaction in urban street space. Under the influence of immersion communication, the streetscape has come to constitute a special 'media convergence'-like scene, which poses a major challenge for maintaining the order of the urban streetscape. The Spatial Visual Communication Research Method, which breaks the limitations of traditional aesthetic space research, can provide a new perspective on this phenomenon. This study aims to analyze and summarize the communication characteristics of new media and their context, which will be helpful for understanding the social meaning behind changes in the order of the street's spatial and physical environment.

  10. Situational Influences on Reactions to Observed Violence.

    Science.gov (United States)

    Berkowitz, Leonard

    1986-01-01

    Examines data on what situational factors influence people's desire to view violent television programming. Surveys research on the effects on viewer's behavior of the presence of other observers, the nature of the available target, situational features operating as retrieval cues, the viewers' interpretations of the violent scenes, and the…

  11. Setting the scene

    International Nuclear Information System (INIS)

    Curran, S.

    1977-01-01

    The reasons for the special meeting on the breeder reactor are outlined with some reference to the special Scottish interest in the topic. Approximately 30% of the electrical energy generated in Scotland is nuclear and the special developments at Dounreay make policy decisions on the future of the commercial breeder reactor urgent. The participants review the major questions arising in arriving at such decisions. In effect an attempt is made to respond to the wish of the Secretary of State for Energy to have informed debate. To set the scene the importance of energy availability as regards to the strength of the national economy is stressed and the reasons for an increasing energy demand put forward. Examination of alternative sources of energy shows that none is definitely capable of filling the foreseen energy gap. This implies an integrated thermal/breeder reactor programme as the way to close the anticipated gap. The problems of disposal of radioactive waste and the safeguards in the handling of plutonium are outlined. Longer-term benefits, including the consumption of plutonium and naturally occurring radioactive materials, are examined. (author)

  12. Forensic botany as a useful tool in the crime scene: Report of a case.

    Science.gov (United States)

    Margiotta, Gabriele; Bacaro, Giovanni; Carnevali, Eugenia; Severini, Simona; Bacci, Mauro; Gabbrielli, Mario

    2015-08-01

    The ubiquitous presence of plant species makes forensic botany useful in many criminal cases. Bryophytes in particular are useful for forensic investigations because many of them are clonal and widely distributed. Bryophyte shoots easily become attached to shoes and clothes, and when found on footwear they can provide links between a crime scene and individuals. We report a case of suicide of a young girl that occurred in Siena, Tuscany, Italy. The traumatic injuries could be ascribed to suicide, to homicide, or to accident. In the absence of eyewitnesses who could testify to the dynamics of the event, the crime scene investigation was fundamental to clarifying what happened. During the scene analysis, some fragments of Tortula muralis Hedw. and Bryum capillare Hedw. were found. The fragments were analyzed by a bryologist in order to compare them with the moss present on the stairs that the victim used immediately before her death. The analysis of these bryophytes found at the crime scene allowed the accident to be reconstructed. Even if this evidence is, of course, circumstantial, it can be useful in forensic cases, together with the other evidence, to reconstruct the dynamics of events. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  13. The formation of music-scenes in Manchester and their relation to urban space and the image of the city

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2013-01-01

    The formation of music-scenes in Manchester and their relation to urban space and the image of the city. The paper I would like to present derives from a study of the relation between the atmospheric qualities of a city and the formation of music scenes. I have studied Manchester, which is a known… …and the image of the city. This image has later been utilised in the branding of Manchester as a creative city. The case is interesting in relation to the current ideals of planning ‘creative cities’ and local cultural scenes. The music scenes cannot be seen as participatory projects and have developed in more…

  14. Beyond Contagion: Reality Mining Reveals Complex Patterns of Social Influence.

    Directory of Open Access Journals (Sweden)

    Aamena Alshamsi

    Full Text Available Contagion, a concept from epidemiology, has long been used to characterize social influence on people's behavior and affective (emotional) states. While it has revealed many useful insights, it is not clear whether the contagion metaphor is sufficient to fully characterize the complex dynamics of psychological states in a social context. Using wearable sensors that capture daily face-to-face interaction, combined with three daily experience sampling surveys, we collected the most comprehensive data set of personality and emotion dynamics of an entire community of work. From this high-resolution data about actual (rather than self-reported) face-to-face interaction, a complex picture emerges where contagion (that can be seen as adaptation of behavioral responses to the behavior of other people) cannot fully capture the dynamics of transitory states. We found that social influence has two opposing effects on states: adaptation effects that go beyond mere contagion, and complementarity effects whereby individuals' behaviors tend to complement the behaviors of others. Surprisingly, these effects can exhibit completely different directions depending on the stable personality or emotional dispositions (stable traits) of target individuals. Our findings provide a foundation for richer models of social dynamics, and have implications on organizational engineering and workplace well-being.

  15. Beyond Contagion: Reality Mining Reveals Complex Patterns of Social Influence.

    Science.gov (United States)

    Alshamsi, Aamena; Pianesi, Fabio; Lepri, Bruno; Pentland, Alex; Rahwan, Iyad

    2015-01-01

    Contagion, a concept from epidemiology, has long been used to characterize social influence on people's behavior and affective (emotional) states. While it has revealed many useful insights, it is not clear whether the contagion metaphor is sufficient to fully characterize the complex dynamics of psychological states in a social context. Using wearable sensors that capture daily face-to-face interaction, combined with three daily experience sampling surveys, we collected the most comprehensive data set of personality and emotion dynamics of an entire community of work. From this high-resolution data about actual (rather than self-reported) face-to-face interaction, a complex picture emerges where contagion (that can be seen as adaptation of behavioral responses to the behavior of other people) cannot fully capture the dynamics of transitory states. We found that social influence has two opposing effects on states: adaptation effects that go beyond mere contagion, and complementarity effects whereby individuals' behaviors tend to complement the behaviors of others. Surprisingly, these effects can exhibit completely different directions depending on the stable personality or emotional dispositions (stable traits) of target individuals. Our findings provide a foundation for richer models of social dynamics, and have implications on organizational engineering and workplace well-being.

  16. REGULARITIES OF THE INFLUENCE OF ORGANIZATIONAL AND TECHNOLOGICAL FACTORS ON THE DURATION OF CONSTRUCTION OF HIGH-RISE MULTIFUNCTIONAL COMPLEXES

    Directory of Open Access Journals (Sweden)

    ZAIATS Yi. I.

    2015-10-01

    Full Text Available Problem statement. The technical and economic indexes of projects for the construction of high-rise multifunctional complexes, namely the duration of construction works and the cost of building products, depend on the construction technology and the method of construction organization, the choice of which is in turn influenced by the architectural, design, structural and engineering decisions made. Purpose. To reveal the regularities of the influence of organizational and technological factors on the duration of construction of high-rise multifunctional complexes in conditions of dense city building. Conclusion. The revealed regularities of the influence of organizational and technological factors (the height, the complexity of the design and estimate documentation, the complexity of the construction works, the complexity of managing the investment and construction project, the economy factor, the comfort factor, and the technology of the projected solutions) on the duration of construction of high-rise multifunctional complexes (depending on their height: from 73.5 m to 100 m inclusive, and from 100 m to 200 m inclusive) allow their influence to be assessed quantitatively and can be used in developing a methodology for substantiating the expediency and effectiveness of high-rise construction projects in conditions of compact urban development, based on consideration of the influence of organizational and technological aspects.

  17. Cultural heritage and history in the European metal scene

    NARCIS (Netherlands)

    Klepper, de S.; Molpheta, S.; Pille, S.; Saouma, R.; During, R.; Muilwijk, M.

    2007-01-01

    This paper represents an inquiry into the use of history and cultural heritage in the metal scene. It is an attempt to show how history and cultural heritage can be spread among people in an unconventional way. The research method was built on an explorative study that included an

  18. Words Matter: Scene Text for Image Classification and Retrieval

    NARCIS (Netherlands)

    Karaoglu, S.; Tao, R.; Gevers, T.; Smeulders, A.W.M.

    Text in natural images typically adds meaning to an object or scene. In particular, text specifies which business places serve drinks (e.g., cafe, teahouse) or food (e.g., restaurant, pizzeria), and what kind of service is provided (e.g., massage, repair). The mere presence of text, its words, and

  19. Characterization, propagation, and simulation of infrared scenes; Proceedings of the Meeting, Orlando, FL, Apr. 16-20, 1990

    Science.gov (United States)

    Watkins, Wendell R.; Zegel, Ferdinand H.; Triplett, Milton J.

    1990-09-01

    Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, automated imaging IR seeker performance evaluation system, generation of signature data bases with fast codes, background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, discussion of IR scene generators, multiwavelength Scophony IR scene projector, LBIR target generator and calibrator for preflight seeker tests, dual-mode hardware-in-the-loop simulation facility, development of the IR blackbody source of gravity-type heat pipe and study of its characteristic.

  20. Enhancing Visual Basic GUI Applications using VRML Scenes

    OpenAIRE

    Bala Dhandayuthapani Veerasamy

    2010-01-01

    Rapid Application Development (RAD) addresses the ever-expanding need for speedy development of computer application programs that are sophisticated, reliable, and full-featured. Visual Basic was the first RAD tool for the Windows operating system, and many people still say it is the best. To make Visual Basic 6 applications more visually attractive, this paper describes how to use VRML scenes within the Visual Basic environment.

  1. Coping with Perceived Ethnic Prejudice on the Gay Scene

    Science.gov (United States)

    Jaspal, Rusi

    2017-01-01

    There has been only cursory research into the sociological and psychological aspects of ethnic/racial discrimination among ethnic minority gay and bisexual men, and none that focuses specifically upon British ethnic minority gay men. This article focuses on perceptions of intergroup relations on the gay scene among young British South Asian gay…

  2. Fire Engine Support and On-scene Time in Prehospital Stroke Care - A Prospective Observational Study.

    Science.gov (United States)

    Puolakka, Tuukka; Väyrynen, Taneli; Erkkilä, Elja-Pekka; Kuisma, Markku

    2016-06-01

    Introduction: On-scene time (OST) has previously been shown to be a significant component of Emergency Medical Services' (EMS') operational delay in acute stroke. Since stroke patients are routinely managed by two-person ambulance crews, increasing the number of personnel available on the scene is a possible method to improve their performance. Hypothesis: Using fire engine crews to support ambulances on the scene in acute stroke was hypothesized to be associated with a shorter OST. All patients transported to hospital as thrombolysis candidates during a one-year study period were registered by the ambulance crews using a case report form that included patient characteristics and operational EMS data. Seventy-seven patients (41 [53%] male; mean age of 68.9 years [SD=15]; mean Glasgow Coma Score [GCS] of 15 points [IQR=14-15]) were eligible for the study. Forty-five cases were managed by ambulance and fire engine crews together and 32 by the ambulance crews alone. The median ambulance response time was seven minutes (IQR=5-10) and the fire engine response time was six minutes (IQR=5-8). The number of EMS personnel on the scene was six (IQR=5-7) and two (IQR=2-2), and the OST was 21 minutes (IQR=18-26) and 24 minutes (IQR=20-32; P=.073) for the groups, respectively. In a subsequent regression analysis, using stroke as the dispatch code was the only variable associated with a short OST. Using fire engine crews to support ambulances in acute stroke care was not associated with a shorter on-scene stay when compared to standard management by two-person ambulance crews alone. Using stroke as the dispatch code was the only variable that was independently associated with a short OST. Puolakka T, Väyrynen T, Erkkilä E-P, Kuisma M. Fire engine support and on-scene time in prehospital stroke care - a prospective observational study. Prehosp Disaster Med. 2016;31(3):278-281.

  3. Low-threshold Care for Marginalised Hard Drug Users: Marginalisation and Socialisation in the Rotterdam Hard Drug Scene

    NARCIS (Netherlands)

    J. van der Poel (Agnes)

    2009-01-01

    textabstractSince the early 1990s several developments have taken place in the hard drug scene in the Netherlands. Key elements in these developments were harm reduction measures, introduction of crack, open drug scenes, police interventions, drug-related nuisance, low-threshold care facilities and

  4. The Influence of Time Pressure and Case Complexity on Physicians' Diagnostic Performance

    Directory of Open Access Journals (Sweden)

    Dalal A. ALQahtani

    2016-12-01

    Conclusions: Time pressure did not impact the diagnostic performance, whereas the complexity of the clinical case negatively influenced the diagnostic accuracy. Further studies with enhanced experimental manipulation of time pressure are needed to reveal the effect of time pressure, if any, on a physician's diagnostic performance.

  5. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

    Science.gov (United States)

    Grossman, S.

    2015-05-01

    Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene, manually intensive target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
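
    The comparison described here hinges on sweeping a detection threshold over the output of a standard spectral detector and tabulating detection rates. As a rough, hypothetical illustration (not the paper's NEF/ENVI workflow), the sketch below applies one common detector, a spectral matched filter, to a synthetic multispectral chip; the cube, target signature, and truth mask are all invented stand-ins.

```python
# Hypothetical sketch: sweep a detection threshold over matched-filter scores
# to produce detection rates. The data below are synthetic stand-ins, not
# Worldview-2 imagery or NEF signatures.
import numpy as np

def matched_filter_scores(cube, target_sig):
    """Spectral matched filter: cube is (rows, cols, bands), target_sig is (bands,)."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + 1e-6 * np.eye(b)   # regularized background covariance
    cov_inv = np.linalg.inv(cov)
    d = target_sig - mu
    scores = (x - mu) @ cov_inv @ d / (d @ cov_inv @ d)
    return scores.reshape(h, w)

def detection_rates(scores, truth_mask, thresholds):
    """Fraction of true target pixels detected at each threshold."""
    return [((scores >= t) & truth_mask).sum() / truth_mask.sum() for t in thresholds]

rng = np.random.default_rng(0)
cube = rng.normal(size=(100, 100, 8))            # stand-in for an 8-band image chip
target_sig = np.ones(8)
truth = np.zeros((100, 100), bool); truth[40:43, 50:53] = True
cube[truth] += target_sig                        # implant weak synthetic targets
scores = matched_filter_scores(cube, target_sig)
print(detection_rates(scores, truth, np.linspace(0.1, 0.9, 5)))
```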

  6. Range sections as rock models for intensity rock scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, S

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  7. Napping and the Selective Consolidation of Negative Aspects of Scenes

    Science.gov (United States)

    Payne, Jessica D.; Kensinger, Elizabeth A.; Wamsley, Erin; Spreng, R. Nathan; Alger, Sara; Gibler, Kyle; Schacter, Daniel L.; Stickgold, Robert

    2018-01-01

    After information is encoded into memory, it undergoes an offline period of consolidation that occurs optimally during sleep. The consolidation process not only solidifies memories, but also selectively preserves aspects of experience that are emotionally salient and relevant for future use. Here, we provide evidence that an afternoon nap is sufficient to trigger preferential memory for emotional information contained in complex scenes. Selective memory for negative emotional information was enhanced after a nap compared to wakefulness in two control conditions designed to carefully address interference and time-of-day confounds. Although prior evidence has connected negative emotional memory formation to rapid eye movement (REM) sleep physiology, we found that non-REM delta activity and the amount of slow wave sleep (SWS) in the nap were robustly related to the selective consolidation of negative information. These findings suggest that the mechanisms underlying memory consolidation benefits associated with napping and nighttime sleep are not always the same. Finally, we provide preliminary evidence that the magnitude of the emotional memory benefit conferred by sleep is equivalent following a nap and a full night of sleep, suggesting that selective emotional remembering can be economically achieved by taking a nap. PMID:25706830

  8. Sleep promotes lasting changes in selective memory for emotional scenes.

    Science.gov (United States)

    Payne, Jessica D; Chambers, Alexis M; Kensinger, Elizabeth A

    2012-01-01

    Although we know that emotional events enjoy a privileged status in our memories, we still have much to learn about how emotional memories are processed, stored, and how they change over time. Here we show a positive association between REM sleep and the selective consolidation of central, negative aspects of complex scenes. Moreover, we show that the placement of sleep is critical for this selective emotional memory benefit. When testing occurred 24 h post-encoding, subjects who slept soon after learning (24 h Sleep First group) had superior memory for emotional objects compared to subjects whose sleep was delayed for 16 h post-encoding following a full day of wakefulness (24 h Wake First group). However, this increase in memory for emotional objects corresponded with a decrease in memory for the neutral backgrounds on which these objects were placed. Furthermore, memory for emotional objects in the 24 h Sleep First group was comparable to performance after just a 12 h delay containing a night of sleep, suggesting that sleep soon after learning selectively stabilizes emotional memory. These results suggest that the sleeping brain preserves in long-term memory only what is emotionally salient and perhaps most adaptive to remember.

  9. End User Perceptual Distorted Scenes Enhancement Algorithm Using Partition-Based Local Color Values for QoE-Guaranteed IPTV

    Science.gov (United States)

    Kim, Jinsul

    In this letter, we propose a distorted scenes enhancement algorithm in order to provide an end-user perceptual QoE-guaranteed IPTV service. The block edge detection with a weight factor and the partition-based local color values method can be applied to degraded video frames affected by network transmission errors, such as out-of-order delivery, jitter, and packet loss, to improve QoE efficiently. Based on the quality metric results obtained after applying the distorted scenes enhancement algorithm, the distorted scenes were restored better than with other methods.
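
    The block-based recovery can be loosely pictured as follows: blocks flagged as damaged by transmission errors are re-filled from the local color statistics of their intact neighbors. The sketch below is not the authors' algorithm (their edge detection and weighting step is omitted); the frame, block size, and corruption map are assumed inputs.

```python
# A minimal sketch (not the letter's exact algorithm): conceal corrupted 8x8
# blocks by replacing them with the mean color of their uncorrupted neighbor
# partitions. The frame and the corruption map are assumed inputs.
import numpy as np

def conceal_blocks(frame, corrupted, block=8):
    """frame: (H, W, 3) float array; corrupted: boolean grid of shape (H//block, W//block)."""
    out = frame.copy()
    gh, gw = corrupted.shape
    for by in range(gh):
        for bx in range(gw):
            if not corrupted[by, bx]:
                continue
            neigh = []
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = by + dy, bx + dx
                if 0 <= ny < gh and 0 <= nx < gw and not corrupted[ny, nx]:
                    patch = frame[ny*block:(ny+1)*block, nx*block:(nx+1)*block]
                    neigh.append(patch.mean(axis=(0, 1)))
            if neigh:  # fill the damaged block with the local color estimate
                out[by*block:(by+1)*block, bx*block:(bx+1)*block] = np.mean(neigh, axis=0)
    return out

frame = np.random.rand(64, 64, 3)
corrupted = np.zeros((8, 8), bool); corrupted[3, 4] = True
restored = conceal_blocks(frame, corrupted)
```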

  10. The complexity behind the date

    CERN Multimedia

    2009-01-01

    For the waiting world, and indeed for most of us here at CERN, ‘the LHC schedule’ simply means the date that the LHC will restart - and we only take notice when that end-date changes. But in fact the schedule is a constantly evolving intricate document coordinating all the repairs, consolidation and commissioning in every part of the machine. So, what actually goes on behind the scenes in timing and planning all the work on one of the most complex scientific instruments ever built?

  11. The capture and recreation of 3D auditory scenes

    Science.gov (United States)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D directions digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraint. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating them by exploring the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build, and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach for headphone-based system. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.
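
    A central claim above is that a spherical microphone array can be steered digitally toward any 3D direction. The sketch below is a deliberate simplification: it shows digital steering with a plain frequency-domain delay-and-sum beamformer rather than the spherical-harmonic designs developed in the thesis, and the array geometry and test signal are synthetic.

```python
# Illustrative only: a frequency-domain delay-and-sum beamformer for microphones
# on a sphere, steered toward a chosen direction. The thesis designs
# spherical-harmonic beampatterns; this simpler sketch only shows digital steering.
import numpy as np

C = 343.0  # speed of sound, m/s

def steering_weights(mic_xyz, look_dir, freq):
    """Phase-align microphones for a plane wave arriving from look_dir (unit vector)."""
    delays = mic_xyz @ look_dir / C                  # seconds, relative to array center
    return np.exp(1j * 2 * np.pi * freq * delays) / len(mic_xyz)

def beamform(mic_spectra, weights):
    """mic_spectra: (num_mics,) complex bin values at one frequency."""
    return np.vdot(weights, mic_spectra)             # conjugate-weighted sum

# 32 microphones placed randomly on a sphere of radius 5 cm (stand-in geometry)
rng = np.random.default_rng(1)
v = rng.normal(size=(32, 3)); mic_xyz = 0.05 * v / np.linalg.norm(v, axis=1, keepdims=True)
look = np.array([1.0, 0.0, 0.0])
w = steering_weights(mic_xyz, look, freq=2000.0)
# Simulate a 2 kHz plane wave arriving from the look direction and beamform it
signal = np.exp(1j * 2 * np.pi * 2000.0 * (mic_xyz @ look / C))
print(abs(beamform(signal, w)))                      # ~1.0 when steered correctly
```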

  12. Influence of Exposure to Sexually Explicit Films on the Sexual Behavior of Secondary School Students in Ibadan, Nigeria.

    Science.gov (United States)

    Odeleye, Olubunmi; Ajuwon, Ademola J

    2015-01-01

    Young people in secondary schools who are prone to engage in risky sexual behaviors spend considerable time watching television (TV), which often presents sex scenes. The influence of exposure to sex scenes on TV (SSTV) has been little researched in Nigeria. This study was therefore designed to determine the perceived influence of exposure to SSTV on the sexual behavior of secondary school students in Ibadan North Local Government Area. A total of 489 randomly selected students were surveyed. The mean age of respondents was 14.1 ± 1.9 years and 53.8% were females. About 91% had ever been exposed to sex scenes. The type of TV program from which most respondents reported exposure to sexual scenes was movies (86.9%). The majority reported exposure to all forms of SSTV from secondary storage devices. Students whose TV watching behavior was not monitored had heavier exposures to SSTV compared with those who were. About 56.3% of females and 26.5% of males affirmed that watching SSTV had affected their sexual behavior. The predictor of sex-related activities was exposure to heavy sex scenes. Peer education and school-based programs should include topics that teach young people how to evaluate the presentation of TV programs. © The Author(s) 2015 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  13. Signs of depth-luminance covariance in 3-D cluttered scenes.

    Science.gov (United States)

    Scaccia, Milena; Langer, Michael S

    2018-03-01

    In three-dimensional (3-D) cluttered scenes such as foliage, deeper surfaces often are more shadowed and hence darker, and so depth and luminance often have negative covariance. We examined whether the sign of depth-luminance covariance plays a role in depth perception in 3-D clutter. We compared scenes rendered with negative and positive depth-luminance covariance where positive covariance means that deeper surfaces are brighter and negative covariance means deeper surfaces are darker. For each scene, the sign of the depth-luminance covariance was given by occlusion cues. We tested whether subjects could use this sign information to judge the depth order of two target surfaces embedded in 3-D clutter. The clutter consisted of distractor surfaces that were randomly distributed in a 3-D volume. We tested three independent variables: the sign of the depth-luminance covariance, the colors of the targets and distractors, and the background luminance. An analysis of variance showed two main effects: Subjects performed better when the deeper surfaces were darker and when the color of the target surfaces was the same as the color of the distractors. There was also a strong interaction: Subjects performed better under a negative depth-luminance covariance condition when targets and distractors had different colors than when they had the same color. Our results are consistent with a "dark means deep" rule, but the use of this rule depends on the similarity between the color of the targets and color of the 3-D clutter.

  14. Poetics of the Key Scenes of V. Pelevin’s Literary Historiosophy

    Directory of Open Access Journals (Sweden)

    Tatyana E. Sorokina

    2017-03-01

    Full Text Available The article studies the poetics of the key scenes of Victor Pelevin's artistic philosophy of history and attempts to identify the features of his rational creative manner. Firstly, the novels and stories of the writer present modernity transformed into grotesque images «of our days». Secondly, the dynamism of the plot's formation unfolds in a context of philosophical reflection and breakthroughs that allow the reader to feel attached to «wisdom» without leaving the area of the plot, which can be disclosed within mass culture. Thirdly, the philosophical component of the text does not require the reader's own intellectual effort; humor, irony and sarcasm eliminate the didactic vector of the book's narrative complexity. Fourthly, the frequent use of profanity and the freedom of the narrator and the characters from classical morality reinforce the grotesque sound of the works and stimulate the reader to experience a certain surge of marginality without, however, going beyond the act of reading. And fifthly, an introduction to Buddhist culture is accomplished, «preached» by means of the historical realities of modern Russia. The paradox of Pelevin's poetics: history is impossible, but it is impossible not to talk about it. What is not substantially constant appears in the dialogue of the novel. The ambiguity of the chronotope, the duality of time and space, and an ironic attitude to the character form the basis of the poetics not only of the novel «Chapaev and Emptiness», which raises the question of the existence of history as a fact of consciousness or as a reality existing outside our mind, but also of «The Sacred Book of the Werewolf» and «The Empire V». The key image is the image of emptiness. Consequently, the poetics of the key scenes of Pelevin's artistic historiosophy is considered on the material of the novels «Chapaev and Emptiness», «The Sacred Book of the Werewolf» and «The Empire V».

  15. Complex Network Structure Influences Processing in Long-Term and Short-Term Memory

    Science.gov (United States)

    Vitevitch, Michael S.; Chan, Kit Ying; Roodenrys, Steven

    2012-01-01

    Complex networks describe how entities in systems interact; the structure of such networks is argued to influence processing. One measure of network structure, clustering coefficient, C, measures the extent to which neighbors of a node are also neighbors of each other. Previous psycholinguistic experiments found that the C of phonological…

  16. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
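
    The core intuition, that reliable depth samples constrain a plane which then predicts depth at missing pixels, can be sketched without the paper's noise-adaptive fitting or MRF/graph-cut labeling. The region mask, synthetic floor, and plain least-squares fit below are illustrative assumptions only.

```python
# A simplified sketch of the idea: fit a plane z = a*u + b*v + c to the reliable
# depth samples of a (presumed planar) region, then infer depth at pixels where
# the sensor returned nothing. The paper's noise-adaptive fit and graph-cut MRF
# are not reproduced here.
import numpy as np

def fit_plane(us, vs, zs):
    """Least-squares plane z = a*u + b*v + c over pixel coordinates (u, v)."""
    A = np.column_stack([us, vs, np.ones_like(us)])
    (a, b, c), *_ = np.linalg.lstsq(A, zs, rcond=None)
    return a, b, c

def complete_depth(depth, region_mask):
    """Fill invalid (zero) depths inside region_mask using the fitted plane."""
    vs, us = np.nonzero(region_mask & (depth > 0))
    a, b, c = fit_plane(us.astype(float), vs.astype(float), depth[vs, us])
    out = depth.copy()
    hv, hu = np.nonzero(region_mask & (depth == 0))
    out[hv, hu] = a * hu + b * hv + c
    return out

# Synthetic floor-like region with a hole of missing measurements in the middle
depth = np.fromfunction(lambda v, u: 2.0 + 0.01 * v, (120, 160))
depth[50:70, 60:100] = 0.0
floor = np.ones_like(depth, bool)
completed = complete_depth(depth, floor)
```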

  17. TESSITURA OF A CREATIVE PROCESS: AUTOBIOGRAPHY ON SCENE

    Directory of Open Access Journals (Sweden)

    Daniel Santos Costa

    2015-04-01

    Full Text Available This text presents the weavings of a way of making scenic art that uses autobiography to support the creative process. We elucidate some of these weavings while legitimizing the production of knowledge through artistic praxis and sensitive experience. We introduce the concept of autobiography in analogy to the artistic process and then present the possibility of a laboratory setting in which reality and fiction are amalgamated. Keywords: creative process; autobiography; body.

  18. The Sport Expert's Attention Superiority on Skill-related Scene Dynamic by the Activation of left Medial Frontal Gyrus: An ERP and LORETA Study.

    Science.gov (United States)

    He, Mengyang; Qi, Changzhu; Lu, Yang; Song, Amanda; Hayat, Saba Z; Xu, Xia

    2018-05-21

    Extensive studies have shown that a sports expert is superior to a sports novice in the visual perceptual-cognitive processing of sports scene information; however, the attentional and neural basis of this has not been thoroughly explored. The present study examined whether a sport expert has attentional superiority for scene information relevant to his/her sport skill, and explored what factor drives this superiority. To address this problem, EEGs were recorded as participants passively viewed sport scenes (tennis vs. non-tennis) and negative emotional faces in the context of a visual attention task, in which pictures of sport scenes or of negative emotional faces randomly followed pictures with overlapping sport scenes and negative emotional faces. ERP results showed that, for experts, the evoked potential of attentional competition elicited by the overlapped tennis scene was significantly larger than that evoked by the overlapped non-tennis scene, whereas this effect was absent for novices. The LORETA showed that the experts' left medial frontal gyrus (MFG) was significantly more active than the right MFG when processing the overlapped tennis scene, but this lateralization effect was not significant in novices. These results indicate that experts have attentional superiority for skill-related scene information, even when the scene is intruded upon by negative emotional faces, which act as strong distractors prone to causing a negativity bias in the visual field. This superiority is actuated by the activation of the left MFG and is probably due to self-reference. Copyright © 2018 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. The evolution of cerebellum structure correlates with nest complexity.

    Science.gov (United States)

    Hall, Zachary J; Street, Sally E; Healy, Susan D

    2013-01-01

    Across the brains of different bird species, the cerebellum varies greatly in the amount of surface folding (foliation). The degree of cerebellar foliation is thought to correlate positively with the processing capacity of the cerebellum, supporting complex motor abilities, particularly manipulative skills. Here, we tested this hypothesis by investigating the relationship between cerebellar foliation and species-typical nest structure in birds. Increasing complexity of nest structure is a measure of a bird's ability to manipulate nesting material into the required shape. Consistent with our hypothesis, avian cerebellar foliation increases as the complexity of the nest built increases, setting the scene for the exploration of nest building at the neural level.

  20. A novel scene-based non-uniformity correction method for SWIR push-broom hyperspectral sensors

    Science.gov (United States)

    Hu, Bin-Lin; Hao, Shi-Jing; Sun, De-Xin; Liu, Yin-Nian

    2017-09-01

    A novel scene-based non-uniformity correction (NUC) method for short-wavelength infrared (SWIR) push-broom hyperspectral sensors is proposed and evaluated. This method relies on the assumption that for each band there will be ground objects with similar reflectance to form uniform regions when a sufficient number of scanning lines are acquired. The uniform regions are extracted automatically through a sorting algorithm, and are used to compute the corresponding NUC coefficients. SWIR hyperspectral data from airborne experiment are used to verify and evaluate the proposed method, and results show that stripes in the scenes have been well corrected without any significant information loss, and the non-uniformity is less than 0.5%. In addition, the proposed method is compared to two other regular methods, and they are evaluated based on their adaptability to the various scenes, non-uniformity, roughness and spectral fidelity. It turns out that the proposed method shows strong adaptability, high accuracy and efficiency.
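
    A rough way to picture scene-based NUC for a push-broom band is sketched below: each across-track detector's quasi-uniform level is estimated from the central quantiles of its sorted values over many scan lines, and per-detector gains are then normalized to a common level. This is only a stand-in for the paper's automatic uniform-region extraction; the scene, gain profile, and quantile choice are assumptions.

```python
# Rough illustration of scene-based NUC for one push-broom band: estimate a
# per-detector (per-column) response level from quasi-uniform samples and
# normalize columns to a common level to suppress striping.
import numpy as np

def scene_based_nuc(band, lo=0.4, hi=0.6):
    """band: (lines, detectors). Returns the gain-corrected band and the gains used."""
    srt = np.sort(band, axis=0)
    n = band.shape[0]
    # mean of the central 20% of each detector's sorted values ~ uniform-scene level
    level = srt[int(lo * n):int(hi * n)].mean(axis=0)
    gains = level.mean() / level
    return band * gains, gains

rng = np.random.default_rng(2)
true_scene = rng.uniform(100, 200, size=(500, 320))
detector_gain = rng.normal(1.0, 0.03, size=320)       # per-detector striping source
raw = true_scene * detector_gain
corrected, gains = scene_based_nuc(raw)
print(np.std(raw.mean(axis=0)), np.std(corrected.mean(axis=0)))  # striping reduced
```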

  1. Generation of Variations on Theme Music Based on Impressions of Story Scenes Considering Human's Feeling of Music and Stories

    Directory of Open Access Journals (Sweden)

    Kenkichi Ishizuka

    2008-01-01

    Full Text Available This paper describes a system which generates variations on theme music that fit story scenes represented by texts and/or pictures. The inputs to the system are the original theme music and numerical information about the given story scenes. The system varies the melodies, tempos, tones, tonalities, and accompaniments of the given theme music based on impressions of the story scenes. Genetic algorithms (GAs) using modular neural network (MNN) models as fitness functions are applied to music generation in order to reflect users' feelings about music and stories. The system adjusts the MNN models for each user online. This paper also describes evaluation experiments that confirm whether the generated variations on the theme music reflect the impressions of the story scenes appropriately.
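
    The generate-and-evaluate loop such a system relies on can be sketched as a small genetic algorithm. The fitness function below is a clearly labeled stand-in for the per-user modular neural networks: it merely keeps variations close to the theme while nudging the mean pitch toward a target that stands in for a scene impression. The theme and all parameters are invented for illustration.

```python
# Skeleton of a generate-and-select loop. The real system scores candidates with
# per-user modular neural networks; the fitness below is a crude stand-in.
import random

THEME = [60, 62, 64, 65, 67, 69, 67, 65]      # MIDI pitches of an invented theme

def mutate(melody, rate=0.3):
    return [p + random.choice((-2, -1, 1, 2)) if random.random() < rate else p
            for p in melody]

def fitness(melody, target_mean_pitch):
    closeness = -sum(abs(a - b) for a, b in zip(melody, THEME))       # stay recognizable
    impression = -abs(sum(melody) / len(melody) - target_mean_pitch)  # match the "scene"
    return closeness + 4 * impression

def evolve(target_mean_pitch, pop_size=30, generations=50):
    pop = [mutate(THEME) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda m: fitness(m, target_mean_pitch), reverse=True)
        parents = pop[:pop_size // 2]
        pop = parents + [mutate(random.choice(parents)) for _ in parents]
    return pop[0]

print(evolve(target_mean_pitch=66))            # a variation biased by the target
```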

  2. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs, which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
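
    The first scenario, treating the activations of a pre-trained CNN's penultimate (fully-connected input) layer as global image features and feeding them to a simple linear classifier, can be sketched as follows. The backbone here is a modern torchvision ResNet-18 rather than the networks evaluated in the paper, and loading of an HRRS scene dataset (e.g. UC Merced) is left as an assumption.

```python
# Sketch of CNN feature transfer: penultimate-layer activations of a pre-trained
# network as image features, followed by a linear classifier. Requires torch,
# torchvision, and scikit-learn; dataset loading is intentionally omitted.
import torch
import torchvision.models as models
import torchvision.transforms as T
from sklearn.linear_model import LogisticRegression

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Identity()          # expose the 512-d penultimate features
model.eval()

preprocess = T.Compose([T.Resize(256), T.CenterCrop(224), T.ToTensor(),
                        T.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])])

@torch.no_grad()
def extract(pil_images):
    batch = torch.stack([preprocess(im) for im in pil_images])
    return model(batch).numpy()

# train_imgs / train_labels / test_imgs are assumed to come from an HRRS scene
# dataset; any lists of PIL images and integer labels would work here.
# clf = LogisticRegression(max_iter=1000).fit(extract(train_imgs), train_labels)
# preds = clf.predict(extract(test_imgs))
```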

  3. The Processing Speed of Scene Categorization at Multiple Levels of Description: The Superordinate Advantage Revisited.

    Science.gov (United States)

    Banno, Hayaki; Saiki, Jun

    2015-03-01

    Recent studies have sought to determine which levels of categories are processed first in visual scene categorization and have shown that the natural and man-made superordinate-level categories are understood faster than are basic-level categories. The current study examined the robustness of the superordinate-level advantage in a visual scene categorization task. A go/no-go categorization task was evaluated with response time distribution analysis using an ex-Gaussian template. A visual scene was categorized at either the superordinate or the basic level, and two basic-level categories forming a superordinate category were judged as either similar or dissimilar to each other. First, outdoor/indoor and natural/man-made groups were used as superordinate categories to investigate whether the advantage could be generalized beyond the natural/man-made boundary. Second, a set of images forming a superordinate category was manipulated. We predicted that decreasing image set similarity within the superordinate-level category would work against the speed advantage. We found that basic-level categorization was faster than outdoor/indoor categorization when the outdoor category comprised dissimilar basic-level categories. Our results indicate that the superordinate-level advantage in visual scene categorization is labile across different categories and category structures. © 2015 SAGE Publications.

  4. The influence of vertical disparity gradient and cue conflict on EEG omega complexity in Panum's limiting case.

    Science.gov (United States)

    Li, Huayun; Jia, Huibin; Yu, Dongchuan

    2018-03-01

    Using behavioral measures and the ERP technique, researchers have discovered at least two factors that could influence the final perception of depth in Panum's limiting case: the vertical disparity gradient and the degree of cue conflict between two- and three-dimensional shapes. Although certain event-related potential components have been proved to be sensitive to the different levels of these two factors, some methodological limitations exist in this technique. In this study, we propose that the omega complexity of the EEG signal may serve as an important supplement to the traditional event-related potential technique. We found that trials with a lower vertical disparity gradient have lower omega complexity (i.e., higher global functional connectivity) in the occipital region, especially in the right-occipital hemisphere. Moreover, for occipital omega complexity, trials with low cue conflict have significantly larger omega complexity than those with medium and high cue conflict. It is also found that the electrodes located on the midline of the occipital region (i.e., POz and Oz) are more crucial to the impact of different levels of cue conflict on omega complexity than the other electrodes located in the left- and right-occipital hemispheres. These findings demonstrate that EEG omega complexity can reflect distinct neural activities evoked by Panum's limiting case configurations with different levels of vertical disparity gradient and cue conflict. Moreover, the influence of vertical disparity gradient and cue conflict on omega complexity may be region dependent. NEW & NOTEWORTHY The EEG omega complexity could reflect distinct neural activities evoked by Panum's limiting case configurations with different levels of vertical disparity gradient and cue conflict. The influence of vertical disparity gradient and cue conflict on omega complexity is region dependent. The omega complexity of the EEG signal can serve as an important supplement to the

  5. Analogical reasoning in children with specific language impairment: Evidence from a scene analogy task.

    Science.gov (United States)

    Krzemien, Magali; Jemel, Boutheina; Maillart, Christelle

    2017-01-01

    Analogical reasoning is a human ability that maps systems of relations. It develops along with relational knowledge, working memory and executive functions such as inhibition. It also maintains a mutual influence on language development. Some authors have taken a greater interest in the analogical reasoning ability of children with language disorders, specifically those with specific language impairment (SLI). These children apparently have weaker analogical reasoning abilities than their aged-matched peers without language disorders. Following cognitive theories of language acquisition, this deficit could be one of the causes of language disorders in SLI, especially those concerning productivity. To confirm this deficit and its link to language disorders, we use a scene analogy task to evaluate the analogical performance of SLI children and compare them to controls of the same age and linguistic abilities. Results show that children with SLI perform worse than age-matched peers, but similar to language-matched peers. They are more influenced by increased task difficulty. The association between language disorders and analogical reasoning in SLI can be confirmed. The hypothesis of limited processing capacity in SLI is also being considered.

  6. Application for 3D Scene Understanding in Detecting Discharge of Domestic Waste Along Complex Urban Rivers

    Science.gov (United States)

    Ninsalam, Y.; Qin, R.; Rekittke, J.

    2016-06-01

    In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services which has caused those living along the river to rely upon it for waste disposal. However, there has been little research to understand the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low cost sensor to identify point source discharge of solid waste using image classification methods. To demonstrate this we describe the following five-step method: 1) a strip of GoPro images are captured photogrammetrically and processed for dense point cloud generation; 2) depth for each image is generated through a backward projection of the point clouds; 3) a supervised image classification method based on Random Forest classifier is applied on the view dependent red, green, blue and depth (RGB-D) data; 4) point discharge locations of solid waste can then be mapped by projecting the classified images to the 3D point clouds; 5) then the landscape elements are classified into five types, such as vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results have demonstrated that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.
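
    Step 3 of the method, a supervised Random Forest classification applied to view-dependent RGB-D data, might look roughly like the per-pixel sketch below; the training pixels, labels, and images are synthetic stand-ins, and a real pipeline would add texture and neighborhood features.

```python
# Step-3-style sketch: a Random Forest labels each pixel of an RGB-D image into
# the five landscape classes using only its color and depth values. The training
# data here are random stand-ins for hand-labeled pixels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

CLASSES = ["vegetation", "settlement", "soil", "water", "solid_waste"]

def pixel_features(rgb, depth):
    """rgb: (H, W, 3), depth: (H, W) -> (H*W, 4) feature matrix."""
    return np.column_stack([rgb.reshape(-1, 3), depth.reshape(-1, 1)])

# Assumed labeled training pixels (features X_train, integer labels y_train)
rng = np.random.default_rng(3)
X_train = rng.random((500, 4))
y_train = rng.integers(0, len(CLASSES), 500)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

rgb = rng.random((60, 80, 3)); depth = rng.random((60, 80))
label_map = clf.predict(pixel_features(rgb, depth)).reshape(60, 80)
```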

  7. APPLICATION FOR 3D SCENE UNDERSTANDING IN DETECTING DISCHARGE OF DOMESTIC WASTE ALONG COMPLEX URBAN RIVERS

    Directory of Open Access Journals (Sweden)

    Y. Ninsalam

    2016-06-01

    Full Text Available In our study we use 3D scene understanding to detect the discharge of domestic solid waste along an urban river. Solid waste found along the Ciliwung River in the neighbourhoods of Bukit Duri and Kampung Melayu may be attributed to households. This is in part due to inadequate municipal waste infrastructure and services which has caused those living along the river to rely upon it for waste disposal. However, there has been little research to understand the prevalence of household waste along the river. Our aim is to develop a methodology that deploys a low cost sensor to identify point source discharge of solid waste using image classification methods. To demonstrate this we describe the following five-step method: 1) a strip of GoPro images are captured photogrammetrically and processed for dense point cloud generation; 2) depth for each image is generated through a backward projection of the point clouds; 3) a supervised image classification method based on Random Forest classifier is applied on the view dependent red, green, blue and depth (RGB-D) data; 4) point discharge locations of solid waste can then be mapped by projecting the classified images to the 3D point clouds; 5) then the landscape elements are classified into five types, such as vegetation, human settlement, soil, water and solid waste. While this work is still ongoing, the initial results have demonstrated that it is possible to perform quantitative studies that may help reveal and estimate the amount of waste present along the river bank.

  8. Hearing something emotional influences memory for what was just seen: How arousal amplifies effects of competition in memory consolidation.

    Science.gov (United States)

    Ponzio, Allison; Mather, Mara

    2014-12-01

    Enhanced memory for emotional items often comes at the cost of memory for the background scenes. Because emotional foreground items both induce arousal and attract attention, it is not clear whether the emotion effects are simply the result of shifts in visual attention during encoding or whether arousal has effects beyond simple attention capture. In the current study, participants viewed a series of scenes that each either had a foreground object or did not have one, and then, after each image, heard either an emotionally arousing negative sound or a neutral sound. After a 24-hr delay, they returned for a memory test for the objects and scenes. Postencoding arousal decreased recognition memory of scenes shown behind superimposed objects but not memory of scenes shown alone. These findings support the hypothesis that arousal amplifies the effects of competition between mental representations, influencing memory consolidation of currently active representations.

  9. Walk This Way: Improving Pedestrian Agent-Based Models through Scene Activity Analysis

    Directory of Open Access Journals (Sweden)

    Andrew Crooks

    2015-09-01

    Full Text Available Pedestrian movement is woven into the fabric of urban regions. With more people living in cities than ever before, there is an increased need to understand and model how pedestrians utilize and move through space for a variety of applications, ranging from urban planning and architecture to security. Pedestrian modeling has been traditionally faced with the challenge of collecting data to calibrate and validate such models of pedestrian movement. With the increased availability of mobility datasets from video surveillance and enhanced geolocation capabilities in consumer mobile devices we are now presented with the opportunity to change the way we build pedestrian models. Within this paper we explore the potential that such information offers for the improvement of agent-based pedestrian models. We introduce a Scene- and Activity-Aware Agent-Based Model (SA2-ABM), a method for harvesting scene activity information in the form of spatiotemporal trajectories, and incorporate this information into our models. In order to assess and evaluate the improvement offered by such information, we carry out a range of experiments using real-world datasets. We demonstrate that the use of real scene information allows us to better inform our model and enhance its predictive capabilities.

  10. The anatomy of the crime scene

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    …in the way that certain actions and events which have taken place have left a variety of marks and traces which may be read and interpreted. Traces of blood, nails and hair constitute (DNA) codes which can be decrypted and deciphered, in the same way as traces of gunpowder, bullet holes and physical damage… …and interpretation. During her investigation the detective's ability to engage in logical reasoning and deductive thinking, as well as to make use of her imagination, is crucial to how the crime scene is first deconstructed and then reconstructed as a setting for the story (that is, the actions of the crime). By decoding…

  11. Signature modelling and radiometric rendering equations in infrared scene simulation systems

    CSIR Research Space (South Africa)

    Willers, CJ

    2011-09-01

    Full Text Available The development and optimisation of modern infrared systems necessitates the use of simulation systems to create radiometrically realistic representations (e.g. images) of infrared scenes. Such simulation systems are used in signature prediction...

  12. Scenes of Violence and Sex in Recent Award-Winning LGBT-Themed Young Adult Novels and the Ideologies They Offer Their Readers

    Science.gov (United States)

    Clark, Caroline T.; Blackburn, Mollie V.

    2016-01-01

    This study examines LGBT-inclusive and queering discourses in five recent award-winning LGBT-themed young adult books. The analysis brought scenes of violence and sex/love scenes to the fore. Violent scenes offered readers messages that LGBT people are either the victims of violence-fueled hatred and fear, or, in some cases, showed a gay person…

  13. Differential processing of natural scenes in typical and atypical Alzheimer disease measured with a saccade choice task

    Directory of Open Access Journals (Sweden)

    Muriel eBoucart

    2014-07-01

    Full Text Available Though atrophy of the medial temporal lobe, including structures (hippocampus and parahippocampal cortex) that support scene perception and the binding of an object to its context, appears early in Alzheimer disease (AD), few studies have investigated scene perception in people with AD. We assessed the ability to find a target object within a natural scene in people with typical AD and in people with atypical AD (posterior cortical atrophy). Pairs of colored photographs were displayed left and right of fixation for one second. Participants were asked to categorize the target (an animal) either by moving their eyes toward the photograph containing the target (saccadic choice task) or by pressing a key corresponding to the location of the target (manual choice task) in separate blocks of trials. For both tasks performance was compared in two conditions: with isolated objects and with objects in scenes. Patients with atypical AD were more impaired at detecting a target within a scene than people with typical AD, who exhibited a pattern of performance more similar to that of age-matched controls in terms of accuracy, saccade latencies and benefit from contextual information. People with atypical AD benefited less from contextual information in both the saccade and the manual choice tasks, suggesting a higher sensitivity to crowding and deficits in figure/ground segregation in people with lesions in posterior areas of the brain.

  14. Functional Organization of the Parahippocampal Cortex: Dissociable Roles for Context Representations and the Perception of Visual Scenes.

    Science.gov (United States)

    Baumann, Oliver; Mattingley, Jason B

    2016-02-24

    The human parahippocampal cortex has been ascribed central roles in both visuospatial and mnemonic processes. More specifically, evidence suggests that the parahippocampal cortex subserves both the perceptual analysis of scene layouts as well as the retrieval of associative contextual memories. It remains unclear, however, whether these two functional roles can be dissociated within the parahippocampal cortex anatomically. Here, we provide evidence for a dissociation between neural activation patterns associated with visuospatial analysis of scenes and contextual mnemonic processing along the parahippocampal longitudinal axis. We used fMRI to measure parahippocampal responses while participants engaged in a task that required them to judge the contextual relatedness of scene and object pairs, which were presented either as words or pictures. Results from combined factorial and conjunction analyses indicated that the posterior section of parahippocampal cortex is driven predominantly by judgments associated with pictorial scene analysis, whereas its anterior section is more active during contextual judgments regardless of stimulus category (scenes vs objects) or modality (word vs picture). Activation maxima associated with visuospatial and mnemonic processes were spatially segregated, providing support for the existence of functionally distinct subregions along the parahippocampal longitudinal axis and suggesting that, in humans, the parahippocampal cortex serves as a functional interface between perception and memory systems. Copyright © 2016 the authors 0270-6474/16/362536-07$15.00/0.

  15. Wooing-Scenes in “Richard III”: A Parody of Courtliness?

    Directory of Open Access Journals (Sweden)

    Agnieszka Stępkowska

    2009-11-01

    Full Text Available In the famous opening soliloquy of Shakespeare’s Richard III, Richard mightily voices his repugnance to “fair well-spoken days” and their “idle pleasures”. He realizes his physical deformity and believes that it sets him apart from others. He openly admits that he is “not shaped for sportive tricks, nor made to court an amorous looking-glass”. Yet, his monstrosity perhaps consists more in his aggressive masculine exceptionality than in his deformity. Richard’s bullying masculinity manifests itself in his contempt for women. In the wooing scenes we clearly see his pugnacious pursuit of power over effeminate contentment by reducing women to mere objects. Additionally, those scenes are interesting from a psychological viewpoint as they brim over with conflicting emotions. Therefore, the paper explores two wooing encounters of the play, which are among the best examples of effective persuasion and of something we may refer to as ‘the power of eloquence’.

  16. A Low Complexity Discrete Radiosity Method

    OpenAIRE

    Chatelier , Pierre Yves; Malgouyres , Rémy

    2006-01-01

    Rather than using Monte Carlo sampling techniques or patch projections to compute radiosity, it is possible to use a discretization of a scene into voxels and perform some discrete geometry calculus to quickly compute visibility information. In such a framework, the radiosity method may be as precise as a patch-based radiosity using hemicube computation for form-factors, but it lowers the overall theoretical complexity to O(N log N) + O(N), where the O(N) is largel...

  17. A bottom-up model of spatial attention predicts human error patterns in rapid scene recognition.

    Science.gov (United States)

    Einhäuser, Wolfgang; Mundhenk, T Nathan; Baldi, Pierre; Koch, Christof; Itti, Laurent

    2007-07-20

    Humans demonstrate a peculiar ability to detect complex targets in rapidly presented natural scenes. Recent studies suggest that (nearly) no focal attention is required for overall performance in such tasks. Little is known, however, of how detection performance varies from trial to trial and which stages in the processing hierarchy limit performance: bottom-up visual processing (attentional selection and/or recognition) or top-down factors (e.g., decision-making, memory, or alertness fluctuations)? To investigate the relative contribution of these factors, eight human observers performed an animal detection task in natural scenes presented at 20 Hz. Trial-by-trial performance was highly consistent across observers, far exceeding the prediction of independent errors. This consistency demonstrates that performance is not primarily limited by idiosyncratic factors but by visual processing. Two statistical stimulus properties, contrast variation in the target image and the information-theoretical measure of "surprise" in adjacent images, predict performance on a trial-by-trial basis. These measures are tightly related to spatial attention, demonstrating that spatial attention and rapid target detection share common mechanisms. To isolate the causal contribution of the surprise measure, eight additional observers performed the animal detection task in sequences that were reordered versions of those all subjects had correctly recognized in the first experiment. Reordering increased surprise before and/or after the target while keeping the target and distractors themselves unchanged. Surprise enhancement impaired target detection in all observers. Consequently, and contrary to several previously published findings, our results demonstrate that attentional limitations, rather than target recognition alone, affect the detection of targets in rapidly presented visual sequences.
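
    The "surprise" measure referred to above is, in Itti and Baldi's formulation, the KL divergence between an observer's prior and posterior beliefs across successive frames. The sketch below conveys the idea with a much simpler per-pixel Gaussian belief rather than their multi-scale Poisson-Gamma model over attention features; the frame statistics and parameters are invented.

```python
# Simplified per-pixel "surprise": Bayesian update of a Gaussian belief about
# each pixel's intensity across frames, with surprise defined as the KL
# divergence from prior to posterior. This is only a toy variant of the
# information-theoretic surprise measure referenced in the abstract.
import numpy as np

def kl_gauss(mu1, var1, mu0, var0):
    """KL( N(mu1, var1) || N(mu0, var0) ), elementwise."""
    return 0.5 * (np.log(var0 / var1) + (var1 + (mu1 - mu0) ** 2) / var0 - 1.0)

def surprise_sequence(frames, obs_var=25.0, prior_var=100.0):
    mu, var = frames[0].astype(float), np.full(frames[0].shape, prior_var)
    maps = []
    for frame in frames[1:]:
        post_var = 1.0 / (1.0 / var + 1.0 / obs_var)          # conjugate Gaussian update
        post_mu = post_var * (mu / var + frame / obs_var)
        maps.append(kl_gauss(post_mu, post_var, mu, var))      # surprise per pixel
        mu, var = post_mu, post_var
    return maps

rng = np.random.default_rng(4)
frames = [rng.normal(128, 5, (48, 64)) for _ in range(19)]
frames.append(frames[-1] + 40 * (rng.random((48, 64)) < 0.02))   # sudden local change
maps = surprise_sequence(frames)
print(maps[-1].max() / maps[-2].max())     # the changed frame is far more surprising
```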

  18. Quantitative assessment of similarity between randomly acquired characteristics on high quality exemplars and crime scene impressions via analysis of feature size and shape.

    Science.gov (United States)

    Richetelli, Nicole; Nobel, Madonna; Bodziak, William J; Speir, Jacqueline A

    2017-01-01

    Forensic footwear evidence can prove invaluable to the resolution of a criminal investigation. Naturally, the value of a comparison varies with the rarity of the evidence, which is a function of both manufactured as well as randomly acquired characteristics (RACs). When focused specifically on the latter of these two types of features, empirical evidence demonstrates high discriminating power for the differentiation of known match and known non-match samples when presented with exemplars of high quality and exhibiting a sufficient number of clear and complex RACs. However, given the dynamic and unpredictable nature of the media, substrate, and deposition process encountered during the commission of a crime, RACs on crime scene prints are expected to exhibit a large range of variability in terms of reproducibility, clarity, and quality. Although the pattern recognition skill of the expert examiner is adept at recognizing and evaluating this type of natural variation, there is little research to suggest that objective and numerical metrics can globally process this variation when presented with RACs from degraded crime scene quality prints. As such, the goal of this study was to mathematically compare the loss and similarity of RACs in high quality exemplars versus crime-scene-like quality impressions as a function of RAC shape, perimeter, area, and common source. Results indicate that the unpredictable conditions associated with crime scene print production promote RAC loss that varies between 33% and 100% with an average of 85%, and that when the entire outsole is taken as a constellation of features (or a RAC map), 64% of the crime-scene-like impressions exhibited 10 or fewer RACs, resulting in a 0.72 probability of stochastic dominance. Given this, individual RAC description and correspondence were further explored using five simple, but objective, numerical metrics of similarity. Statistically significant differences in similarity scores for RAC shape and size
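
    As a rough illustration of the kind of feature-size-and-shape comparison described above, the hedged sketch below computes area, perimeter and circularity for two binary RAC masks and turns them into relative-difference similarity scores. The five metrics actually used in the study are not specified in the abstract, so everything below is an assumption.

```python
# Hypothetical comparison of two randomly acquired characteristics (RACs)
# represented as binary masks: area, perimeter and a crude shape descriptor
# (circularity), combined into per-feature relative-difference scores.
import numpy as np

def rac_features(mask):
    mask = mask.astype(bool)
    area = int(mask.sum())
    # boundary pixels = mask pixels with at least one background 4-neighbour
    padded = np.pad(mask, 1)
    boundary = mask & ~(padded[:-2, 1:-1] & padded[2:, 1:-1] &
                        padded[1:-1, :-2] & padded[1:-1, 2:])
    perimeter = int(boundary.sum())
    circularity = 4 * np.pi * area / perimeter**2 if perimeter else 0.0
    return {"area": area, "perimeter": perimeter, "circularity": circularity}

def relative_similarity(a, b):
    """1 = identical, 0 = maximally different, per feature."""
    fa, fb = rac_features(a), rac_features(b)
    return {k: 1.0 - abs(fa[k] - fb[k]) / max(fa[k], fb[k], 1e-9) for k in fa}

if __name__ == "__main__":
    exemplar = np.zeros((20, 20), int); exemplar[5:15, 5:15] = 1   # clean RAC from an exemplar
    scene = np.zeros((20, 20), int); scene[6:14, 6:13] = 1         # degraded RAC from a scene print
    print(relative_similarity(exemplar, scene))
```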

  19. Literacy in the contemporary scene

    Directory of Open Access Journals (Sweden)

    Angela B. Kleiman

    2014-11-01

    Full Text Available In this paper I examine the relationship between literacy and contemporaneity. I take as a point of departure for my discussion school literacy and its links with literacies in other institutions of the contemporary scene, in order to determine the relation between contemporary ends of reading and writing (in other words, the meaning of being literate in contemporary society) and the practices and activities effectively realized at school in order to reach those objectives. Using various examples from teaching and learning situations, I discuss digital literacy practices and multimodal texts and multiliteracies from both printed and digital cultures. Throughout, I keep as a background for the discussion the functions and objectives of school literacy and the professional training of teachers who would like to be effective literacy agents in the contemporary world.

  20. The Effect of Walking and Teleportation on Spatial Updating in Virtual and Real Scenes

    Directory of Open Access Journals (Sweden)

    J Vuong

    2013-10-01

    Full Text Available Intuitively, it seems as if we should be able to point accurately to the location of a target object within a room even if we were teleported to a different location and the object removed from view. We measured the precision of pointing to a previously-seen object in a real room, a virtual room with the same dimensions (presented in immersive virtual reality) and a sparse virtual scene consisting only of long thin poles at the same locations as the target object and room corners. Participants viewed the target object from one location, walked to another so that the object passed out of view, then turned in complete darkness to point at the location of the previously-viewed target. In a separate experiment, participants viewed a sparse scene consisting of long thin poles (including a target) and had to point to the location of the absent target after teleportation to a new location within the scene. Pointing precision in this case was dramatically reduced (σ ≈ 34°) compared to the conditions in which participants walked in the real room, virtual room or sparse scene. In the latter three conditions, pointing precision was very similar (σ ≈ 15°) despite the removal of prominent distance cues in the sparse condition. Our results show that spatial updating after teleportation is substantially poorer than when walking between two locations. [Supported by Microsoft Research and Wellcome Trust]

  1. Terrain Simplification Research in Augmented Scene Modeling

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    As one of the most important tasks in augmented scene modeling, terrain simplification research has gained more and more attention. In this paper, we mainly focus on the point selection problem in terrain simplification using a triangulated irregular network. Based on the analysis and comparison of traditional importance measures for each input point, we put forward a new importance measure based on local entropy. The results demonstrate that the local entropy criterion performs better than the traditional methods. In addition, it can effectively overcome the "short-sight" problem associated with the traditional methods.
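
    A hedged sketch of a local-entropy importance measure of the kind described above: for each grid point, histogram the elevations in a surrounding window and use the Shannon entropy as that point's importance, then keep the highest-scoring points as candidates for the TIN. Window size, bin count and the selection ratio are illustrative choices, not the paper's formulation.

```python
# Sketch of a local-entropy importance measure for terrain point selection.
# The importance of each grid point is the Shannon entropy of the elevation
# histogram in a small window around it; the most "important" points are kept.
import numpy as np

def local_entropy(dem, window=3, bins=8):
    h, w = dem.shape
    r = window // 2
    importance = np.zeros_like(dem, dtype=float)
    lo, hi = dem.min(), dem.max()
    for i in range(h):
        for j in range(w):
            patch = dem[max(0, i - r):i + r + 1, max(0, j - r):j + r + 1]
            hist, _ = np.histogram(patch, bins=bins, range=(lo, hi))
            p = hist / hist.sum()
            p = p[p > 0]
            importance[i, j] = -np.sum(p * np.log2(p))
    return importance

def select_points(dem, keep_ratio=0.2, **kwargs):
    """Return (row, col) indices of the most important points for a TIN."""
    imp = local_entropy(dem, **kwargs)
    k = max(1, int(keep_ratio * dem.size))
    idx = np.argsort(imp, axis=None)[::-1][:k]
    return np.column_stack(np.unravel_index(idx, dem.shape))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    dem = np.cumsum(rng.normal(size=(32, 32)), axis=0)   # toy elevation grid
    print(select_points(dem, keep_ratio=0.1)[:5])
```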

  2. Subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia.

    Science.gov (United States)

    Haralanova, Evelina; Haralanov, Svetlozar; Beraldi, Anna; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2012-02-01

    From the clinical practice and some experimental studies, it is apparent that paranoid schizophrenia patients tend to assign emotional salience to neutral social stimuli. This aberrant cognitive bias has been conceptualized to result from increased emotional arousal, but direct empirical data are scarce. The aim of the present study was to quantify the subjective emotional arousal (SEA) evoked by emotionally non-salient (neutral) compared to emotionally salient (negative) social stimuli in schizophrenia patients and healthy controls. Thirty male inpatients with paranoid schizophrenia psychosis and 30 demographically matched healthy controls rated their level of SEA in response to neutral and negative social scenes from the International Affective Picture System and the Munich Affective Picture System. Schizophrenia patients compared to healthy controls had an increased overall SEA level. This relatively higher SEA was evoked only by the neutral but not by the negative social scenes. To our knowledge, the present study is the first designed to directly demonstrate subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia. This finding might explain previous clinical and experimental data and could be viewed as the missing link between the primary neurobiological and secondary psychological mechanisms of paranoid psychotic-symptom formation. Furthermore, despite being very short and easy to perform, the task we used appeared to be sensitive enough to reveal emotional dysregulation, in terms of emotional disinhibition/hyperactivation in paranoid schizophrenia patients. Thus, it could have further research and clinical applications, including as a neurobehavioral probe for imaging studies.

  3. Campaign Documentaries: Behind-the-Scenes Perspectives Make Useful Teaching Tools

    Science.gov (United States)

    Wolfford, David

    2012-01-01

    Over the last 20 years, independent filmmakers have produced insightful documentaries of high-profile political campaigns with behind-the-scenes footage. These documentaries offer inside looks and unique perspectives on electoral politics. This campaign season, consider "The War Room"; "A Perfect Candidate"; "Journeys With George"; "Chisholm '72"; …

  4. Three-dimensional tracking for efficient fire fighting in complex situations

    Science.gov (United States)

    Akhloufi, Moulay; Rossi, Lucile

    2009-05-01

    Each year, hundreds of millions of hectares of forest burn, causing human and economic losses. For efficient fire fighting, the personnel on the ground need tools for predicting fire front propagation. In this work, we present a new technique for automatically tracking fire spread in three-dimensional space. The proposed approach uses a stereo system to extract a 3D shape from fire images. A new segmentation technique is proposed that permits the extraction of fire regions in complex unstructured scenes. It works in the visible spectrum and combines information extracted from the YUV and RGB color spaces. Unlike other techniques, our algorithm does not require prior knowledge about the scene. The resulting fire regions are classified into different homogeneous zones using clustering techniques. Contours are then extracted, and a feature detection algorithm is used to detect interest points such as local maxima and corners. Points extracted from the stereo images are then used to compute the 3D shape of the fire front. The resulting data are used to build the fire volume. The final model is used to compute important spatial and temporal fire characteristics such as spread dynamics, local orientation, and heading direction. Tests conducted on the ground show the efficiency of the proposed scheme. This scheme is being integrated with a mathematical fire spread model in order to predict and anticipate fire behaviour during fire fighting. Also of interest to fire-fighters is the proposed automatic segmentation technique, which can be used for early detection of fire in complex scenes.
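
    A minimal sketch of what a rule-based fire-pixel segmentation combining RGB and YUV information might look like, using OpenCV. The authors' actual rules and thresholds are not given in the abstract, so the conditions and values below are assumptions, as is the input file name.

```python
# Illustrative rule-based fire-pixel segmentation combining RGB and YUV cues.
# Thresholds are made up for demonstration and are not those of the paper.
import cv2
import numpy as np

def segment_fire(bgr_image):
    img = bgr_image.astype(np.int32)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]
    yuv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YUV).astype(np.int32)
    y, u, v = yuv[..., 0], yuv[..., 1], yuv[..., 2]

    # RGB cue: flames tend to be bright and red-dominant (illustrative thresholds)
    rgb_rule = (r > 180) & (r > g) & (g > b)
    # YUV cue: high luminance with V above U (illustrative thresholds)
    yuv_rule = (y > 150) & (v > u)

    mask = (rgb_rule & yuv_rule).astype(np.uint8) * 255
    # remove isolated pixels before extracting fire regions/contours
    kernel = np.ones((5, 5), np.uint8)
    return cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)

if __name__ == "__main__":
    frame = cv2.imread("fire_frame.jpg")          # hypothetical input image
    if frame is not None:
        cv2.imwrite("fire_mask.png", segment_fire(frame))
```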

  5. Binary Format for Scene (BIFS): combining MPEG-4 media to build rich multimedia services

    Science.gov (United States)

    Signes, Julien

    1998-12-01

    In this paper, we analyze the design concepts and some technical details behind the MPEG-4 standard, particularly the scene description layer, commonly known as the Binary Format for Scene (BIFS). We show how MPEG-4 may ease multimedia proliferation by offering a unique, optimized multimedia platform. Lastly, we analyze the potential of the technology for creating rich multimedia applications on various networks and platforms. An e-commerce application example is detailed, highlighting the benefits of the technology. Compression results show how rich applications may be built even on very low bit rate connections.

  6. Complex terrain influences ecosystem carbon responses to temperature and precipitation

    Science.gov (United States)

    Reyes, W. M.; Epstein, H. E.; Li, X.; McGlynn, B. L.; Riveros-Iregui, D. A.; Emanuel, R. E.

    2017-08-01

    Terrestrial ecosystem responses to temperature and precipitation have major implications for the global carbon cycle. Case studies demonstrate that complex terrain, which accounts for more than 50% of Earth's land surface, can affect ecological processes associated with land-atmosphere carbon fluxes. However, no studies have addressed the role of complex terrain in mediating ecophysiological responses of land-atmosphere carbon fluxes to climate variables. We synthesized data from AmeriFlux towers and found that for sites in complex terrain, responses of ecosystem CO2 fluxes to temperature and precipitation are organized according to terrain slope and drainage area, variables associated with water and energy availability. Specifically, we found that for tower sites in complex terrain, mean topographic slope and drainage area surrounding the tower explained between 51% and 78% of site-to-site variation in the response of CO2 fluxes to temperature and precipitation depending on the time scale. We found no such organization among sites in flat terrain, even though their flux responses exhibited similar ranges. These results challenge the prevailing conceptual framework in terrestrial ecosystem modeling, which assumes that CO2 fluxes derive from vertical soil-plant-climate interactions. We conclude that the terrain in which ecosystems are situated can also have important influences on CO2 responses to temperature and precipitation. This work has implications for about 14% of the total land area of the conterminous U.S. This area is considered topographically complex and contributes approximately 15% of gross ecosystem carbon production in the conterminous U.S.

  7. Molecular identification of blow flies recovered from human cadavers during crime scene investigations in Malaysia.

    Science.gov (United States)

    Kavitha, Rajagopal; Nazni, Wasi Ahmad; Tan, Tian Chye; Lee, Han Lim; Isa, Mohd Noor Mat; Azirun, Mohd Sofian

    2012-12-01

    Forensic entomology applies knowledge about insects associated with a decedent in crime scene investigations. It is possible to calculate a minimum postmortem interval (PMI) by determining the age and species of the oldest blow fly larvae feeding on the decedent. This study was conducted in Malaysia to identify maggot specimens collected during crime scene investigations. The usefulness of the molecular and morphological approach in species identification was evaluated in 10 morphologically identified blow fly larvae sampled from 10 different crime scenes in Malaysia. The molecular identification method involved the sequencing of a total length of 2.2 kilobase pairs encompassing the 'barcode' fragments of the mitochondrial cytochrome oxidase I (COI), cytochrome oxidase II (COII) and t-RNA leucine genes. Phylogenetic analyses confirmed the presence of Chrysomya megacephala, Chrysomya rufifacies and Chrysomya nigripes. In addition, one unidentified blow fly species was found based on phylogenetic tree analysis.

  8. Factors affecting athletes' motor behavior after the observation of scenes of cooperation and competition in competitive sport: the effect of sport attitude.

    Science.gov (United States)

    Stefani, Elisa De; De Marco, Doriana; Gentilucci, Maurizio

    2015-01-01

    This study examined how observing sport scenes of cooperation or competition modulated an interactive action in expert athletes, depending on their specific sport attitude. In a kinematic study, athletes were divided into two groups depending on their attitude toward teammates (cooperative or competitive). Participants observed sport scenes of cooperation and competition (basketball, soccer, water polo, volleyball, and rugby) and then they reached for, picked up, and placed an object on the hand of a conspecific (giving action). Mixed-design ANOVAs were carried out on the mean values of reach-to-grasp parameters. Data showed that both the type of scene observed and the athletes' attitude affected reach-to-grasp giving actions. In particular, the cooperative athletes were faster when they observed scenes of cooperation compared to when they observed scenes of competition. Participants were faster when executing a giving action after observing actions of cooperation. This occurred only when they had a cooperative attitude. A match between attitude and intended action seems to be a necessary prerequisite for observing an effect of the observed type of scene on the performed action. It is possible that the observation of scenes of competition activated motor strategies which interfered with the strategies adopted by the cooperative participants to execute a cooperative (giving) sequence.

  9. Influence of the nucleobase on the physicochemical characteristics and biological activities of Sb{sup V}-ribonucleoside complexes

    Energy Technology Data Exchange (ETDEWEB)

    Ferreira, Claudio S.; Demicheli, Cynthia, E-mail: demichel@netuno.lcc.ufmg.b [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Quimica; Rocha, Iara C.M. da; Melo, Maria N. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Parasitologia; Monte Neto, Rubens L.; Frezard, Frederic [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Dept. de Fisiologia e Biofisica

    2010-07-01

    The influence of the nucleobase (uracil, U; cytosine, C; adenine, A; guanine, G) on the physicochemical characteristics and in vitro biological activities of Sb{sup V}-ribonucleoside complexes has been investigated. The 1:1 Sb-U and Sb-C complexes were characterized by NMR spectroscopy, ESI-MS and elemental analysis. The stability constant and the apparent association and dissociation rate constants of 1:1 Sb{sup V}-U, Sb{sup V}-C and Sb{sup V}-A complexes were determined. Although Sb{sup V} most probably binds via oxygen atoms to the same 2' and 3' positions in the different nucleosides, the ribose conformational changes and the physicochemical characteristics of the complex depend on the nucleobase. The nucleobase had a strong influence on the cytotoxicity against macrophages and the antileishmanial activity of the Sb{sup V}-ribonucleoside complexes. The Sb{sup V}-purine complexes were more cytotoxic and more effective against Leishmania chagasi than the Sb{sup V}-pyrimidine complexes, supporting the model that the interaction of Sb{sup V} with purine nucleosides may mediate the antileishmanial activity of pentavalent antimonial drugs. (author)

  10. An investigation into the effect of surveillance drones on textile evidence at crime scenes.

    Science.gov (United States)

    Bucknell, Alistair; Bassindale, Tom

    2017-09-01

    With increasing numbers of police forces using drones for crime scene surveillance, the effect of the drones on trace evidence present needs evaluation. In this investigation, the effects of flying a quadcopter drone at different heights over a controlled scene and of taking off at different distances from the scene were measured. Yarns were placed on a range of floor surfaces, and the number lost or moved from their original position was recorded. It was possible to estimate "safe" heights above and take-off distances from the bath mat (2m and 1m respectively) and the carpet tile (3m and 1m), which were the roughest surfaces. The maximum distances tested, 5m above and 2m away, were not far enough to prevent significant disturbance on the other floor surfaces. This report illustrates the importance of considering the impact of introducing new technologies into a forensic workflow on established forensic evidence prior to implementation. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  11. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    Science.gov (United States)

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task with familiar scenes than with novel ones. In addition, the CA1-DMS networks increased coherence at the γ, but not the θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes compared with novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
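
    The coherence measure mentioned above can be illustrated with Welch-based magnitude-squared coherence between two LFP traces, averaged over a gamma band. The sketch below uses scipy.signal.coherence on synthetic signals; the 30-80 Hz band limits are a common convention rather than necessarily the band used in the study.

```python
# Sketch of gamma-band coherence between two simultaneously recorded LFP
# signals (e.g., CA1 and DMS), using Welch-based magnitude-squared coherence.
import numpy as np
from scipy.signal import coherence

def gamma_coherence(lfp_a, lfp_b, fs, band=(30.0, 80.0)):
    f, cxy = coherence(lfp_a, lfp_b, fs=fs, nperseg=int(fs))
    in_band = (f >= band[0]) & (f <= band[1])
    return float(cxy[in_band].mean())

if __name__ == "__main__":
    fs = 1000.0                                      # 1 kHz sampling, 10 s synthetic traces
    t = np.arange(0, 10, 1 / fs)
    rng = np.random.default_rng(2)
    shared = np.sin(2 * np.pi * 60 * t)              # common 60 Hz gamma component
    ca1 = shared + rng.normal(scale=1.0, size=t.size)
    dms = shared + rng.normal(scale=1.0, size=t.size)
    print(f"mean gamma coherence: {gamma_coherence(ca1, dms, fs):.2f}")
```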

  12. The audience eats more if a movie character keeps eating: An unconscious mechanism for media influence on eating behaviors.

    Science.gov (United States)

    Zhou, Shuo; Shapiro, Michael A; Wansink, Brian

    2017-01-01

    Media's presentation of eating is an important source of influence on viewers' eating goals and behaviors. Drawing on recent research indicating that whether a story character continues to pursue a goal or completes a goal can unconsciously influence an audience member's goals, a scene from a popular movie comedy was manipulated to end with a character continuing to eat (goal ongoing) or completed eating (goal completed). Participants (N = 147) were randomly assigned to a goal status condition. As a reward, after viewing the movie clip viewers were offered two types of snacks: ChexMix and M&M's, in various size portions. Viewers ate more food after watching the characters continue to eat compared to watching the characters complete eating, but only among those manipulated to identify with a character. Viewers were more likely to choose savory food after viewing the ongoing eating scenes, but sweet dessert-like food after viewing the completed eating scenes. The results extend the notion of media influence on unconscious goal contagion and satiation to movie eating, and raise the possibility that completing a goal can activate a logically subsequent goal. Implications for understanding media influence on eating and other health behaviors are discussed. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. The interaction of cognitive load and attention-directing cues in driving.

    Science.gov (United States)

    Lee, Yi-Ching; Lee, John D; Boyle, Linda Ng

    2009-06-01

    This study investigated the effect of a nondriving cognitively loading task on the relationship between drivers' endogenous and exogenous control of attention. Previous studies have shown that cognitive load leads to a withdrawal of attention from the forward scene and a narrowed field of view, which impairs hazard detection. Posner's cue-target paradigm was modified to study how endogenous and exogenous cues interact with cognitive load to influence drivers' attention in a complex dynamic situation. In a driving simulator, pedestrian crossing signs that predicted the spatial location of pedestrians acted as endogenous cues. To impose cognitive load on drivers, we had them perform an auditory task that simulated the demands of emerging in-vehicle technology. Irrelevant exogenous cues were added to half of the experimental drives by including scene clutter. The validity of endogenous cues influenced how drivers scanned for pedestrian targets. Cognitive load delayed drivers' responses, and scene clutter reduced drivers' fixation durations to pedestrians. Cognitive load diminished the influence of exogenous cues to attract attention to irrelevant areas, and drivers were more affected by scene clutter when the endogenous cues were invalid. Cognitive load suppresses interference from irrelevant exogenous cues and delays endogenous orienting of attention in driving. The complexity of everyday tasks, such as driving, is better captured experimentally in paradigms that represent the interactive nature of attention and processing load.

  14. Influence of Hydrophobicity on Polyelectrolyte Complexation

    Energy Technology Data Exchange (ETDEWEB)

    Sadman, Kazi [Department; amp, Engineering, Northwestern University, Evanston, Illinois 60208, United States; Wang, Qifeng [Department; amp, Engineering, Northwestern University, Evanston, Illinois 60208, United States; Chen, Yaoyao [Department; amp, Engineering, Northwestern University, Evanston, Illinois 60208, United States; Keshavarz, Bavand [Department; Jiang, Zhang [X-ray; Shull, Kenneth R. [Department; amp, Engineering, Northwestern University, Evanston, Illinois 60208, United States

    2017-11-16

    Polyelectrolyte complexes are a fascinating class of soft materials that can span the full spectrum of mechanical properties from low viscosity fluids to glassy solids. This spectrum can be accessed by modulating the extent of electrostatic association in these complexes. However, to realize the full potential of polyelectrolyte complexes as functional materials, their molecular level details need to be clearly correlated with their mechanical response. The present work demonstrates that by making simple amendments to the chain architecture it is possible to affect the salt responsiveness of polyelectrolyte complexes in a systematic manner. This is achieved by quaternizing poly(4-vinylpyridine) (QVP) with methyl, ethyl and propyl substituents (thereby increasing the hydrophobicity with increasing side chain length) and complexing them with a common anionic polyelectrolyte, poly(styrene sulfonate). The mechanical behavior of these complexes is compared to the more hydrophilic system of poly(styrene sulfonate) and poly(diallyldimethylammonium) by quantifying the swelling behavior in response to salt stimuli. More hydrophobic complexes are found to be more resistant to doping by salt, yet the mechanical properties of the complex remain contingent on the overall swelling ratio of the complex itself, following near universal swelling-modulus master curves that are quantified in this work. The rheological behavior of QVP complex coacervates is found to be approximately the same, only requiring higher salt concentrations to overcome strong hydrophobic interactions, demonstrating that hydrophobicity can be used as an important parameter for tuning the stability of polyelectrolyte complexes in general, while still preserving the ability to be processed “saloplastically”.

  15. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes.

    Science.gov (United States)

    Yebes, J Javier; Bergasa, Luis M; García-Garrido, Miguel Ángel

    2015-04-20

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.

  16. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Directory of Open Access Journals (Sweden)

    J. Javier Yebes

    2015-04-01

    Full Text Available Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM.
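
    A hedged sketch of how the disparity channel of such 2.5D (color + disparity) features could be obtained from a rectified stereo pair with OpenCV's semi-global matcher. Matcher parameters and file names are illustrative assumptions, and the DPM training pipeline itself is not shown.

```python
# Sketch of producing a disparity channel for 2.5D (color + depth) features
# from a rectified stereo pair, using OpenCV's semi-global block matcher.
import cv2
import numpy as np

def disparity_channel(left_bgr, right_bgr, num_disparities=64, block_size=5):
    left_gray = cv2.cvtColor(left_bgr, cv2.COLOR_BGR2GRAY)
    right_gray = cv2.cvtColor(right_bgr, cv2.COLOR_BGR2GRAY)
    matcher = cv2.StereoSGBM_create(minDisparity=0,
                                    numDisparities=num_disparities,
                                    blockSize=block_size)
    disp = matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0
    disp[disp < 0] = 0                        # invalid matches -> 0
    return disp

if __name__ == "__main__":
    left = cv2.imread("kitti_left.png")       # hypothetical rectified KITTI pair
    right = cv2.imread("kitti_right.png")
    if left is not None and right is not None:
        disp = disparity_channel(left, right)
        features_25d = np.dstack([left, disp])   # stack color + disparity as "2.5D" input
        print(features_25d.shape)
```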

  17. The Power Of The Judicial Assistant/Law Clerk: Looking Behind The Scenes At Courts In The United States, England And Wales, And The Netherlands

    Directory of Open Access Journals (Sweden)

    Nina Holvast

    2016-03-01

    Full Text Available Although largely invisible to the public, behind the scenes, judicial assistants/law clerks frequently play a vital role in the process of adjudication. Yet, especially outside of the U.S., little is known about their role and duties in the judicial decision-making process. This article provides insight into the organization of the employment and the duties of judicial assistants in three different jurisdictions: the U.S., England and Wales, and the Netherlands. In particular, this article aims to gain an understanding of the effects different organizational structures have on the potential influence of assistants on the judicial process and to observe what restrictions are employed to prevent assistants from wielding too much influence.

  18. Perceptual load in different regions of the visual scene and its relevance for driving.

    Science.gov (United States)

    Marciano, Hadas; Yeshurun, Yaffa

    2015-06-01

    The aim of this study was to better understand the role played by perceptual load, at both central and peripheral regions of the visual scene, in driving safety. Attention is a crucial factor in driving safety, and previous laboratory studies suggest that perceptual load is an important factor determining the efficiency of attentional selectivity. Yet, the effects of perceptual load on driving were never studied systematically. Using a driving simulator, we orthogonally manipulated the load levels at the road (central load) and its sides (peripheral load), while occasionally introducing critical events at one of these regions. Perceptual load affected driving performance at both regions of the visual scene. Critically, the effect was different for central versus peripheral load: Whereas load levels on the road mainly affected driving speed, load levels on its sides mainly affected the ability to detect critical events initiating from the roadsides. Moreover, higher levels of peripheral load impaired performance but mainly with low levels of central load, replicating findings with simple letter stimuli. Perceptual load has a considerable effect on driving, but the nature of this effect depends on the region of the visual scene at which the load is introduced. Given the observed importance of perceptual load, authors of future studies of driving safety should take it into account. Specifically, these findings suggest that our understanding of factors that may be relevant for driving safety would benefit from studying these factors under different levels of load at different regions of the visual scene. © 2014, Human Factors and Ergonomics Society.

  19. Video Inpainting of Complex Scenes

    OpenAIRE

    Newson, Alasdair; Almansa, Andrés; Fradet, Matthieu; Gousseau, Yann; Pérez, Patrick

    2015-01-01

    We propose an automatic video inpainting algorithm which relies on the optimisation of a global, patch-based functional. Our algorithm is able to deal with a variety of challenging situations which naturally arise in video inpainting, such as the correct reconstruction of dynamic textures, multiple moving objects and moving background. Furthermore, we achieve this in an order of magnitude less execution time with respect to the state-of-the-art. We are also able to achieve good quality result...
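
    A toy illustration of the basic ingredient of patch-based inpainting: a sum-of-squared-differences distance between space-time patches and a brute-force search for the best-matching known patch. The actual algorithm optimises a global functional with approximate nearest-neighbour search over multiple scales, none of which is reproduced here; all names and sizes below are assumptions.

```python
# Toy SSD patch distance and brute-force nearest-neighbour search over a
# grayscale video volume; illustrative only, not the published algorithm.
import numpy as np

def patch_ssd(a, b):
    d = a.astype(np.float32) - b.astype(np.float32)
    return float(np.sum(d * d))

def best_match(target_patch, video, candidates, size=5):
    """Return the candidate (t, y, x) whose patch is closest to target_patch."""
    best, best_cost = None, np.inf
    for (t, y, x) in candidates:
        patch = video[t, y:y + size, x:x + size]
        cost = patch_ssd(target_patch, patch)
        if cost < best_cost:
            best, best_cost = (t, y, x), cost
    return best, best_cost

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    video = rng.integers(0, 256, size=(10, 64, 64), dtype=np.uint8)  # toy grayscale video
    target = video[0, 10:15, 10:15]                                  # patch to be replaced
    candidates = [(t, y, x) for t in range(1, 10)
                  for y in range(0, 59, 5) for x in range(0, 59, 5)]
    print(best_match(target, video, candidates))
```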

  20. 77 FR 45378 - Guidelines for Cases Requiring On-Scene Death Investigation

    Science.gov (United States)

    2012-07-31

    ... DEPARTMENT OF JUSTICE Office of Justice Programs [OJP (NIJ) Docket No. 1600] Guidelines for Cases... entitled, ``Guidelines for Cases Requiring On-Scene Death Investigation''. The opportunity to provide comments on this document is open to coroner/medical examiner office representatives, law enforcement...