WorldWideScience

Sample records for task scene analyzer

  1. Analyzing crime scene videos

    Science.gov (United States)

    Cunningham, Cindy C.; Peloquin, Tracy D.

    1999-02-01

    Since late 1996 the Forensic Identification Services Section of the Ontario Provincial Police has been actively involved in state-of-the-art image capture and the processing of video images extracted from crime scene videos. The benefits and problems of this technology for video analysis are discussed. All analysis is being conducted on SUN Microsystems UNIX computers, networked to a digital disk recorder that is used for video capture. The primary advantage of this system over traditional frame grabber technology is reviewed. Examples from actual cases are presented and the successes and limitations of this approach are explored. Suggestions to companies implementing security technology plans for various organizations (banks, stores, restaurants, etc.) will be made. Future directions for this work and new technologies are also discussed.

  2. ROBOT TASK SCENE ANALYZER

    International Nuclear Information System (INIS)

    Hamel, William R.; Everett, Steven

    2000-01-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--are existent.

  3. ROBOT TASK SCENE ANALYZER

    Energy Technology Data Exchange (ETDEWEB)

    William R. Hamel; Steven Everett

    2000-08-01

    Environmental restoration and waste management (ER and WM) challenges in the United States Department of Energy (DOE), and around the world, involve radiation or other hazards which will necessitate the use of remote operations to protect human workers from dangerous exposures. Remote operations carry the implication of greater costs since remote work systems are inherently less productive than contact human work due to the inefficiencies/complexities of teleoperation. To reduce costs and improve quality, much attention has been focused on methods to improve the productivity of combined human operator/remote equipment systems; the achievements to date are modest at best. The most promising avenue in the near term is to supplement conventional remote work systems with robotic planning and control techniques borrowed from manufacturing and other domains where robotic automation has been used. Practical combinations of teleoperation and robotic control will yield telerobotic work systems that outperform currently available remote equipment. It is believed that practical telerobotic systems may increase remote work efficiencies significantly. Increases of 30% to 50% have been conservatively estimated for typical remote operations. It is important to recognize that the basic hardware and software features of most modern remote manipulation systems can readily accommodate the functionality required for telerobotics. Further, several of the additional system ingredients necessary to implement telerobotic control--machine vision, 3D object and workspace modeling, automatic tool path generation and collision-free trajectory planning--are existent.

  4. Robot task space analyzer

    International Nuclear Information System (INIS)

    Hamel, W.R.; Osborn, J.

    1997-01-01

    Many nuclear projects such as environmental restoration and waste management challenges involve radiation or other hazards that will necessitate the use of remote operations that protect human workers from dangerous exposures. Remote work is far more costly to execute than what workers could accomplish directly with conventional tools and practices because task operations are slow and tedious due to difficulties of remote manipulation and viewing. Decades of experience within the nuclear remote operations community show that remote tasks may take hundreds of times longer than hands-on work; even with state-of-the-art force-reflecting manipulators and television viewing, remote task execution is five to ten times slower than equivalent direct contact work. Thus the requirement to work remotely is a major cost driver in many projects. Modest improvements in the work efficiency of remote systems can have high payoffs by reducing the completion time of projects. Additional benefits will accrue from improved work quality and enhanced safety.

  5. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    Directory of Open Access Journals (Sweden)

    Quentin Lenoble

    2015-01-01

    Aims: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented with pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of visual global properties of scenes.

  6. Task relevance predicts gaze in videos of real moving scenes.

    Science.gov (United States)

    Howard, Christina J; Gilchrist, Iain D; Troscianko, Tom; Behera, Ardhendu; Hogg, David C

    2011-09-01

    Low-level stimulus salience and task relevance together determine the human fixation priority assigned to scene locations (Fecteau and Munoz in Trends Cogn Sci 10(8):382-390, 2006). However, surprisingly little is known about the contribution of task relevance to eye movements during real-world visual search where stimuli are in constant motion and where the 'target' for the visual search is abstract and semantic in nature. Here, we investigate this issue when participants continuously search an array of four closed-circuit television (CCTV) screens for suspicious events. We recorded eye movements whilst participants watched real CCTV footage and moved a joystick to continuously indicate perceived suspiciousness. We find that when multiple areas of a display compete for attention, gaze is allocated according to relative levels of reported suspiciousness. Furthermore, this measure of task relevance accounted for twice the amount of variance in gaze likelihood as the amount of low-level visual changes over time in the video stimuli.
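The variance comparison reported above amounts to comparing coefficients of determination from two simple regressions of gaze likelihood on each predictor. A minimal sketch with synthetic stand-in data (all variable names and coefficients below are illustrative assumptions, not values from the study):

```python
import numpy as np

def r_squared(predictor, outcome):
    """R^2 of a simple least-squares regression of outcome on predictor."""
    X = np.column_stack([np.ones_like(predictor), predictor])
    beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
    residuals = outcome - X @ beta
    return 1.0 - residuals.var() / outcome.var()

# Synthetic stand-ins: gaze likelihood driven strongly by rated suspiciousness,
# weakly by low-level visual change (coefficients chosen for illustration only).
rng = np.random.default_rng(0)
suspiciousness = rng.normal(size=500)
visual_change = rng.normal(size=500)
gaze = 2.0 * suspiciousness + 0.7 * visual_change + rng.normal(size=500)

print(r_squared(suspiciousness, gaze) > r_squared(visual_change, gaze))  # prints True
```

Comparing the two R^2 values mirrors the paper's claim that reported suspiciousness explained about twice the variance that low-level visual change did.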

  7. Age-related deficits in the mnemonic similarity task for objects and scenes.

    Science.gov (United States)

    Stark, Shauna M; Stark, Craig E L

    2017-08-30

    Using the Mnemonic Similarity Task (MST), we have demonstrated an age-related impairment in lure discrimination, or the ability to recognize an item as distinct from one that was similar, but not identical, to one viewed earlier. A growing body of evidence links these behavioral changes to age-related alterations in the hippocampus. In this study, we sought to evaluate a novel version of this task, utilizing scenes that might emphasize the role of the hippocampus in contextual and spatial processing. In addition, we investigated whether, by utilizing two stimulus classes (scenes and objects), we could also interrogate the roles of the PRC and PHC in aging. Thus, we evaluated differential contributions to these tasks by relating performance on objects versus scenes to volumes of the hippocampus and surrounding medial temporal lobe structures. We found that while there was an age-related impairment on lure discrimination performance for both objects and scenes, relationships to brain volumes and other measures of memory performance were stronger when using objects. In particular, lure discrimination performance for objects showed a positive relationship with the volume of the hippocampus, specifically the combined dentate gyrus (DG) and CA3 subfields, and the subiculum. We conclude that though using scenes was effective in detecting age-related lure discrimination impairments, it does not provide as strong a brain-behavior relationship as using objects. Copyright © 2017 Elsevier B.V. All rights reserved.

  8. Differential processing of natural scenes in typical and atypical Alzheimer disease measured with a saccade choice task

    Directory of Open Access Journals (Sweden)

    Muriel eBoucart

    2014-07-01

    Though atrophy of the medial temporal lobe, including structures (hippocampus and parahippocampal cortex) that support scene perception and the binding of an object to its context, appears early in Alzheimer disease (AD), few studies have investigated scene perception in people with AD. We assessed the ability to find a target object within a natural scene in people with typical AD and in people with atypical AD (posterior cortical atrophy). Pairs of colored photographs were displayed left and right of fixation for one second. Participants were asked to categorize the target (an animal) either by moving their eyes toward the photograph containing the target (saccadic choice task) or by pressing a key corresponding to the location of the target (manual choice task) in separate blocks of trials. For both tasks performance was compared in two conditions: with isolated objects and with objects in scenes. Patients with atypical AD were more impaired at detecting a target within a scene than people with typical AD, who exhibited a pattern of performance more similar to that of age-matched controls in terms of accuracy, saccade latencies and benefit from contextual information. People with atypical AD benefited less from contextual information in both the saccadic and the manual choice tasks, suggesting a higher sensitivity to crowding and deficits in figure/ground segregation in people with lesions in posterior areas of the brain.

  9. Analogical reasoning in children with specific language impairment: Evidence from a scene analogy task.

    Science.gov (United States)

    Krzemien, Magali; Jemel, Boutheina; Maillart, Christelle

    2017-01-01

    Analogical reasoning is a human ability that maps systems of relations. It develops along with relational knowledge, working memory and executive functions such as inhibition. It also maintains a mutual influence on language development. Some authors have taken a greater interest in the analogical reasoning ability of children with language disorders, specifically those with specific language impairment (SLI). These children apparently have weaker analogical reasoning abilities than their age-matched peers without language disorders. Following cognitive theories of language acquisition, this deficit could be one of the causes of language disorders in SLI, especially those concerning productivity. To confirm this deficit and its link to language disorders, we used a scene analogy task to evaluate the analogical performance of children with SLI and compared them to controls of the same age and linguistic abilities. Results show that children with SLI perform worse than age-matched peers, but similarly to language-matched peers. They are also more influenced by increased task difficulty. The association between language disorders and analogical reasoning in SLI can thus be confirmed. The hypothesis of limited processing capacity in SLI is also considered.

  10. Functional relationships between the hippocampus and dorsomedial striatum in learning a visual scene-based memory task in rats.

    Science.gov (United States)

    Delcasso, Sébastien; Huh, Namjung; Byeon, Jung Seop; Lee, Jihyun; Jung, Min Whan; Lee, Inah

    2014-11-19

    The hippocampus is important for contextual behavior, and the striatum plays key roles in decision making. When studying the functional relationships with the hippocampus, prior studies have focused mostly on the dorsolateral striatum (DLS), emphasizing the antagonistic relationships between the hippocampus and DLS in spatial versus response learning. By contrast, the functional relationships between the dorsomedial striatum (DMS) and hippocampus are relatively unknown. The current study reports that lesions to both the hippocampus and DMS profoundly impaired performance of rats in a visual scene-based memory task in which the animals were required to make a choice response by using visual scenes displayed in the background. Analysis of simultaneous recordings of local field potentials revealed that the gamma oscillatory power was higher in the DMS, but not in CA1, when the rat performed the task with familiar scenes than with novel ones. In addition, the CA1-DMS networks increased coherence at γ, but not at θ, rhythm as the rat mastered the task. At the single-unit level, the neuronal populations in CA1 and DMS showed differential firing patterns when responses were made using familiar visual scenes compared with novel ones. Such learning-dependent firing patterns were observed earlier in the DMS than in CA1 before the rat made choice responses. The present findings suggest that both the hippocampus and DMS process memory representations for visual scenes in parallel with different time courses and that flexible choice action using background visual scenes requires coordinated operations of the hippocampus and DMS at γ frequencies. Copyright © 2014 the authors.
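Spectral coherence between two simultaneously recorded signals, as used here for CA1-DMS interactions, is typically estimated by Welch-style segment averaging. A self-contained numpy sketch (the authors' exact analysis pipeline is not specified in the abstract; sampling rate, segment length, and signal construction below are illustrative assumptions):

```python
import numpy as np

def coherence(x, y, fs, nperseg=256):
    """Magnitude-squared coherence via Welch averaging of half-overlapping segments."""
    step = nperseg // 2
    window = np.hanning(nperseg)
    pxx = np.zeros(nperseg // 2 + 1)
    pyy = np.zeros(nperseg // 2 + 1)
    pxy = np.zeros(nperseg // 2 + 1, dtype=complex)
    for start in range(0, len(x) - nperseg + 1, step):
        fx = np.fft.rfft(window * x[start:start + nperseg])
        fy = np.fft.rfft(window * y[start:start + nperseg])
        pxx += np.abs(fx) ** 2            # accumulate auto-spectra
        pyy += np.abs(fy) ** 2
        pxy += fx * np.conj(fy)           # accumulate cross-spectrum
    freqs = np.fft.rfftfreq(nperseg, d=1.0 / fs)
    return freqs, np.abs(pxy) ** 2 / (pxx * pyy)

# Two noisy mock "LFP" traces sharing a 40 Hz (gamma-band) component.
fs = 1000.0
t = np.arange(0, 4, 1 / fs)
rng = np.random.default_rng(1)
shared = np.sin(2 * np.pi * 40 * t)
lfp_ca1 = shared + 0.5 * rng.normal(size=t.size)
lfp_dms = shared + 0.5 * rng.normal(size=t.size)
freqs, cxy = coherence(lfp_ca1, lfp_dms, fs)
```

With this construction the coherence is near 1 at the shared 40 Hz component and near zero at frequencies where only independent noise is present; averaging over many segments is essential, since single-segment coherence is identically 1.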

  11. RockIT: A Graphical Program for Labeling and Analyzing Rock Scenes

    Science.gov (United States)

    Bornstein, B.; Castano, A.; Anderson, R. C.; Castano, R.

    2005-12-01

    We have developed the Rock Identification Toolkit (RockIT), a mature, cross-platform, graphical program designed to help geologists rapidly and accurately label rocks and particles in images. As images are labeled, RockIT reports both individual rock (or particle) statistics and overall scene statistics. Basic statistics include 2D rock (image) area and average albedo. A more involved set of statistics uses a direct least squares technique to fit a rock trace to an ellipse and report the semimajor and semiminor axes, orientation, eccentricity, and quality of fit. When range data (e.g. derived from a stereo image pair) is available, RockIT provides both 3D rock size and overall scene area estimates. All statistics may be manipulated interactively and later exported to plain, tab-delimited text for easy import into tools like Excel or Matlab for further sophisticated analysis, summarization, and visualization. RockIT can read and display both image and, when available, corresponding range data. Although it supports several popular graphics file formats (e.g. JPEG, TIFF), our focus has been on more domain-specific file formats, particularly the MER Planetary Data System (PDS) formats. Images, once displayed, may be enhanced continually, in real time, throughout the analysis process. Examples of image enhancement include increasing or decreasing brightness and contrast, and performing simple min/max intensity normalizations or more complicated histogram equalizations. Several scientists and students at JPL, NASA, Cornell, and PSI use RockIT to identify and characterize rock shape, size, and distribution in MER microscopic imager and panoramic images. Recently, Golombek et al. (2005) used RockIT to compare rock size distributions and calculate cumulative fractional area coverage at several locations along the Spirit traverse. We have a laptop computer running RockIT to demonstrate its capabilities and allow people to experiment on their own.
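The ellipse-fitting step described above can be sketched with a plain algebraic least-squares conic fit (a simplified stand-in for the direct least-squares method the abstract refers to, e.g. the Fitzgibbon-style constrained fit); the derived quantities match those RockIT reports. The synthetic "rock trace" below is illustrative:

```python
import numpy as np

def fit_ellipse(x, y):
    """Fit the conic Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0 by least squares
    (unit-norm coefficient vector), then convert to geometric parameters."""
    design = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(design)
    A, B, C, D, E, F = vt[-1]                       # smallest singular vector
    # Center: where the conic's gradient vanishes.
    x0, y0 = np.linalg.solve([[2 * A, B], [B, 2 * C]], [-D, -E])
    # Conic value at the center sets the scale of the semi-axes.
    f_c = A * x0**2 + B * x0 * y0 + C * y0**2 + D * x0 + E * y0 + F
    lam, vec = np.linalg.eigh([[A, B / 2], [B / 2, C]])
    axes = np.sqrt(-f_c / lam)                      # semi-axis lengths
    major, minor = (0, 1) if axes[0] >= axes[1] else (1, 0)
    theta = np.arctan2(vec[1, major], vec[0, major])  # major-axis orientation
    ecc = np.sqrt(1 - (axes[minor] / axes[major]) ** 2)
    return (x0, y0), axes[major], axes[minor], theta, ecc

# Synthetic rock trace: ellipse centered at (2, 1), semi-axes 3 and 1.5,
# rotated 30 degrees (values chosen for illustration).
t = np.linspace(0, 2 * np.pi, 100, endpoint=False)
c30, s30 = np.cos(np.radians(30)), np.sin(np.radians(30))
xs = 2 + 3 * np.cos(t) * c30 - 1.5 * np.sin(t) * s30
ys = 1 + 3 * np.cos(t) * s30 + 1.5 * np.sin(t) * c30
center, semi_major, semi_minor, theta, ecc = fit_ellipse(xs, ys)
```

On clean boundary points the fit recovers the generating parameters exactly; on noisy rock traces the residual of the solve doubles as the "quality of fit" measure mentioned above.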

  12. Natural scene recognition with increasing time-on-task: the role of typicality and global image properties.

    Science.gov (United States)

    Csathó, Á; van der Linden, D; Gács, B

    2015-01-01

    Human observers can recognize natural images very effectively. Yet, in the literature there is a debate about the extent to which the recognition of natural images requires controlled attentional processing. In the present study we address this topic by testing whether natural scene recognition is affected by mental fatigue. Mental fatigue is known to particularly compromise high-level, controlled attentional processing of local features. Effortless, automatic processing of more global features of an image stays relatively intact, however. We conducted a natural image categorization experiment (N = 20) in which mental fatigue was induced by time-on-task (ToT). Stimuli were images from 5 natural scene categories. Semantic typicality (high or low) and the magnitude of 7 global image properties were determined for each image in separate rating experiments. Significant performance effects of typicality and global properties on scene recognition were found, but, despite a general decline in performance, these effects remained unchanged with increasing ToT. The findings support the importance of global property processing in natural scene recognition and suggest that this process is insensitive to mental fatigue.

  13. Analyzing Tasks to Promote Equity in Secondary Mathematics Teacher Education

    Science.gov (United States)

    Mintos, Alexia

    2017-01-01

    The purpose of this study is to understand characteristics and outcomes of instructional tasks used to support preservice secondary mathematics teachers (PSMTs) in learning about equity in secondary mathematics methods courses. This study focuses on five instructional tasks from four purposefully chosen teacher education programs. These activities…

  14. Rapid Gist Perception of Meaningful Real-Life Scenes: Exploring Individual and Gender Differences in Multiple Categorization Tasks

    Directory of Open Access Journals (Sweden)

    Steven Vanmarcke

    2015-02-01

    In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly, even with 20 ms exposures. Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle, yet consistent gender differences (women faster than men).

  15. Analyzing difficulties with mole-concept tasks by using familiar analog tasks

    Science.gov (United States)

    Gabel, Dorothy; Sherwood, Robert D.

    This study was conducted to determine which skills and concepts students have that are prerequisites for solving moles problems through the use of analog tasks. Two analogous tests with four forms of each were prepared that corresponded to a conventional moles test. The analogs used were oranges and granules of sugar. Slight variations between test items on various forms permitted comparisons that would indicate specific conceptual and mathematical difficulties that students might have in solving moles problems. Different forms of the two tests were randomly assigned to 332 high school chemistry students of five teachers in four schools in central Indiana. Comparisons of total test score, subtest scores, and the number of students answering an item correctly using appropriate t-test and chi square tests resulted in the following conclusions: (1) the size of the object makes no difference in the problem difficulty; (2) students understand the concepts of mass, volume, and particles equally well; (3) problems requiring two steps are harder than those requiring one step; (4) problems involving scientific notation are more difficult than those that do not; (5) problems involving the multiplication concept are easier than those involving the division concept; (6) problems involving the collective word "bag" are easier to solve than those using the word "billion"; (7) the use of the word "a(n)" makes the problem more difficult than using the number 1.

  16. From geometry to algebra and vice versa: Realistic mathematics education principles for analyzing geometry tasks

    Science.gov (United States)

    Jupri, Al

    2017-04-01

    In this article we address how Realistic Mathematics Education (RME) principles, including the intertwinement and the reality principles, are used to analyze geometry tasks. To do so, we carried out three phases of a small-scale study. First, we analyzed four geometry problems - considered as tasks inviting the use of problem solving and reasoning skills - theoretically in the light of the RME principles. Second, we gave two of the problems to 31 undergraduate students of a mathematics education program and the other two problems to 16 master students of a primary mathematics education program. Finally, we analyzed the students' written work and compared these empirical results with the theoretical ones. We found that there are discrepancies between what we expected theoretically and what occurred empirically in terms of mathematization and of the intertwinement of mathematical concepts from geometry to algebra and vice versa. We conclude that the RME principles provide a fruitful framework for analyzing geometry tasks that, for instance, are intended for assessing student problem solving and reasoning skills.

  17. Fostering a student's skill for analyzing test items through an authentic task

    Science.gov (United States)

    Setiawan, Beni; Sabtiawan, Wahyu Budi

    2017-08-01

    Analyzing test items is a skill that must be mastered by prospective teachers in order to determine the quality of the test questions they have written. The main aim of this research was to describe the effectiveness of an authentic task in fostering students' skill at analyzing test items, covering validity, reliability, item discrimination index, level of difficulty, and distractor functioning. The participants were students of the science education study program, science and mathematics faculty, Universitas Negeri Surabaya, enrolled in the assessment course. The research design was a one-group posttest design. The treatment was an authentic task in which the students developed test items and then analyzed the items like professional assessors using Microsoft Excel and Anates software. The data obtained were analyzed descriptively: the students' skill levels were presented and then related to theories and previous empirical studies. The research showed that the task helped the students acquire the skills. Thirty-one students got a perfect score for the analysis, five students achieved 97% mastery, two students had 92% mastery, and two others achieved 89% and 79% mastery. The implication of the finding is that when students are given authentic tasks that require them to perform like professionals, they are more likely to achieve professional skills by the end of the learning process.

  18. The Communication of Culturally Dominant Modes of Attention from Parents to Children: A Comparison of Canadian and Japanese Parent-Child Conversations during a Joint Scene Description Task.

    Science.gov (United States)

    Senzaki, Sawa; Masuda, Takahiko; Takada, Akira; Okada, Hiroyuki

    2016-01-01

    Previous findings have indicated that, when presented with visual information, North American undergraduate students selectively attend to focal objects, whereas East Asian undergraduate students are more sensitive to background information. However, little is known about how these differences are driven by culture and socialization processes. In this study, two experiments investigated how young children and their parents used culturally unique modes of attention (selective vs. context sensitive attention). We expected that children would slowly learn culturally unique modes of attention, and the experience of communicating with their parents would aid the development of such modes of attention. Study 1 tested children's solitary performance by examining Canadian and Japanese children's (4-6 vs. 7-9 years old) modes of attention during a scene description task, whereby children watched short animations by themselves and then described their observations. The results confirmed that children did not demonstrate significant cross-cultural differences in attention during the scene description task while working independently, although results did show rudimentary signs of culturally unique modes of attention in this task scenario by age 9. Study 2 examined parent-child (4-6 and 7-9 years old) dyads using the same task. The results indicated that parents communicated to their children differently across cultures, replicating attentional differences among undergraduate students in previous cross-cultural studies. Study 2 also demonstrated that children's culturally unique description styles increased significantly with age. The descriptions made by the older group (7-9 years old) showed significant cross-cultural variances in attention, while descriptions among the younger group (4-6 years old) did not. The significance of parental roles in the development of culturally unique modes of attention is discussed in addition to other possible facilitators of this

  19. The Communication of Culturally Dominant Modes of Attention from Parents to Children: A Comparison of Canadian and Japanese Parent-Child Conversations during a Joint Scene Description Task.

    Directory of Open Access Journals (Sweden)

    Sawa Senzaki

    Previous findings have indicated that, when presented with visual information, North American undergraduate students selectively attend to focal objects, whereas East Asian undergraduate students are more sensitive to background information. However, little is known about how these differences are driven by culture and socialization processes. In this study, two experiments investigated how young children and their parents used culturally unique modes of attention (selective vs. context sensitive attention). We expected that children would slowly learn culturally unique modes of attention, and the experience of communicating with their parents would aid the development of such modes of attention. Study 1 tested children's solitary performance by examining Canadian and Japanese children's (4-6 vs. 7-9 years old) modes of attention during a scene description task, whereby children watched short animations by themselves and then described their observations. The results confirmed that children did not demonstrate significant cross-cultural differences in attention during the scene description task while working independently, although results did show rudimentary signs of culturally unique modes of attention in this task scenario by age 9. Study 2 examined parent-child (4-6 and 7-9 years old) dyads using the same task. The results indicated that parents communicated to their children differently across cultures, replicating attentional differences among undergraduate students in previous cross-cultural studies. Study 2 also demonstrated that children's culturally unique description styles increased significantly with age. The descriptions made by the older group (7-9 years old) showed significant cross-cultural variances in attention, while descriptions among the younger group (4-6 years old) did not. The significance of parental roles in the development of culturally unique modes of attention is discussed in addition to other possible facilitators of

  20. To search or to like: Mapping fixations to differentiate two forms of incidental scene memory.

    Science.gov (United States)

    Choe, Kyoung Whan; Kardan, Omid; Kotabe, Hiroki P; Henderson, John M; Berman, Marc G

    2017-10-01

    We employed eye-tracking to investigate how performing different tasks on scenes (e.g., intentionally memorizing them, searching for an object, evaluating aesthetic preference) can affect eye movements during encoding and subsequent scene memory. We found that scene memorability decreased after visual search (one incidental encoding task) compared to intentional memorization, and that preference evaluation (another incidental encoding task) produced better memory, similar to the incidental memory boost previously observed for words and faces. By analyzing fixation maps, we found that although fixation map similarity could explain how eye movements during visual search impair incidental scene memory, it could not explain the incidental memory boost from aesthetic preference evaluation, implying that implicit mechanisms were at play. We conclude that not all incidental encoding tasks should be taken to be similar, as different mechanisms (e.g., explicit or implicit) lead to memory enhancements or decrements for different incidental encoding tasks.
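A fixation map of the kind compared here is typically a Gaussian-smoothed density of fixation locations, and map similarity a correlation between two such maps. A minimal numpy sketch (image size, smoothing width, and the fixation coordinates are illustrative assumptions, not data from the study):

```python
import numpy as np

def fixation_map(fixations, shape, sigma=10.0):
    """Smoothed fixation density: a sum of Gaussians at (x, y) fixation points."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    density = np.zeros(shape)
    for fx, fy in fixations:
        density += np.exp(-((xx - fx) ** 2 + (yy - fy) ** 2) / (2 * sigma ** 2))
    return density

def map_similarity(m1, m2):
    """Pearson correlation between two fixation maps."""
    a = m1.ravel() - m1.mean()
    b = m2.ravel() - m2.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

shape = (100, 100)
search_fix = [(20, 20), (80, 80)]   # e.g. fixations from a search trial
prefer_fix = [(20, 20), (75, 85)]   # e.g. fixations from a preference trial
sim = map_similarity(fixation_map(search_fix, shape),
                     fixation_map(prefer_fix, shape))
```

Identical fixation sets yield a similarity of 1; the paper's point is that such spatial overlap accounts for the search-related memory decrement but not for the preference-related boost.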

  1. A Comparison of the Visual Attention Patterns of People with Aphasia and Adults without Neurological Conditions for Camera-Engaged and Task-Engaged Visual Scenes

    Science.gov (United States)

    Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria

    2016-01-01

    Purpose: The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Method: Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological…

  2. Research in interactive scene analysis

    Science.gov (United States)

    Tenenbaum, J. M.; Barrow, H. G.; Weyl, S. A.

    1976-01-01

    Cooperative (man-machine) scene analysis techniques were developed whereby humans can provide a computer with guidance when completely automated processing is infeasible. An interactive approach promises significant near-term payoffs in analyzing various types of high volume satellite imagery, as well as vehicle-based imagery used in robot planetary exploration. This report summarizes the work accomplished over the duration of the project and describes in detail three major accomplishments: (1) the interactive design of texture classifiers; (2) a new approach for integrating the segmentation and interpretation phases of scene analysis; and (3) the application of interactive scene analysis techniques to cartography.

  3. The anatomy of the crime scene

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

...have concluded their work collecting, preserving and cataloguing various traces like fingerprints, bloodstains and so on. But the analysis will also show similarities between real-life and fictional crime scene investigations: in real-life practice, too, reconstruction and interpretation of the crime scene is conducted by investigators (crime scene coordinators) whose task it is to decide how the investigation should be carried out; this is best described as a narrative practice, a systematic and expertise-based work of imagination...

  4. Analyzing task-based user study data to determine colormap efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Ashton, Zoe Charon Maria [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wendelberger, Joanne Roth [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Ticknor, Lawrence O. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Turton, Terece [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Samsel, Francesca [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-07-23

Domain scientists need colormaps to visualize their data; colormaps are especially useful for identifying areas of interest, such as eddies or current structure in ocean data. However, the traditional rainbow colormap performs poorly for conveying detail because of its small perceptual range. To assist domain scientists in recognizing and identifying important details in their data, different colormaps with higher perceptual definition need to be applied. Visual artist Francesca Samsel used her understanding of color theory to create new colormaps with improved perception. While domain scientists find the new colormaps useful, we implemented a rigorous, quantitative study to determine whether the new colormaps have perceptually more colors. Color count data from one of these studies will be analyzed in depth to determine whether the new colormaps have more perceivable colors and what affects the number of perceivable colors.

  5. Analyze the beta waves of electroencephalogram signals from young musicians and non-musicians in major scale working memory task.

    Science.gov (United States)

    Hsu, Chien-Chang; Cheng, Ching-Wen; Chiu, Yi-Shiuan

    2017-02-15

Electroencephalograms can record wave variations in any brain activity. Beta waves are produced when an external stimulus induces logical thinking, computation, and reasoning during consciousness. This work uses beta waves recorded during major-scale working-memory N-back tasks to analyze the differences between young musicians and non-musicians. After feature analysis using signal filtering, Hilbert-Huang transformation, and feature extraction identifies differences, the k-means clustering algorithm is used to group them into clusters. The results of the feature analysis showed that beta waves differ significantly between young musicians and non-musicians from the low memory load of the working memory task onward. Copyright © 2017 Elsevier B.V. All rights reserved.
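A simplified version of this pipeline can be sketched in a few lines: band-pass a signal to the beta band (13-30 Hz), take the Hilbert envelope as a feature, and cluster subjects with k-means. Note this substitutes a plain Hilbert envelope for the full Hilbert-Huang (EMD-based) decomposition, and the sampling rate, filter order, and synthetic signals are assumptions for illustration.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert
from sklearn.cluster import KMeans

def beta_band_envelope(eeg, fs=256.0, low=13.0, high=30.0):
    """Band-pass an EEG trace to the beta band and return its Hilbert envelope."""
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    beta = filtfilt(b, a, eeg)
    return np.abs(hilbert(beta))

rng = np.random.default_rng(0)
fs = 256.0
t = np.arange(0, 4, 1 / fs)
# Synthetic "subjects": half with strong 20 Hz (beta) power, half without.
trials = [np.sin(2 * np.pi * 20 * t) * amp + rng.normal(0, 0.5, t.size)
          for amp in [2.0] * 5 + [0.2] * 5]
features = np.array([[beta_band_envelope(x, fs).mean()] for x in trials])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
```

With clearly separated beta power, k-means recovers the two groups from the envelope feature alone.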

  6. Getting behind the Scenes of Fleetwood Mac's "Rumours": Using a Documentary on the Making of a Music Album to Learn about Task Groups

    Science.gov (United States)

    Comer, Debra R.; Holbrook, Robert L., Jr.

    2012-01-01

    The authors present an efficient and easy-to-implement experiential exercise that reinforces for students key concepts about task groups (i.e., group cohesiveness, conflict within groups, group effectiveness, group norms, and group roles). The exercise, which uses a documentary about the making of Fleetwood Mac's "Rumours" album to demonstrate the…

  7. Multimodal computational attention for scene understanding and robotics

    CERN Document Server

    Schauerte, Boris

    2016-01-01

This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms and discusses various applications. In the first part new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration of a robot. In the second part the influence of top-down cues for attention modeling is investigated.

  8. Underwater Scene Composition

    Science.gov (United States)

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  9. Analyzing gait variability and dual-task interference in patients with Parkinson's disease and freezing by means of the word-color Stroop test.

    Science.gov (United States)

    Kleiner, Ana Francisca Rozin; Pagnussat, Aline S; Prisco, Giulia di; Vagnini, Alessandro; Stocchi, Fabrizio; De Pandis, Maria Francesca; Galli, Manuela

    2017-12-02

The ability to carry out two tasks at once is critical to effective functioning in the real world; deficits are termed dual-task interference or dual-task effect (DTE). DTE substantially compromises the gait of subjects with Parkinson's disease and freezing of gait (PD + FOG), leading to exaggerated slowing, increased gait dysrhythmicity, and induced FOG episodes. This study aimed to investigate the DTE on gait variability in subjects with PD + FOG. Thirty-three patients with PD + FOG and 14 healthy individuals (REFERENCE) took part in this study. Two gait conditions were analyzed: usual walking (single task) and walking while taking the word-color Stroop test (dual task). The computed variables were gait velocity, step length, step timing, gait asymmetry, variability measures, and the DTE of each variable. The PD + FOG group presented negative DTE values for all analyzed variables, indicating a dual-task cost. The REFERENCE group presented dual-task benefits for step length standard deviation and step time. Differences between the groups and conditions were found for all variables except step time. Taking the word-color Stroop test while walking led to a larger dual-task cost in subjects with PD + FOG.
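The dual-task effect per variable is commonly computed as the percent change from single- to dual-task performance, with the sign arranged so that negative values indicate a cost. Sign conventions vary by study, so the function below is a generic sketch rather than the authors' exact formula, and the velocity values are hypothetical.

```python
def dual_task_effect(single, dual, higher_is_better=True):
    """Percent change from single- to dual-task performance.

    With higher_is_better=True (e.g., gait velocity), a negative result
    is a dual-task cost; the sign is flipped for variables where lower
    values are better (e.g., variability measures)."""
    change = (dual - single) / single * 100.0
    return change if higher_is_better else -change

# Hypothetical gait velocities (m/s): walking slows under the Stroop dual task.
dte_velocity = dual_task_effect(1.20, 0.96)  # negative, i.e., a cost
```

Computed this way, a subject slowing from 1.20 m/s to 0.96 m/s shows a -20% DTE for velocity.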

  11. The time course of natural scene perception with reduced attention.

    Science.gov (United States)

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention. Copyright © 2016 the American Physiological Society.

  12. Predicting the Valence of a Scene from Observers' Eye Movements.

    Science.gov (United States)

    R-Tavakoli, Hamed; Atyabi, Adham; Rantanen, Antti; Laukka, Seppo J; Nefti-Meziani, Samia; Heikkilä, Janne

    2015-01-01

Multimedia analysis benefits from understanding the emotional content of a scene in a variety of tasks such as video genre classification and content-based image retrieval. Recently, there has been an increasing interest in applying human bio-signals, particularly eye movements, to recognize the emotional gist of a scene such as its valence. In order to determine the emotional category of images using eye movements, the existing methods often learn a classifier using several features that are extracted from eye movements. Although it has been shown that eye movement is potentially useful for recognition of scene valence, the contribution of each feature is not well-studied. To address the issue, we study the contribution of features extracted from eye movements in the classification of images into pleasant, neutral, and unpleasant categories. We assess ten features and their fusion. The features are histogram of saccade orientation, histogram of saccade slope, histogram of saccade length, histogram of saccade duration, histogram of saccade velocity, histogram of fixation duration, fixation histogram, top-ten salient coordinates, and saliency map. We utilize a machine learning approach to analyze the performance of features by learning a support vector machine and exploiting various feature fusion schemes. The experiments reveal that 'saliency map', 'fixation histogram', 'histogram of fixation duration', and 'histogram of saccade slope' are the most contributing features. The selected features signify the influence of fixation information and angular behavior of eye movements in the recognition of the valence of images.
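The general recipe in this line of work, training a support vector machine on fused eye-movement histogram features, can be sketched as follows. The feature definitions, distributions, and parameters here are invented for illustration; they are not the study's actual data or settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

def saccade_features(amplitudes, durations, bins=8):
    """Early fusion: concatenate normalized histograms of two eye-movement cues."""
    h1, _ = np.histogram(amplitudes, bins=bins, range=(0, 20), density=True)
    h2, _ = np.histogram(durations, bins=bins, range=(0, 400), density=True)
    return np.concatenate([h1, h2])

# Synthetic viewers of two valence categories with different saccade statistics.
X = [saccade_features(rng.gamma(2 + 4 * label, 2, 50),
                      rng.normal(150 + 80 * label, 30, 50))
     for label in (0, 1) for _ in range(20)]
y = [0] * 20 + [1] * 20
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X, y)
```

Histogram concatenation is the simplest fusion scheme; per-feature classifiers combined by voting would be a late-fusion alternative.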

  13. Scene change detection based on multimodal integration

    Science.gov (United States)

    Zhu, Yingying; Zhou, Dongru

    2003-09-01

Scene change detection is an essential step to automatic and content-based video indexing, retrieval and browsing. In this paper, a robust scene change detection and classification approach is presented, which analyzes audio, visual and textual sources and accounts for their inter-relations and coincidence to semantically identify and classify video scenes. Audio analysis focuses on the segmentation of the audio stream into four types of semantic data: silence, speech, music and environmental sound. Further processing on speech segments aims at locating speaker changes. Video analysis partitions the visual stream into shots. Text analysis can provide a supplemental source of clues for scene classification and indexing information. We integrate the video and audio analysis results to identify video scenes and use the text information detected by video OCR technology or derived from available transcripts to refine scene classification. Results from single-source segmentation are in some cases suboptimal. By combining visual and aural features with the supplemental text information, the scene extraction accuracy is enhanced, and more semantic segmentations are developed. Experimental results are rather promising.
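The shot-partitioning step of the visual analysis is often implemented as a histogram-difference cut detector. The minimal sketch below flags a cut wherever consecutive frame histograms diverge strongly; the frame sizes, bin count, and threshold are arbitrary assumptions, not values from the paper.

```python
import numpy as np

def shot_boundaries(frames, bins=16, threshold=0.5):
    """Flag a cut where the total-variation distance between consecutive
    frame intensity histograms exceeds `threshold`."""
    hists = [np.histogram(f, bins=bins, range=(0.0, 1.0))[0] / f.size
             for f in frames]
    return [i for i in range(1, len(hists))
            if 0.5 * np.abs(hists[i] - hists[i - 1]).sum() > threshold]

rng = np.random.default_rng(0)
# Five dark frames followed by five bright frames: one cut, at index 5.
frames = [rng.uniform(0.0, 0.3, (8, 8)) for _ in range(5)] + \
         [rng.uniform(0.7, 1.0, (8, 8)) for _ in range(5)]
cuts = shot_boundaries(frames)
```

A multimodal detector would then cross-check candidate cuts like these against audio segment boundaries before declaring a scene change.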

  14. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Then, based on the proposed feature, a scene recognition method of the Bag-of-Words model for aerial imaging is designed. The proposed superpixel-based feature, which utilizes landform information, spans top-level superpixel extraction of landforms down to bottom-level expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering-based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments of image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
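Simple linear iterative clustering (SLIC) groups pixels in a joint color-spatial space. A toy version of that idea, using plain k-means in place of SLIC's locally constrained iterations and not the paper's implementation, can be sketched as:

```python
import numpy as np
from sklearn.cluster import KMeans

def simple_superpixels(image, n_segments=4, compactness=0.1):
    """Toy SLIC-style segmentation: k-means on (intensity, x, y) features.
    `compactness` weights the spatial coordinates against intensity."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    feats = np.stack([image.ravel(),
                      compactness * xs.ravel() / w,
                      compactness * ys.ravel() / h], axis=1)
    labels = KMeans(n_clusters=n_segments, n_init=10,
                    random_state=0).fit_predict(feats)
    return labels.reshape(h, w)

# Synthetic "landform" image: two flat regions of different brightness.
img = np.zeros((20, 20))
img[:, 10:] = 1.0
seg = simple_superpixels(img, n_segments=2)
```

Each resulting segment could then be summarized into a feature vector, which is the role the superpixels play in the Bag-of-Words pipeline above.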

  15. Binding actions and scenes in visual long-term memory.

    Science.gov (United States)

    Urgolites, Zhisen Jiang; Wood, Justin N

    2013-12-01

    How does visual long-term memory store representations of different entities (e.g., objects, actions, and scenes) that are present in the same visual event? Are the different entities stored as an integrated representation in memory, or are they stored separately? To address this question, we asked observers to view a large number of events; in each event, an action was performed within a scene. Afterward, the participants were shown pairs of action-scene sets and indicated which of the two they had seen. When the task required recognizing the individual actions and scenes, performance was high (80%). Conversely, when the task required remembering which actions had occurred within which scenes, performance was significantly lower (59%). We observed this dissociation between memory for individual entities and memory for entity bindings across multiple testing conditions and presentation durations. These experiments indicate that visual long-term memory stores information about actions and information about scenes separately from one another, even when an action and scene were observed together in the same visual event. These findings also highlight an important limitation of human memory: Situations that require remembering actions and scenes as integrated events (e.g., eyewitness testimony) may be particularly vulnerable to memory errors.

  16. Turning an Urban Scene Video into a Cinemagraph

    OpenAIRE

    Yan, Hang; Liu, Yebin; Furukawa, Yasutaka

    2016-01-01

    This paper proposes an algorithm that turns a regular video capturing urban scenes into a high-quality endless animation, known as a Cinemagraph. The creation of a Cinemagraph usually requires a static camera in a carefully configured scene. The task becomes challenging for a regular video with a moving camera and objects. Our approach first warps an input video into the viewpoint of a reference camera. Based on the warped video, we propose effective temporal analysis algorithms to detect reg...

  17. Hydrological AnthropoScenes

    Science.gov (United States)

    Cudennec, Christophe

    2016-04-01

The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition currently under debate. The emergence of multi-scale and proteiform complexity requires inter-disciplinary and systems approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view and grounded at regional and local levels. A systems approach should make it possible to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, whether with water itself being a resource, hazard or transport force, or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasting hydrological AnthropoScenes in Tunisia, France and Iceland, and reframe therein concepts of the hydrological change debate.

  18. Semantic Reasoning for Scene Interpretation

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Baseski, Emre; Pugeault, Nicolas

    2008-01-01

In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable framework for this representation and demonstrate...

  19. Big visual data analysis scene classification and geometric labeling

    CERN Document Server

    Chen, Chen; Kuo, C -C Jay

    2016-01-01

    This book offers an overview of traditional big visual data analysis approaches and provides state-of-the-art solutions for several scene comprehension problems, indoor/outdoor classification, outdoor scene classification, and outdoor scene layout estimation. It is illustrated with numerous natural and synthetic color images, and extensive statistical analysis is provided to help readers visualize big visual data distribution and the associated problems. Although there has been some research on big visual data analysis, little work has been published on big image data distribution analysis using the modern statistical approach described in this book. By presenting a complete methodology on big visual data analysis with three illustrative scene comprehension problems, it provides a generic framework that can be applied to other big visual data analysis tasks.

  20. Setting the scene

    International Nuclear Information System (INIS)

    Curran, S.

    1977-01-01

The reasons for the special meeting on the breeder reactor are outlined with some reference to the special Scottish interest in the topic. Approximately 30% of the electrical energy generated in Scotland is nuclear, and the special developments at Dounreay make policy decisions on the future of the commercial breeder reactor urgent. The participants review the major questions that arise in reaching such decisions. In effect, an attempt is made to respond to the wish of the Secretary of State for Energy to have informed debate. To set the scene, the importance of energy availability with regard to the strength of the national economy is stressed and the reasons for an increasing energy demand are put forward. Examination of alternative sources of energy shows that none is definitely capable of filling the foreseen energy gap. This implies an integrated thermal/breeder reactor programme as the way to close the anticipated gap. The problems of disposal of radioactive waste and the safeguards in the handling of plutonium are outlined. Longer-term benefits, including the consumption of plutonium and naturally occurring radioactive materials, are examined. (author)

  1. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

Traditionally, law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging and 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime-scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would help law enforcement agents quickly document and accurately record a crime scene.

  2. Two Distinct Scene-Processing Networks Connecting Vision and Memory.

    Science.gov (United States)

    Baldassano, Christopher; Esteva, Andre; Fei-Fei, Li; Beck, Diane M

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.

  3. Sensory substitution: the spatial updating of auditory scenes ‘mimics’ the spatial updating of visual scenes

    Directory of Open Access Journals (Sweden)

    Achille ePasqualotto

    2016-04-01

Full Text Available Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or ‘soundscapes’. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localising sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgement of relative direction (JRD) task was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices.

  4. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Science.gov (United States)

    Levy-Gigi, Einat; Kéri, Szabolcs

    2012-01-01

    Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  6. Crime Scenes as Augmented Reality

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus... physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual... to understand the concept of augmented reality. The crime scene is an encoded place due to certain actions and events which have taken place and which have left various traces which in turn may be read and interpreted: blood, nails and hair are all (DNA) codes to be cracked, as are traces of gunpowder and shot holes...

  7. Review network for scene text recognition

    Science.gov (United States)

    Li, Shuohao; Han, Anqi; Chen, Xu; Yin, Xiaoqing; Zhang, Jun

    2017-09-01

    Recognizing text in images captured in the wild is a fundamental preprocessing task for many computer vision and machine learning applications and has gained significant attention in recent years. This paper proposes an end-to-end trainable deep review neural network for scene text recognition, which is a combination of feature extraction, feature reviewing, feature attention, and sequence recognition. Our model can generate the predicted text without any segmentation or grouping algorithm. Because the attention model in the feature attention stage lacks global modeling ability, a review network is applied to extract the global context of sequence data in the feature reviewing stage. We perform rigorous experiments across a number of standard benchmarks, including IIIT5K, SVT, ICDAR03, and ICDAR13 datasets. Experimental results show that our model is comparable to or outperforms state-of-the-art techniques.
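The feature-attention stage builds on the standard additive-attention mechanism: score feature columns against a query state, softmax the scores, and take the weighted sum as the context vector. The numpy sketch below is this generic mechanism only; the dimensions and weights are random placeholders, not the paper's trained model.

```python
import numpy as np

def additive_attention(features, query, W_f, W_q, v):
    """Additive (Bahdanau-style) attention over feature columns.

    features: (d, T) feature map columns; query: (q,) decoder state.
    Returns the context vector (d,) and the attention weights (T,)."""
    scores = v @ np.tanh(W_f @ features + (W_q @ query)[:, None])
    weights = np.exp(scores - scores.max())  # softmax over T positions
    weights /= weights.sum()
    return features @ weights, weights

rng = np.random.default_rng(0)
d, q, T, h = 6, 4, 5, 8  # feature dim, query dim, positions, hidden dim
features = rng.normal(size=(d, T))
context, weights = additive_attention(
    features, rng.normal(size=q),
    rng.normal(size=(h, d)), rng.normal(size=(h, q)), rng.normal(size=h))
```

In a full recognizer, the context vector at each decoding step feeds the sequence model that emits one character of the predicted text.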

  8. Relationship between Childhood Meal Scenes at Home Remembered by University Students and their Current Personality

    OpenAIRE

    恩村, 咲希; Onmura, Saki

    2013-01-01

This study examines the relationship between childhood meal scenes at home that are remembered by university students and their current personality. The meal scenes are analyzed in terms of companions, conversation content, conversation frequency, atmosphere, and consideration of meals. The scale of the conversation content in childhood meal scenes was prepared on the basis of the results of a preliminary survey. The results showed a relationship between personality traits and c...

  9. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms), by contrast, continued toward targets apparently selected before the transition, consistent with parallel programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  10. Multiple synergistic effects of emotion and memory on proactive processes leading to scene recognition.

    Science.gov (United States)

    Schettino, Antonio; Loeys, Tom; Pourtois, Gilles

    2013-11-01

    Visual scene recognition is a proactive process through which contextual cues and top-down expectations facilitate the extraction of invariant features. Whether the emotional content of the scenes exerts a reliable influence on these processes or not, however, remains an open question. Here, topographic ERP mapping analysis and a distributed source localization method were used to characterize the electrophysiological correlates of proactive processes leading to scene recognition, as well as the potential modulation of these processes by memory and emotion. On each trial, the content of a complex neutral or emotional scene was progressively revealed, and participants were asked to decide whether this scene had previously been encountered or not (delayed match-to-sample task). Behavioral results showed earlier recognition for old compared to new scenes, as well as delayed recognition for emotional vs. neutral scenes. Electrophysiological results revealed that, ~400 ms following stimulus onset, activity in ventral object-selective regions increased linearly as a function of accumulation of perceptual evidence prior to recognition of old scenes. The emotional content of the scenes had an early influence in these areas. By comparison, at the same latency, the processing of new scenes was mostly achieved by dorsal and medial frontal brain areas, including the anterior cingulate cortex and the insula. In the latter region, emotion biased recognition at later stages, likely corresponding to decision making processes. These findings suggest that emotion can operate at distinct and multiple levels during proactive processes leading to scene recognition, depending on the extent of prior encounter with these scenes. Copyright © 2013 Elsevier Inc. All rights reserved.

  11. Pooling Objects for Recognizing Scenes without Examples

    NARCIS (Netherlands)

    Kordumova, S.; Mensink, T.; Snoek, C.G.M.

    2016-01-01

    In this paper we aim to recognize scenes in images without using any scene images as training data. Different from attribute based approaches, we do not carefully select the training classes to match the unseen scene classes. Instead, we propose a pooling over ten thousand of off-the-shelf object

  12. Literacy in the contemporary scene

    OpenAIRE

    Angela B. Kleiman

    2014-01-01

    In this paper I examine the relationship between literacy and contemporaneity. I take as a point of departure for my discussion school literacy and its links with literacies in other institutions of the contemporary scene, in order to determine the relation between contemporary ends of reading and writing (in other words, the meaning of being literate in contemporary society) and the practices and activities effectively realized at school in order to reach those objectives. Using various exam...

  13. Towards Unsupervised Familiar Scene Recognition in Egocentric Videos

    NARCIS (Netherlands)

    Talavera Martínez, Estefanía

    2015-01-01

    Nowadays, there is an upsurge of interest in using lifelogging devices. Such devices generate huge amounts of image data; consequently, the need for automatic methods for analyzing and summarizing these data is drastically increasing. We present a new method for familiar scene recognition in

  14. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  15. Number 13 / Part I. Music. 3. Mad Scenes: A Warning against Overwhelming Passions

    Directory of Open Access Journals (Sweden)

    Marisi Rossella

    2017-03-01

    This study focuses on mad scenes in poetry and musical theatre, stressing that, according to Aristotle's theory of catharsis and the Affektenlehre, they played a pedagogical role for the audience. Some mad scenes by J.S. Bach, Handel and Mozart are briefly analyzed, highlighting their most relevant textual and musical characteristics.

  16. SAR Raw Data Generation for Complex Airport Scenes

    Directory of Open Access Journals (Sweden)

    Jia Li

    2014-10-01

    The method of generating SAR raw data for complex airport scenes is studied in this paper. A formulation of the SAR raw signal model of airport scenes is given. By generating the echoes from the background, aircraft, and buildings separately, the SAR raw data for a unified SAR imaging geometry is obtained from their vector addition. The multipath scattering and the shadowing between the background and the different ground covers of standing airplanes and buildings are analyzed. Based on these scattering characteristics, coupling scattering models and SAR raw data models for the different targets are given. A procedure is given to generate the SAR raw data of airport scenes. The SAR images formed from the simulated raw data demonstrate the validity of the proposed method.
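
    The echo-generation step can be sketched for the simplest possible case, a single point target on one range line: the baseband raw return is a delayed linear-FM chirp with delay tau = 2R/c. This toy model (unit amplitude, no antenna pattern, no multipath, all parameter values assumed) illustrates only the raw-signal model, not the full airport-scene simulator.

```python
import math

C = 3e8  # speed of light, m/s

def point_target_echo(t_samples, slant_range, chirp_rate, pulse_width):
    # Baseband raw echo of a single point target for one range line:
    # a delayed linear-FM chirp, delay tau = 2R/c, unit amplitude.
    tau = 2.0 * slant_range / C
    echo = []
    for t in t_samples:
        u = t - tau
        if 0.0 <= u <= pulse_width:
            phase = math.pi * chirp_rate * u * u
            echo.append(complex(math.cos(phase), math.sin(phase)))
        else:
            echo.append(0j)
    return echo
```

    A full simulator would sum such echoes over all scatterers (background, aircraft, buildings), which is the "vector addition" referred to above.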

  17. Learning physical parameters from dynamic scenes.

    Science.gov (United States)

    Ullman, Tomer D; Stuhlmüller, Andreas; Goodman, Noah D; Tenenbaum, Joshua B

    2018-04-10

    Humans acquire their most basic physical concepts early in development, and continue to enrich and expand their intuitive physics throughout life as they are exposed to more and varied dynamical environments. We introduce a hierarchical Bayesian framework to explain how people can learn physical parameters at multiple levels. In contrast to previous Bayesian models of theory acquisition (Tenenbaum, Kemp, Griffiths, & Goodman, 2011), we work with more expressive probabilistic program representations suitable for learning the forces and properties that govern how objects interact in dynamic scenes unfolding over time. We compare our model to human learners on a challenging task of estimating multiple physical parameters in novel microworlds given short movies. This task requires people to reason simultaneously about multiple interacting physical laws and properties. People are generally able to learn in this setting and are consistent in their judgments. Yet they also make systematic errors indicative of the approximations people might make in solving this computationally demanding problem with limited computational resources. We propose two approximations that complement the top-down Bayesian approach. One approximation model relies on a more bottom-up feature-based inference scheme. The second approximation combines the strengths of the bottom-up and top-down approaches, by taking the feature-based inference as its point of departure for a search in physical-parameter space. Copyright © 2017 Elsevier Inc. All rights reserved.
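
    The core idea of inferring physical parameters from short dynamic observations can be illustrated with a far simpler grid-based Bayesian sketch than the probabilistic programs used in the paper: assume a known dynamics law d = ½gt², Gaussian observation noise, and a flat prior, then score each candidate parameter value by its likelihood. All numbers here are illustrative assumptions.

```python
import math

def grid_posterior(observations, times, g_grid, sigma):
    # Posterior over a single physical parameter g on a grid, assuming
    # d = 0.5 * g * t^2 with Gaussian observation noise and a flat prior.
    log_post = []
    for g in g_grid:
        ll = 0.0
        for d, t in zip(observations, times):
            pred = 0.5 * g * t * t
            ll += -((d - pred) ** 2) / (2 * sigma * sigma)
        log_post.append(ll)
    m = max(log_post)            # subtract max for numerical stability
    w = [math.exp(lp - m) for lp in log_post]
    s = sum(w)
    return [x / s for x in w]
```

    The hierarchical model in the paper generalises this to multiple interacting parameters and richer dynamics, but the likelihood-weighting principle is the same.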

  18. Successful Scene Encoding in Presymptomatic Early-Onset Alzheimer's Disease.

    Science.gov (United States)

    Quiroz, Yakeel T; Willment, Kim Celone; Castrillon, Gabriel; Muniz, Martha; Lopera, Francisco; Budson, Andrew; Stern, Chantal E

    2015-01-01

    Brain regions critical to episodic memory are altered during the preclinical stages of Alzheimer's disease (AD). However, reliable means of identifying cognitively-normal individuals at higher risk to develop AD have not been established. To examine whether functional MRI can detect early functional changes associated with scene encoding in a group of presymptomatic presenilin-1 (PSEN1) E280A mutation carriers. Participants were 39 young, cognitively-normal individuals from an autosomal dominant early-onset AD kindred, located in Antioquia, Colombia. Participants performed a functional MRI scene encoding task and a post-scan subsequent memory test. PSEN1 mutation carriers exhibited hyperactivation within medial temporal lobe regions (hippocampus, parahippocampal formation) during successful scene encoding compared to age-matched non-carriers. Hyperactivation in medial temporal lobe regions during scene encoding is seen in individuals genetically-determined to develop AD years before their clinical onset. Our findings will guide future research with the ultimate goal of using functional neuroimaging in the early detection of preclinical AD.

  19. Scene Segmentation with DAG-Recurrent Neural Networks.

    Science.gov (United States)

    Shuai, Bing; Zuo, Zhen; Wang, Bing; Wang, Gang

    2017-06-06

    In this paper, we address the challenging task of scene segmentation. In order to capture the rich contextual dependencies over image regions, we propose Directed Acyclic Graph - Recurrent Neural Networks (DAG-RNN) to perform context aggregation over locally connected feature maps. More specifically, DAG-RNN is placed on top of a pre-trained CNN (feature extractor) to embed context into local features so that their representative capability can be enhanced. In comparison with plain CNN (as in Fully Convolutional Networks - FCN), DAG-RNN is empirically found to be significantly more effective at aggregating context. Therefore, DAG-RNN demonstrates noticeable performance superiority over FCNs on scene segmentation. Moreover, DAG-RNN has dramatically fewer parameters and demands fewer computation operations, which makes it more suitable for deployment on resource-constrained embedded devices. Meanwhile, the class occurrence frequencies are extremely imbalanced in scene segmentation, so we propose a novel class-weighted loss to train the segmentation network. The loss distributes reasonably higher attention weights to infrequent classes during network training, which is essential to boost their parsing performance. We evaluate our segmentation network on three challenging public scene segmentation benchmarks: Sift Flow, Pascal Context and COCO Stuff. On top of them, we achieve very impressive segmentation performance.
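
    The class-weighted loss idea (higher weights for infrequent classes) can be sketched with inverse-frequency weights feeding a weighted cross-entropy. This is a generic formulation, not necessarily the exact weighting scheme used by the authors.

```python
import math

def class_weights(labels, n_classes):
    # Inverse-frequency weights, normalised so that they average to 1
    # over the dataset when every class is present.
    counts = [0] * n_classes
    for y in labels:
        counts[y] += 1
    return [len(labels) / (n_classes * c) if c else 0.0 for c in counts]

def weighted_cross_entropy(probs, labels, weights):
    # Mean over the batch of w[y] * -log p(y): infrequent classes
    # contribute proportionally larger gradients during training.
    total = sum(weights[y] * -math.log(p[y]) for p, y in zip(probs, labels))
    return total / len(labels)
```

    With labels [0, 0, 0, 1] the rare class 1 receives weight 2.0 against 2/3 for class 0, so errors on the rare class cost three times as much.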

  20. Target Tracking Based Scene Analysis

    Science.gov (United States)

    1984-08-01

    NATO Advanced Study Institute, Braunlage/Harz, FRG, June 21 - July 2, 1982, Springer, Berlin, 1983, pp. 493-501. [4] B. Bhanu, "Recognition of...Braunlage/Harz, FRG, June 21 - July 2, 1982, Springer, Berlin, 1983, pp. 10.1-124. [8] R.B. Cate, T.B. Dennis, J.T. Mallin, K.S. Nedelman, NEIL Trenchard, and..."Image Sequence Processing and Dynamic Scene Analysis", Proceedings of NATO Advanced Study Institute, Braunlage/Harz, FRG, June 21 - July 2, 1982

  1. Associative Processing Is Inherent in Scene Perception

    Science.gov (United States)

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes, providing a better understanding of the functional

  2. Line grouping using perceptual saliency and structure prediction for car detection in traffic scenes

    Science.gov (United States)

    Denasi, Sandra; Quaglia, Giorgio

    1993-08-01

    Autonomous and guide-assisted vehicles make heavy use of computer vision techniques to perceive the environment in which they move. In this context, the European PROMETHEUS program is carrying out activities to develop autonomous vehicle monitoring that helps drivers achieve safer driving. Car detection is one of the topics faced by the program. Our contribution develops this task in two stages: the localization of areas of interest and the formulation of object hypotheses. In particular, the present paper proposes a new approach that builds structural descriptions of objects from edge segmentations by using geometrical organization. This approach has been applied to the detection of cars in traffic scenes. We have analyzed images taken from a moving vehicle in order to formulate obstacle hypotheses: preliminary results confirm the efficiency of the method.

  3. Template construction grammar: from visual scene description to language comprehension and agrammatism.

    Science.gov (United States)

    Barrès, Victor; Lee, Jinyong

    2014-01-01

    How does the language system coordinate with our visual system to yield flexible integration of linguistic, perceptual, and world-knowledge information when we communicate about the world we perceive? Schema theory is a computational framework that allows the simulation of perceptuo-motor coordination programs on the basis of known brain operating principles such as cooperative computation and distributed processing. We first present its application to a model of language production, SemRep/TCG, which combines a semantic representation of visual scenes (SemRep) with Template Construction Grammar (TCG) as a means to generate verbal descriptions of a scene from its associated SemRep graph. SemRep/TCG combines the neurocomputational framework of schema theory with the representational format of construction grammar in a model linking eye-tracking data to visual scene descriptions. We then offer a conceptual extension of TCG to include language comprehension and address data on the role of both world knowledge and grammatical semantics in the comprehension performances of agrammatic aphasic patients. This extension introduces a distinction between heavy and light semantics. The TCG model of language comprehension offers a computational framework to quantitatively analyze the distributed dynamics of language processes, focusing on the interactions between grammatical, world knowledge, and visual information. In particular, it reveals interesting implications for the understanding of the various patterns of comprehension performances of agrammatic aphasics measured using sentence-picture matching tasks. This new step in the life cycle of the model serves as a basis for exploring the specific challenges that neurolinguistic computational modeling poses to the neuroinformatics community.

  4. Image enhancement for astronomical scenes

    Science.gov (United States)

    Lucas, Jacob; Calef, Brandoch; Knox, Keith

    2013-09-01

    Telescope images of astronomical objects and man-made satellites are frequently characterized by high dynamic range and low SNR. We consider the problem of how to enhance these images, with the aim of making them visually useful rather than radiometrically accurate. Standard contrast and histogram adjustment tends to strongly amplify noise in dark regions of the image. Sophisticated techniques have been developed to address this problem in the context of natural scenes. However, these techniques often misbehave when confronted with low-SNR scenes that are also mostly empty space. We compare two classes of algorithms: contrast-limited adaptive histogram equalization, which achieves spatial localization via a tiling of the image, and gradient-domain techniques, which perform localized contrast adjustment by non-linearly remapping the gradient of the image in a content-dependent manner. We extend these to include a priori knowledge of SNR and the processing (e.g. deconvolution) that was applied in the preparation of the image. The methods will be illustrated with images of satellites from a ground-based telescope.
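
    The contrast-limiting idea can be sketched in its global (untiled) form: clip each histogram bin at a limit, redistribute the clipped excess uniformly, and equalize through the resulting CDF. Real CLAHE additionally tiles the image and interpolates between tiles; this simplified global version, with an assumed clip fraction, only illustrates how clipping tames noise amplification in dark, empty regions.

```python
def clipped_equalize(pixels, levels=256, clip=0.01):
    # Global contrast-limited histogram equalization: clip each histogram
    # bin at clip * len(pixels), redistribute the excess uniformly over
    # all bins, then map intensities through the clipped CDF.
    n = len(pixels)
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    limit = max(1, int(clip * n))
    excess = 0
    for i in range(levels):
        if hist[i] > limit:
            excess += hist[i] - limit
            hist[i] = limit
    hist = [h + excess / levels for h in hist]
    cdf, total = [], 0.0
    for h in hist:
        total += h
        cdf.append(total)
    return [round((levels - 1) * cdf[p] / n) for p in pixels]
```

    With plain equalization, a nearly empty dark frame would be stretched across the full range; the clip limit bounds how steep the mapping can become.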

  5. Asymmetries in the direction of saccades during perception of scenes and fractals: effects of image type and image features.

    Science.gov (United States)

    Foulsham, Tom; Kingstone, Alan

    2010-04-07

    The direction in which people tend to move their eyes when inspecting images can reveal the different influences on eye guidance in scene perception, and their time course. We investigated biases in saccade direction during a memory-encoding task with natural scenes and computer-generated fractals. Images were rotated to disentangle egocentric and image-based guidance. Saccades in fractals were more likely to be horizontal, regardless of orientation. In scenes, the first saccade often moved down and subsequent eye movements were predominantly vertical, relative to the scene. These biases were modulated by the distribution of visual features (saliency and clutter) in the scene. The results suggest that image orientation, visual features and the scene frame-of-reference have a rapid effect on eye guidance. Copyright 2010 Elsevier Ltd. All rights reserved.
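
    The directional-bias measurement can be sketched as a classification of saccade vectors into horizontal and vertical bins by angle. The 45-degree bin boundaries and the sample vectors are illustrative choices, not necessarily the authors' exact analysis.

```python
import math

def direction_bias(saccades):
    # Classify each saccade vector (dx, dy) as horizontal or vertical by
    # its absolute angle from the x-axis and return the two proportions.
    horiz = vert = 0
    for dx, dy in saccades:
        angle = abs(math.degrees(math.atan2(dy, dx)))  # 0..180 degrees
        if angle <= 45 or angle >= 135:
            horiz += 1
        else:
            vert += 1
    n = len(saccades)
    return horiz / n, vert / n
```

    Rotating the stimulus image while keeping this egocentric angle measure fixed is what lets the paradigm disentangle image-based from observer-based biases.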

  6. Scene analysis in the natural environment

    DEFF Research Database (Denmark)

    Lewicki, Michael S; Olshausen, Bruno A; Surlykke, Annemarie

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches … all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve …

  7. Literacy in the contemporary scene

    Directory of Open Access Journals (Sweden)

    Angela B. Kleiman

    2014-11-01

    In this paper I examine the relationship between literacy and contemporaneity. I take as a point of departure for my discussion school literacy and its links with literacies in other institutions of the contemporary scene, in order to determine the relation between contemporary ends of reading and writing (in other words, the meaning of being literate in contemporary society) and the practices and activities effectively realized at school in order to reach those objectives. Using various examples from teaching and learning situations, I discuss digital literacy practices and multimodal texts and multiliteracies from both printed and digital cultures. Throughout, I keep as a background for the discussion the functions and objectives of school literacy and the professional training of teachers who would like to be effective literacy agents in the contemporary world.

  8. On the relationship between children's perspective taking in complex scenes and their spatial drawing ability.

    Science.gov (United States)

    Ebersbach, Mirjam; Stiehler, Sophie; Asmus, Paula

    2011-09-01

    Depicting space and volume in drawings is challenging for young children in particular. It has been assumed that several cognitive skills may contribute to children's drawing. In the present study, we investigated the relationship between perspective-taking skills in complex scenes and the spatial characteristics in drawings of 5- to 9-year-olds (N = 121). Perspective taking was assessed by two tasks: (a) a visual task similar to the three-mountains task, in which the children had to select a three-dimensional model that showed the view of a scene from a particular perspective, and (b) a spatial construction task, in which children had to plastically reconstruct a three-dimensional scene as it would appear from a new point of view. In the drawing task, the children were asked to depict a three-dimensional scene exactly as it looked from their own point of view. Several spatial features in the drawings were coded. The results suggested that children's spatial drawing and their perspective-taking skills were related. The axes system and the spatial relations between objects in the drawings in particular were predicted, beyond age, by certain measures of the two perspective-taking tasks. The results are discussed in the light of particular demands that might underlie both perspective taking and spatial drawing. ©2010 The British Psychological Society.

  9. Temporal evolution of the central fixation bias in scene viewing.

    Science.gov (United States)

    Rothkegel, Lars O M; Trukenbrod, Hans A; Schütt, Heiko H; Wichmann, Felix A; Engbert, Ralf

    2017-11-01

    When watching the image of a natural scene on a computer screen, observers initially move their eyes toward the center of the image, a reliable experimental finding termed the central fixation bias. This systematic tendency in eye guidance likely masks attentional selection driven by image properties and top-down cognitive processes. Here, we show that the central fixation bias can be reduced by delaying the initial saccade relative to image onset. In four scene-viewing experiments we manipulated observers' initial gaze position and delayed their first saccade by a specific time interval relative to the onset of an image. We analyzed the distance to image center over time and show that the central fixation bias of initial fixations was significantly reduced after delayed saccade onsets. We additionally show that selection of the initial saccade target strongly depended on the first saccade latency. A previously published model of saccade generation was extended with a central activation map on the initial fixation whose influence declined with increasing saccade latency. This extension was sufficient to replicate the central fixation bias from our experiments. Our results suggest that the central fixation bias is generated by default activation as a response to the sudden image onset and that this default activation pattern decreases over time. Thus, it may often be preferable to use a modified version of the scene viewing paradigm that decouples image onset from the start signal for scene exploration to explicitly reduce the central fixation bias.
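
    The distance-to-center analysis can be sketched directly: for each fixation index, average the Euclidean distance from fixation position to the image center across trials. The coordinates and center used below are illustrative assumptions.

```python
def central_bias_curve(trials, center):
    # Mean Euclidean distance from the n-th fixation to the image center,
    # averaged over all trials that contain at least n + 1 fixations.
    max_len = max(len(t) for t in trials)
    curve = []
    for i in range(max_len):
        ds = [((x - center[0]) ** 2 + (y - center[1]) ** 2) ** 0.5
              for t in trials if i < len(t)
              for (x, y) in [t[i]]]
        curve.append(sum(ds) / len(ds))
    return curve
```

    A flat curve starting far from zero would indicate a weak central bias; the delayed-saccade manipulation in the paper is designed to raise the early part of this curve.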

  10. Medial Temporal Lobe Contributions to Episodic Future Thinking: Scene Construction or Future Projection?

    Science.gov (United States)

    Palombo, D J; Hayes, S M; Peterson, K M; Keane, M M; Verfaellie, M

    2018-02-01

    Previous research has shown that the medial temporal lobes (MTL) are more strongly engaged when individuals think about the future than about the present, leading to the suggestion that future projection drives MTL engagement. However, future thinking tasks often involve scene processing, leaving open the alternative possibility that scene-construction demands, rather than future projection, are responsible for the MTL differences observed in prior work. This study explores this alternative account. Using functional magnetic resonance imaging, we directly contrasted MTL activity in 1) high scene-construction and low scene-construction imagination conditions matched in future thinking demands and 2) future-oriented and present-oriented imagination conditions matched in scene-construction demands. Consistent with the alternative account, the MTL was more active for the high versus low scene-construction condition. By contrast, MTL differences were not observed when comparing the future versus present conditions. Moreover, the magnitude of MTL activation was associated with the extent to which participants imagined a scene but was not associated with the extent to which participants thought about the future. These findings help disambiguate which component processes of imagination specifically involve the MTL. Published by Oxford University Press 2016.

  11. Do deep convolutional neural networks really need to be deep when applied for remote scene classification?

    Science.gov (United States)

    Luo, Chang; Wang, Jie; Feng, Gang; Xu, Suhui; Wang, Shiqiang

    2017-10-01

    Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representation in various computer vision tasks. However, for remote scene classification, there are not sufficient images to train a very deep CNN from scratch. From two viewpoints of generalization power, we propose two promising kinds of deep CNNs for remote scenes and try to find whether deep CNNs need to be deep for remote scene classification. First, we transfer successful pretrained deep CNNs to remote scenes based on the theory that depth of CNNs brings the generalization power by learning available hypotheses for finite data samples. Second, according to the opposite viewpoint that generalization power of deep CNNs comes from massive memorization and shallow CNNs with enough neural nodes have perfect finite sample expressivity, we design a lightweight deep CNN (LDCNN) for remote scene classification. With five well-known pretrained deep CNNs, experimental results on two independent remote-sensing datasets demonstrate that transferred deep CNNs can achieve state-of-the-art results in an unsupervised setting. However, because of its shallow architecture, LDCNN cannot obtain satisfactory performance, regardless of whether in an unsupervised, semisupervised, or supervised setting. CNNs really need depth to obtain general features for remote scenes. This paper also provides a baseline for applying deep CNNs to other remote sensing tasks.

  12. Application of multi-resolution 3D techniques in crime scene documentation with bloodstain pattern analysis.

    Science.gov (United States)

    Hołowko, Elwira; Januszkiewicz, Kamil; Bolewicki, Paweł; Sitnik, Robert; Michoński, Jakub

    2016-10-01

    In forensic documentation with bloodstain pattern analysis (BPA) it is highly desirable to obtain, non-invasively, overall documentation of a crime scene, but also to register single evidence objects, such as bloodstains, in high resolution. In this study, we propose a hierarchical 3D scanning platform designed according to the top-down approach known from traditional forensic photography. The overall 3D model of a scene is obtained via integration of laser scans registered from different positions. Parts of a scene of particular interest are documented using a midrange scanner, and the smallest details are added in the highest resolution as close-up scans. The scanning devices are controlled using developed software equipped with advanced algorithms for point cloud processing. To verify the feasibility and effectiveness of multi-resolution 3D scanning in crime scene documentation, our platform was applied to document a murder scene simulated by the BPA experts from the Central Forensic Laboratory of the Police R&D, Warsaw, Poland. Applying the 3D scanning platform proved beneficial in the documentation of a crime scene combined with BPA. The multi-resolution 3D model enables virtual exploration of a scene in a three-dimensional environment and distance measurement, and gives a more realistic preservation of the evidence together with its surroundings. Moreover, high-resolution close-up scans aligned in a 3D model can be used to analyze bloodstains revealed at the crime scene. The results of BPA, such as trajectories and the area of origin, are visualized and analyzed in an accurate model of the scene. At this stage, a simplified approach considering the trajectory of a blood drop as a straight line is applied. Although the 3D scanning platform offers a new quality of crime scene documentation with BPA, some of the limitations of the technique are also mentioned. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
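
    The simplified straight-line trajectory model mentioned above can be sketched with the standard BPA relations: the impact angle follows from the ellipse fit of a stain, alpha = asin(width/length), and under the straight-line assumption the height of origin above the convergence point is z = d * tan(alpha). The stain measurements below are illustrative assumptions.

```python
import math

def impact_angle(width, length):
    # Classic BPA relation from the ellipse fit of an elliptical stain:
    # alpha = asin(width / length), returned in degrees.
    return math.degrees(math.asin(width / length))

def origin_height(width, length, distance_to_convergence):
    # Straight-line trajectory assumption: the drop travelled along a
    # straight path, so the origin lies d * tan(alpha) above the
    # convergence point on the target surface.
    alpha = math.radians(impact_angle(width, length))
    return distance_to_convergence * math.tan(alpha)
```

    In practice gravity and drag curve the real trajectory, so this straight-line estimate gives an upper bound on the height of origin, which is why the paper flags it as a simplification.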

  13. On support relations and semantic scene graphs

    Science.gov (United States)

    Yang, Michael Ying; Liao, Wentong; Ackermann, Hanno; Rosenhahn, Bodo

    2017-09-01

    Scene understanding is one of the essential and challenging topics in computer vision and photogrammetry. Scene graphs provide valuable information for such scene understanding. This paper proposes a novel framework for the automatic generation of semantic scene graphs which interpret indoor environments. First, a Convolutional Neural Network is used to detect objects of interest in the given image. Then, the precise support relations between objects are inferred by taking into account two important auxiliary sources of information in indoor environments: physical stability and prior support knowledge between object categories. Finally, a semantic scene graph describing the contextual relations within a cluttered indoor scene is constructed. In contrast to previous methods for extracting support relations, our approach provides more accurate results. Furthermore, we do not use pixel-wise segmentation to obtain objects, which is computationally costly. We also propose different methods to evaluate the generated scene graphs, something that has been lacking in this community. Our experiments are carried out on the NYUv2 dataset. The experimental results demonstrate that our approach outperforms the state-of-the-art methods in inferring support relations. The estimated scene graphs agree closely with the ground truth.
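
    The support-relation inference can be sketched with a purely geometric stability heuristic, a deliberately simplified stand-in for the paper's combination of physical stability and learned category priors: an object is supported by any object whose top face touches its bottom face with horizontal overlap, and otherwise by the floor. Object names and boxes below are hypothetical.

```python
def overlaps(a, b):
    # True if the horizontal intervals a = (x0, x1) and b overlap.
    return min(a[1], b[1]) - max(a[0], b[0]) > 0

def infer_support(objects, tol=0.01):
    # objects: {name: (x_min, x_max, z_bottom, z_top)}. An object is
    # supported by an object whose top face touches its bottom face
    # (within tol) with horizontal overlap, otherwise by the floor.
    relations = {}
    for name, (x0, x1, zb, zt) in objects.items():
        support = "floor"
        for other, (ox0, ox1, ozb, ozt) in objects.items():
            if other != name and overlaps((x0, x1), (ox0, ox1)) \
                    and abs(ozt - zb) <= tol:
                support = other
        relations[name] = support
    return relations
```

    The resulting name-to-supporter mapping is exactly the edge set of a minimal scene graph; category priors would then resolve ambiguous cases this geometry-only rule cannot.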

  14. Auditory and visual scene analysis: an overview.

    Science.gov (United States)

    Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J

    2017-02-19

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in the brain. Recent advances from both auditory and visual research suggest that the brain does not simply process the incoming scene properties. Rather, top-down processes such as attention, expectations and prior knowledge facilitate scene perception. Thus, scene analysis is linked not only with the extraction of stimulus features and formation and selection of perceptual objects, but also with selective attention, perceptual binding and awareness. This special issue covers novel advances in scene-analysis research obtained using a combination of psychophysics, computational modelling, neuroimaging and neurophysiology, and presents new empirical and theoretical approaches. For integrative understanding of scene analysis beyond and across sensory modalities, we provide a collection of 15 articles that enable comparison and integration of recent findings in auditory and visual scene analysis. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  15. Real-time scene generator

    Science.gov (United States)

    Lord, Eric; Shand, David J.; Cantle, Allan J.

    1996-05-01

    This paper describes the techniques which have been developed for an infra-red (IR) target, countermeasure and background image generation system working in real time for HWIL and trial-proving applications. Operation is in the 3 to 5 and 8 to 14 micron bands. The system may be used to drive a scene projector (otherwise known as a thermal picture synthesizer) or for direct injection into equipment under test. The provision of realistic IR target and countermeasure trajectories and signatures, within representative backgrounds, enables the full performance envelope of a missile system to be evaluated. It also enables an operational weapon system to be proven in a trials environment without compromising safety. The most significant technique developed has been that of line-by-line synthesis. This minimizes the processing delays to the equivalent of 1.5 frames from input of target and sightline positions to the completion of an output image scan. Using this technique a scene generator has been produced for full closed-loop HWIL performance analysis in the development of an air-to-air missile system. Performance of the synthesis system is as follows: 256 * 256 pixels per frame; 350 target polygons per frame; 100 Hz frame rate; and Gouraud shading, simple reflections, variable-geometry targets and atmospheric scaling. A system using a similar technique has also been used for direct insertion into the video path of a ground-to-air weapon system in live firing trials. This has provided realistic targets without degrading the closed-loop performance. Delay of the modified video signal has been kept to less than 5 lines. The technique has been developed using a combination of 4 high-speed Intel i860 RISC processors in parallel with 4000-series XILINX field programmable gate arrays (FPGAs). Start and end conditions for each line of target pixels are prepared and ordered in the i860. The merging with background pixels and output shading and scaling is then carried out in the FPGAs.

  16. Probing the natural scene by echolocation in bats

    Directory of Open Access Journals (Sweden)

    Cynthia F Moss

    2010-08-01

    Full Text Available Bats echolocating in the natural environment face the formidable task of sorting signals from multiple auditory objects, echoes from obstacles, prey and the calls of conspecifics. Successful orientation in a complex environment depends on auditory information processing, along with adaptive vocal-motor behaviors and flight path control, which draw upon 3-D spatial perception, attention and memory. This article reviews field and laboratory studies that document adaptive sonar behaviors of echolocating bats, and point to the fundamental signal parameters they use to track and sort auditory objects in a dynamic environment. We suggest that adaptive sonar behavior provides a window to bats’ perception of complex auditory scenes.

  17. Probing the natural scene by echolocation in bats

    DEFF Research Database (Denmark)

    Moss, Cynthia F; Surlykke, Annemarie

    2010-01-01

    Bats echolocating in the natural environment face the formidable task of sorting signals from multiple auditory objects, echoes from obstacles, prey, and the calls of conspecifics. Successful orientation in a complex environment depends on auditory information processing, along with adaptive vocal-motor behaviors and flight path control, which draw upon 3-D spatial perception, attention, and memory. This article reviews field and laboratory studies that document adaptive sonar behaviors of echolocating bats, and point to the fundamental signal parameters they use to track and sort auditory objects in a dynamic environment. We suggest that adaptive sonar behavior provides a window to bats' perception of complex auditory scenes.

  18. Scene classification using a hybrid generative/discriminative approach.

    Science.gov (United States)

    Bosch, Anna; Zisserman, Andrew; Muñoz, Xavier

    2008-04-01

    We investigate whether dimensionality reduction using a latent generative model is beneficial for the task of weakly supervised scene classification. In detail, we are given a set of labelled images of scenes (e.g., coast, forest, city, river) and our objective is to classify a new image into one of these categories. Our approach consists of first discovering latent "topics" using probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature, here applied to a bag-of-visual-words representation of each image, and subsequently training a multi-way classifier on the topic distribution vector of each image. We compare this approach to representing each image directly by a bag-of-visual-words vector and training a multi-way classifier on these vectors. To this end we introduce a novel vocabulary using dense colour SIFT descriptors, and then investigate the classification performance under changes in the size of the visual vocabulary, the number of latent topics learnt, and the type of discriminative classifier used (k-nearest neighbour or SVM). We achieve superior classification performance to recent publications that have used a bag-of-visual-words representation, in all cases using the authors' own datasets and testing protocols. We also investigate the gain from adding spatial information. We show applications to image retrieval with relevance feedback and to scene classification in videos.
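    The topic-discovery step can be illustrated with a minimal pLSA fitted by EM to a document-by-visual-word count matrix; the resulting P(z|d) vectors are the reduced features on which a multi-way classifier would then be trained. This is a bare-bones sketch with random initialization and a fixed iteration count, not the authors' implementation.

```python
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """Tiny pLSA via EM on a (documents x visual words) count matrix.

    Returns P(z|d), the per-image topic distributions used as reduced
    features, and P(w|z), the per-topic word distributions.
    """
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words))
    p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics))
    p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: P(z|d,w) proportional to P(w|z) * P(z|d)
        joint = p_z_d[:, :, None] * p_w_z[None, :, :]        # shape (d, z, w)
        post = joint / (joint.sum(1, keepdims=True) + 1e-12)
        # M-step: reweight by observed counts n(d,w)
        weighted = counts[:, None, :] * post                 # n(d,w) P(z|d,w)
        p_w_z = weighted.sum(0)
        p_w_z /= p_w_z.sum(1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(2)
        p_z_d /= p_z_d.sum(1, keepdims=True) + 1e-12
    return p_z_d, p_w_z
```

    On a toy matrix whose first two "images" use only words 0-1 and whose last two use only words 2-3, the two groups collapse onto distinct dominant topics, which is exactly the low-dimensional separation the classifier then exploits.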

  19. Describing visual scenes: towards a neurolinguistics based on construction grammar.

    Science.gov (United States)

    Arbib, Michael A; Lee, Jinyong

    2008-08-15

    The present paper is part of a larger effort to locate the production and perception of language within the broader context of brain mechanisms for action and perception more generally. Here we model function in terms of the competition and cooperation of schemas. We use the task of describing visual scenes to explore the suitability of Construction Grammar as an appropriate framework for a schema-based linguistics. We recall the early VISIONS model of schema-based computer analysis of static visual scenes and then introduce SemRep as a graphical representation of dynamic visual scenes designed to support the generation of varied descriptions of episodes. We report preliminary results on implementing the production of sentences using Template Construction Grammar (TCG), a new form of construction grammar distinguished by its use of SemRep to express semantics. We summarize data on neural correlates relevant to future work on TCG within the context of neurolinguistics, and show how the relation between SemRep and TCG can serve as the basis for modeling language comprehension.

  20. Purification of crime scene DNA extracts using centrifugal filter devices.

    Science.gov (United States)

    Norén, Lina; Hedell, Ronny; Ansell, Ricky; Hedman, Johannes

    2013-04-24

    The success of forensic DNA analysis is limited by the size, quality and purity of biological evidence found at crime scenes. Sample impurities can inhibit PCR, resulting in partial or negative DNA profiles. Various DNA purification methods are applied to remove impurities, for example, employing centrifugal filter devices. However, irrespective of method, DNA purification leads to DNA loss. Here we evaluate the filter devices Amicon Ultra 30 K and Microsep 30 K with respect to recovery rate and general performance for various types of PCR-inhibitory crime scene samples. Recovery rates for DNA purification using Amicon Ultra 30 K and Microsep 30 K were gathered using quantitative PCR. Mock crime scene DNA extracts were analyzed using quantitative PCR and short tandem repeat (STR) profiling to test the general performance and inhibitor-removal properties of the two filter devices. Additionally, the outcome of long-term routine casework DNA analysis applying each of the devices was evaluated. Applying Microsep 30 K, 14 to 32% of the input DNA was recovered, whereas Amicon Ultra 30 K retained 62 to 70% of the DNA. The improved purity following filter purification counteracted some of this DNA loss, leading to slightly increased electropherogram peak heights for blood on denim (Amicon Ultra 30 K and Microsep 30 K) and saliva on envelope (Amicon Ultra 30 K). Comparing Amicon Ultra 30 K and Microsep 30 K for purification of DNA extracts from mock crime scene samples, the former generated significantly higher peak heights for rape case samples (P-values < 0.05). Overall, Amicon Ultra 30 K is preferable for the purification of crime scene samples and for consistency between different PCR-based analysis systems, such as quantification and STR analysis. In order to maximize the possibility of obtaining complete STR DNA profiles and to create an efficient workflow, the level of DNA purification applied should be matched to the inhibitor tolerance of the STR analysis system used.

  1. Evidencing a place for the hippocampus within the core scene processing network.

    Science.gov (United States)

    Hodgetts, C J; Shine, J P; Lawrence, A D; Downing, P E; Graham, K S

    2016-11-01

    Functional neuroimaging studies have identified several "core" brain regions that are preferentially activated by scene stimuli, namely posterior parahippocampal gyrus (PHG), retrosplenial cortex (RSC), and transverse occipital sulcus (TOS). The hippocampus (HC), too, is thought to play a key role in scene processing, although no study has yet investigated scene-sensitivity in the HC relative to these other "core" regions. Here, we characterised the frequency and consistency of individual scene-preferential responses within these regions by analysing a large dataset (n = 51) in which participants performed a one-back working memory task for scenes, objects, and scrambled objects. An unbiased approach was adopted by applying independently-defined anatomical ROIs to individual-level functional data across different voxel-wise thresholds and spatial filters. It was found that the majority of subjects had preferential scene clusters in PHG (max = 100% of participants), RSC (max = 76%), and TOS (max = 94%). A comparable number of individuals also possessed significant scene-related clusters within their individually defined HC ROIs (max = 88%), evidencing a HC contribution to scene processing. While probabilistic overlap maps of individual clusters showed that overlap "peaks" were close to those identified in group-level analyses (particularly for TOS and HC), inter-individual consistency varied across regions and statistical thresholds. The inter-regional and inter-individual variability revealed by these analyses has implications for how scene-sensitive cortex is localised and interrogated in functional neuroimaging studies, particularly in medial temporal lobe regions, such as the HC. Hum Brain Mapp 37:3779-3794, 2016. © 2016 Wiley Periodicals, Inc. © 2016 The Authors Human Brain Mapping Published by Wiley Periodicals, Inc.

  2. Charged particle analyzer PLAZMAG

    International Nuclear Information System (INIS)

    Apathy, Istvan; Endroeczy, Gabor; Szemerey, Istvan; Szendroe, Sandor

    1985-01-01

    The scientific task of the charged particle analyzer PLAZMAG, part of the VEGA space probe, and the physical background of the measurements are described. The sensors of the device face the Sun and comet Halley, measuring the energy and mass spectra of the ion and electron components at energies below 25 keV. The tasks of the individual electronic parts, the design aspects and the modes of operation in different phases of the flight are dealt with. (author)

  3. Joint embeddings of scene graphs and images

    OpenAIRE

    Belilovsky, Eugene; Blaschko, Matthew; Kiros, Jamie Ryan; Urtasun, Raquel; Zemel, Richard

    2017-01-01

    Belilovsky E., Blaschko M., Kiros J.R., Urtasun R., Zemel R., ''Joint embeddings of scene graphs and images'', 5th international conference on learning representations workshop track - ICLR 2017, 5 pp., April 24-26, 2017, Toulon, France.

  4. Scene Integration for Online VR Advertising Clouds

    Directory of Open Access Journals (Sweden)

    Michael Kalochristianakis

    2014-12-01

    Full Text Available This paper presents a scene composition approach that allows the combinational use of standard three-dimensional objects, called models, in order to create X3D scenes. The module is an integral part of a broader design aiming to construct large-scale online advertising infrastructures that rely on virtual reality technologies. The architecture addresses a number of problems regarding remote rendering for low-end devices and, last but not least, the provision of scene composition and integration. Since viewers do not keep information regarding individual input models or scenes, composition requires mechanisms that add state to viewing technologies. As part of this work we extended a well-known, open-source X3D authoring tool.
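    The composition step, referencing standard model files from a combined scene, can be sketched with stock X3D Inline nodes; the file names and transforms below are invented for illustration, and none of the module's actual algorithms are reproduced.

```python
import xml.etree.ElementTree as ET

def compose_scene(models):
    """Build a minimal X3D document that composes external model files
    via Inline nodes, each positioned by its own Transform.

    `models` is a list of (url, (tx, ty, tz)) pairs.
    """
    x3d = ET.Element("X3D", version="3.3", profile="Interchange")
    scene = ET.SubElement(x3d, "Scene")
    for url, (tx, ty, tz) in models:
        t = ET.SubElement(scene, "Transform", translation=f"{tx} {ty} {tz}")
        ET.SubElement(t, "Inline", url=url)   # reference, not a copy
    return ET.tostring(x3d, encoding="unicode")

# Hypothetical advertising scene: a banner model plus a product model.
doc = compose_scene([("banner.x3d", (0, 2, 0)),
                     ("product.x3d", (1, 0, -3))])
```

    Because Inline nodes reference model files rather than embedding them, the composed scene stays small and each model remains independently replaceable, which suits the model-reuse approach the paper describes.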

  5. Gaze Control in Complex Scene Perception

    National Research Council Canada - National Science Library

    Henderson, John

    2004-01-01

    .... The aim of the current project was to investigate the influence of semantic factors on human gaze control during the free viewing of complex, natural scenes, focusing on the extent to which initial...

  6. Influence of 3D effects on 1D aerosol retrievals in synthetic, partially clouded scenes

    NARCIS (Netherlands)

    Stap, F. A.; Hasekamp, O. P.; Emde, C.; Roeckmann, Thomas

    2016-01-01

    An important challenge in aerosol remote sensing is to retrieve aerosol properties in the vicinity of clouds and in cloud contaminated scenes. Satellite based multi-wavelength, multi-angular, photo-polarimetric instruments are particularly suited for this task as they have the ability to separate

  7. Complex Dynamic Scene Perception: Effects of Attentional Set on Perceiving Single and Multiple Event Types

    Science.gov (United States)

    Sanocki, Thomas; Sulman, Noah

    2013-01-01

    Three experiments measured the efficiency of monitoring complex scenes composed of changing objects, or events. All events lasted about 4 s, but in a given block of trials, could be of a single type (single task) or of multiple types (multitask, with a total of four event types). Overall accuracy of detecting target events amid distractors was…

  8. Gay and Lesbian Scene in Metelkova

    OpenAIRE

    Nataša Velikonja

    2013-01-01

    The article deals with the development of the gay and lesbian scene in ACC Metelkova, while specifying the preliminary aspects of establishing and building gay and lesbian activism associated with spatial issues. The struggle for space or occupying public space is vital for the gay and lesbian scene, as it provides not only the necessary socializing opportunities for gays and lesbians, but also does away with the historical hiding of homosexuality in the closet, in seclusion and silence. Beca...

  9. Transient analyzer

    International Nuclear Information System (INIS)

    Muir, M.D.

    1975-01-01

    The design and design philosophy of a high-performance, extremely versatile transient analyzer is described. This sub-system was designed to be controlled through the data acquisition computer system, which allows hands-off operation. Thus it may be placed on the experiment side of the high-voltage safety break between the experimental device and the control room. This analyzer provides control features which are extremely useful for data acquisition from PPPL diagnostics. These include dynamic sample-rate changing, which may be intermixed with multiple post-trigger operations with variable-length blocks using normal, peak-to-peak or integrate modes. Included in the discussion are general remarks on the advantages of adding intelligence to transient analyzers, a detailed description of the characteristics of the PPPL transient analyzer, a description of the hardware, firmware, control language and operation of the PPPL transient analyzer, and general remarks on future trends in this type of instrumentation both at PPPL and in general.

  10. Probing the time course of head-motion cues integration during auditory scene analysis

    Science.gov (United States)

    Kondo, Hirohito M.; Toshima, Iwaki; Pressnitzer, Daniel; Kashino, Makio

    2014-01-01

    The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and rate their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved. PMID:25009456

  11. Probing the time course of head-motion cues integration during auditory scene analysis

    Directory of Open Access Journals (Sweden)

    Hirohito M. Kondo

    2014-06-01

    Full Text Available The perceptual organization of auditory scenes is a hard but important problem to solve for human listeners. It is thus likely that cues from several modalities are pooled for auditory scene analysis, including sensory-motor cues related to the active exploration of the scene. We previously reported a strong effect of head motion on auditory streaming. Streaming refers to an experimental paradigm where listeners hear sequences of pure tones, and report their perception of one or more subjective sources called streams. To disentangle the effects of head motion (changes in acoustic cues at the ear, subjective location cues, and motor cues), we used a robotic telepresence system, Telehead. We found that head motion induced perceptual reorganization even when the acoustic scene had not changed. Here we reanalyzed the same data to probe the time course of sensory-motor integration. We show that motor cues had a different time course compared to acoustic or subjective location cues: motor cues impacted perceptual organization earlier and for a shorter time than other cues, with successive positive and negative contributions to streaming. An additional experiment controlled for the effects of volitional anticipatory components, and found that arm or leg movements did not have any impact on scene analysis. These data provide a first investigation of the time course of the complex integration of sensory-motor cues in an auditory scene analysis task, and they suggest a loose temporal coupling between the different mechanisms involved.

  12. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search.

    Science.gov (United States)

    Draschkow, Dejan; Võ, Melissa L-H

    2017-11-28

    Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1), or repeated search (Exp2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.

  13. [Medical management in the chemical terrorism scene].

    Science.gov (United States)

    Krivoy, Amir; Rotman, Eran; Layish, Ido; Goldberg, Avi; Horvitz, Ariel; Yehezkelli, Yoav

    2005-04-01

    The Tokyo subway sarin attack of March 1995 demonstrated the importance of preparedness for a chemical terrorist attack. Emergency medical teams on the scene are valuable not only for the medical treatment of casualties, but also for recognizing that a toxicant is involved and, later, for identifying the specific toxidrome. The chemical terrorism scene is a contaminated area; therefore, first responders have to be protected from both percutaneous and inhalational exposure to toxic materials. This protection is also needed against secondary evaporation (gas-off) from contaminated casualties, hence the importance of disrobing casualties on the scene as soon as possible. Once toxicological involvement has been recognized, the next crucial decision is whether the clinical toxidrome is one of cholinergic toxicity (e.g., organophosphate or carbamate intoxication), for which automatic injectors are available for treatment on the scene, or any other toxidrome (such as those caused by irritants or vesicants), for which, beside general measures like oxygen delivery and airway support, no specific antidotal treatment is available on the scene. The clinical detection and identification of the chemical toxidrome involved is of utmost importance, since it enables quick and efficient antidotal treatment. The key to the medical management of such events is based on decisions that have to be taken as soon as possible, according to the clinical judgment of the medical teams on the scene.

  14. Thermal resolution specification in infrared scene projectors

    Science.gov (United States)

    LaVeigne, Joe; Franks, Greg; Danielson, Tom

    2015-05-01

    Infrared scene projectors (IRSPs) are a key tool for dynamic testing of infrared (IR) imaging systems. Two important properties of an IRSP system are apparent temperature and thermal resolution. Infrared scene projector technology continues to progress, with several systems capable of producing high apparent temperatures currently available or under development. These systems use different emitter pixel technologies, including resistive arrays, digital micro-mirror devices (DMDs), liquid crystals and LEDs, to produce dynamic infrared scenes. A common theme amongst these systems is the specification of the bit depth of the read-in integrated circuit (RIIC) or projector engine, as opposed to specifying the desired thermal resolution as a function of radiance (or apparent temperature). For IRSPs, producing an accurate simulation of a realistic scene or scenario may require simulating radiance values that range over multiple orders of magnitude. Under these conditions, the necessary resolution or "step size" at low temperature values may be much smaller than what is acceptable at very high temperature values. A single bit-depth value specified at the RIIC, especially when combined with variable transfer functions between commanded input and radiance output, may not offer the best representation of a customer's desired radiance resolution. In this paper, we discuss some of the various factors that affect the thermal resolution of a scene projector system, and propose specification guidelines regarding thermal resolution to help better define the real needs of an IR scene projector system.
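    The paper's point about bit depth versus thermal resolution can be made concrete with a hypothetical linear drive-to-radiance transfer function: a fixed RIIC bit depth fixes the absolute radiance step, so the relative resolution at the bottom of a wide dynamic range is orders of magnitude coarser than at full scale. All numbers below are illustrative.

```python
def radiance_step(l_min, l_max, bits):
    """Smallest commandable radiance increment for a linear
    drive-to-radiance transfer function at the given bit depth."""
    return (l_max - l_min) / (2 ** bits - 1)

# Hypothetical projector: a 16-bit RIIC spanning 0..1000 radiance units.
step = radiance_step(0.0, 1000.0, 16)
rel_at_low = step / 1.0        # relative step for a dim scene at 1 unit
rel_at_high = step / 1000.0    # relative step near full scale
```

    The same 16-bit word that gives roughly 0.0015% steps at full scale gives only about 1.5% steps for a scene at one-thousandth of full scale, which is why specifying resolution as a function of radiance is more informative than quoting bit depth alone.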

  15. PROCRU: A model for analyzing crew procedures in approach to landing

    Science.gov (United States)

    Baron, S.; Muralidharan, R.; Lancraft, R.; Zacharias, G.

    1980-01-01

    A model for analyzing crew procedures in approach to landing is developed. The model employs the information-processing structure used in the optimal control model and in recent models for monitoring and failure detection. Mechanisms are added to this basic structure to model crew decision making in this multi-task environment. Decisions are based on probability assessments and potential mission impact (or gain). Submodels for procedural activities are included. The model distinguishes among external visual, instrument visual, and auditory sources of information. The external visual scene perception models incorporate limitations in obtaining information. The auditory information channel contains a buffer to allow for storage in memory until that information can be processed.

  16. History of reading struggles linked to enhanced learning in low spatial frequency scenes.

    Directory of Open Access Journals (Sweden)

    Matthew H Schneps

    Full Text Available People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits, including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p < .05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.

  17. History of reading struggles linked to enhanced learning in low spatial frequency scenes.

    Science.gov (United States)

    Schneps, Matthew H; Brockmole, James R; Sonnert, Gerhard; Pomplun, Marc

    2012-01-01

    People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p < .05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.

  18. The effects of scene characteristics, resolution, and compression on the ability to recognize objects in video

    Science.gov (United States)

    Dumke, Joel; Ford, Carolyn G.; Stange, Irena W.

    2011-03-01

    Public safety practitioners increasingly use video for object recognition tasks. These end users need guidance regarding how to identify the level of video quality necessary for their application. The quality of video used in public safety applications must be evaluated in terms of its usability for the specific tasks performed by the end user. The Public Safety Communication Research (PSCR) project performed a subjective test, one of the first in a series to explore visual intelligibility in video: a user's ability to recognize an object in a video stream under various conditions. The test sought to measure the effects on visual intelligibility of three scene parameters (target size, scene motion, scene lighting), several compression rates, and two resolutions (VGA (640x480) and CIF (352x288)). Seven similarly sized objects were used as targets in nine sets of near-identical source scenes, where each set was created using a different combination of the parameters under study. Viewers were asked to identify the objects via multiple-choice questions. Objective measurements were performed on each of the scenes, and the ability of the measurements to predict visual intelligibility was studied.

  19. How context information and target information guide the eyes from the first epoch of search in real-world scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2014-02-11

    This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

  20. Scene analysis in the natural environment

    Directory of Open Access Journals (Sweden)

    Michael S Lewicki

    2014-04-01

    Full Text Available The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to a number of important insights into problems of scene analysis, but not all of these insights are widely appreciated. Despite this progress, there are also critical shortcomings in current approaches that hinder further progress. Here we take the view that scene analysis is a universal problem solved by all animals, and that we can gain new insight by studying the problems that animals face in complex natural environments. In particular, the jumping spider, songbird, echolocating bat, and electric fish all exhibit behaviors that require robust solutions to scene analysis problems encountered in the natural environment. By examining the behaviors of these seemingly disparate animals, we emerge with a framework for studying scene analysis comprising four essential properties: (1) the ability to solve ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress towards behavioral goals.

  1. Maxwellian Eye Fixation during Natural Scene Perception

    Science.gov (United States)

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, being lower in experts than in novice participants. In Experiment 2, two participants underwent fixed-time free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or bottom-up processes. PMID:23226987
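    The Maxwell fit reported above can be illustrated with a short sketch. This is not the authors' analysis code; the density formula, the maximum-likelihood scale estimator, and the example scale value 0.8 are assumptions taken from the standard Maxwell-Boltzmann distribution, in which a Maxwell variate is the norm of a three-dimensional Gaussian vector:

```python
import math
import random

def maxwell_pdf(x, a):
    """Maxwell-Boltzmann probability density with scale parameter a."""
    return math.sqrt(2.0 / math.pi) * x**2 * math.exp(-x**2 / (2 * a**2)) / a**3

def fit_maxwell_scale(samples):
    """Maximum-likelihood estimate of the scale: a^2 = mean(x^2) / 3."""
    return math.sqrt(sum(x * x for x in samples) / (3 * len(samples)))

def draw_maxwell(a, rng):
    """A Maxwell variate is the norm of a 3D Gaussian vector with std a."""
    return math.sqrt(sum(rng.gauss(0.0, a) ** 2 for _ in range(3)))

rng = random.Random(42)
amplitudes = [draw_maxwell(0.8, rng) for _ in range(20000)]
print(round(fit_maxwell_scale(amplitudes), 2))  # close to the true scale 0.8
```

    Fitting `fit_maxwell_scale` to measured fixational amplitudes and comparing the histogram against `maxwell_pdf` is the kind of check the study describes.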

  2. Maxwellian Eye Fixation during Natural Scene Perception

    Directory of Open Access Journals (Sweden)

    Jean Duchesne

    2012-01-01

    Full Text Available When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, being lower in experts than in novice participants. In Experiment 2, two participants underwent fixed-time free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or bottom-up processes.

  3. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, correlated topic vector, to model such semantic correlations. Oriented from the correlated topic model, correlated topic vector intends to naturally utilize the correlations among topics, which are seldom considered in the conventional feature encoding, e.g., Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, correlated topic vector inherits the advantages of Fisher vector. The contributions to the topics of visual words have been further employed by incorporating the Fisher kernel framework to indicate the differences among scenes. Combined with the deep convolutional neural network (CNN) features and Gibbs sampling solution, correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that correlated topic vector improves significantly the deep CNN features, and outperforms existing Fisher kernel-based features.

  4. Navigating the auditory scene: an expert role for the hippocampus.

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C Rebecca; Moore, Brian C J; Capleton, Brian; Griffiths, Timothy D

    2012-08-29

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on the following: first, selective listening to beats within frequency windows, and, second, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in gray matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of gray matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with gray matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound "templates" are encoded and consolidated into memory over time in an experience-dependent manner.

  5. Navigating the auditory scene: an expert role for the hippocampus

    Science.gov (United States)

    Teki, Sundeep; Kumar, Sukhbinder; von Kriegstein, Katharina; Stewart, Lauren; Lyness, C. Rebecca; Moore, Brian C. J.; Capleton, Brian; Griffiths, Timothy D.

    2012-01-01

    Over a typical career piano tuners spend tens of thousands of hours exploring a specialized acoustic environment. Tuning requires accurate perception and adjustment of beats in two-note chords that serve as a navigational device to move between points in previously learned acoustic scenes. It is a two-stage process that depends on: firstly, selective listening to beats within frequency windows and, secondly, the subsequent use of those beats to navigate through a complex soundscape. The neuroanatomical substrates underlying brain specialization for such fundamental organization of sound scenes are unknown. Here, we demonstrate that professional piano tuners are significantly better than controls matched for age and musical ability on a psychophysical task simulating active listening to beats within frequency windows that is based on amplitude modulation rate discrimination. Tuners show a categorical increase in grey matter volume in the right frontal operculum and right superior temporal lobe. Tuners also show a striking enhancement of grey matter volume in the anterior hippocampus, parahippocampal gyrus, and superior temporal gyrus, and an increase in white matter volume in the posterior hippocampus as a function of years of tuning experience. The relationship with grey matter volume is sensitive to years of tuning experience and starting age but not actual age or level of musicality. Our findings support a role for a core set of regions in the hippocampus and superior temporal cortex in skilled exploration of complex sound scenes in which precise sound ‘templates’ are encoded and consolidated into memory over time in an experience-dependent manner. PMID:22933806

  6. Fractal-like image statistics in visual art: similarity to natural scenes.

    Science.gov (United States)

    Redies, Christoph; Hasenstein, Jens; Denzler, Joachim

    2007-01-01

    Both natural scenes and visual art are often perceived as esthetically pleasing. It is therefore conceivable that the two types of visual stimuli share statistical properties. For example, natural scenes display a Fourier power spectrum that tends to fall with spatial frequency according to a power law. This result indicates that natural scenes have fractal-like, scale-invariant properties. In the present study, we asked whether visual art displays similar statistical properties by measuring their Fourier power spectra. Our analysis was restricted to graphic art from the Western hemisphere. For comparison, we also analyzed images that generally display relatively low or no esthetic quality (household and laboratory objects, parts of plants, and scientific illustrations). Graphic art, but not the other image categories, resembles natural scenes in showing fractal-like, scale-invariant statistics. This property is universal in our sample of graphic art; it is independent of cultural variables, such as century and country of origin, techniques used or subject matter. We speculate that both graphic art and natural scenes share statistical properties because visual art is adapted to the structure of the visual system which, in turn, is adapted to process optimally the image statistics of natural scenes.
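    The power-law claim above amounts to a straight line in log-log coordinates: if power ~ 1/f^α, then log(power) = -α·log(f) + c, and the exponent can be read off as the negative of a least-squares slope. A minimal sketch, assuming the radially averaged spectrum has already been computed from the image (the spectrum below is synthetic, with α = 2 by construction):

```python
import math

def loglog_slope(freqs, power):
    """Least-squares slope of log(power) versus log(frequency).

    For natural scenes the radially averaged Fourier power spectrum
    roughly follows power ~ 1/f^alpha with alpha near 2, so the
    fitted slope comes out around -2.
    """
    xs = [math.log(f) for f in freqs]
    ys = [math.log(p) for p in power]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# Hypothetical radially averaged spectrum obeying power ~ f^-2 exactly:
freqs = list(range(1, 129))
power = [1.0 / f**2 for f in freqs]
print(round(loglog_slope(freqs, power), 3))  # → -2.0
```

    On real images the spectrum would first be obtained with a 2D FFT and averaged over annuli of constant spatial frequency before the fit.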

  7. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education

    Directory of Open Access Journals (Sweden)

    João Mauricio Castaldelli-Maia

    Full Text Available CONTEXT AND OBJECTIVES: Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). DESIGN AND SETTING: Qualitative study at two universities in the state of São Paulo. METHODS: Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests; scenes scored > 8 by the judges were defined as "quality scenes". RESULTS: Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. CONCLUSIONS: We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  8. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes the 2D image recognition and 3D scene reconstruction objectives simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  9. Acute stress influences the discrimination of complex scenes and complex faces in young healthy men.

    Science.gov (United States)

    Paul, M; Lech, R K; Scheil, J; Dierolf, A M; Suchan, B; Wolf, O T

    2016-04-01

    The stress-induced release of glucocorticoids has been demonstrated to influence hippocampal functions via the modulation of specific receptors. At the behavioral level, stress is known to influence hippocampus-dependent long-term memory. In recent years, studies have consistently associated the hippocampus with the non-mnemonic perception of scenes, while adjacent regions in the medial temporal lobe were associated with the perception of objects and faces. So far it is not known whether and how stress influences non-mnemonic perceptual processes. In a behavioral study, fifty male participants were subjected either to the stressful socially evaluated cold-pressor test or to a non-stressful control procedure before they completed a visual discrimination task comprising scenes and faces. The complexity of the face and scene stimuli was manipulated in easy and difficult conditions. A significant three-way interaction between stress, stimulus type and complexity was found. Stressed participants tended to commit more errors in the complex scenes condition. For complex faces, a descriptive tendency in the opposite direction (fewer errors under stress) was observed. As a result, the difference between the number of errors for scenes and errors for faces was significantly larger in the stress group. These results indicate that, beyond the effects of stress on long-term memory, stress influences the discrimination of spatial information, especially when the perception is characterized by a high complexity. Copyright © 2016 Elsevier Ltd. All rights reserved.

  10. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    Science.gov (United States)

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of the adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally, with perceptual stimulus differences controlled, while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. Binary Format for Scene (BIFS): combining MPEG-4 media to build rich multimedia services

    Science.gov (United States)

    Signes, Julien

    1998-12-01

    In this paper, we analyze the design concepts and some technical details behind the MPEG-4 standard, particularly the scene description layer, commonly known as the Binary Format for Scene (BIFS). We show how MPEG-4 may ease multimedia proliferation by offering a unique, optimized multimedia platform. Lastly, we analyze the potential of the technology for creating rich multimedia applications on various networks and platforms. An e-commerce application example is detailed, highlighting the benefits of the technology. Compression results show how rich applications may be built even on very low bit rate connections.

  12. Eye tracking to evaluate evidence recognition in crime scene investigations.

    Science.gov (United States)

    Watalingam, Renuka Devi; Richetelli, Nicole; Pelz, Jeff B; Speir, Jacqueline A

    2017-11-01

    Crime scene analysts are the core of criminal investigations; decisions made at the scene greatly affect the speed of analysis and the quality of conclusions, thereby directly impacting the successful resolution of a case. If an examiner fails to recognize the pertinence of an item on scene, the analyst's theory regarding the crime will be limited. Conversely, unselective evidence collection will most likely include irrelevant material, thus increasing a forensic laboratory's backlog and potentially sending the investigation into an unproductive and costly direction. Therefore, it is critical that analysts recognize and properly evaluate forensic evidence that can assess the relative support of differing hypotheses related to event reconstruction. With this in mind, the aim of this study was to determine if quantitative eye tracking data and qualitative reconstruction accuracy could be used to distinguish investigator expertise. In order to assess this, 32 participants were successfully recruited and categorized as experts or trained novices based on their practical experiences and educational backgrounds. Each volunteer then processed a mock crime scene while wearing a mobile eye tracker, wherein visual fixations, durations, search patterns, and reconstruction accuracy were evaluated. The eye tracking data (dwell time and task percentage on areas of interest or AOIs) were compared using Earth Mover's Distance (EMD) and the Needleman-Wunsch (N-W) algorithm, revealing significant group differences for both search duration (EMD), as well as search sequence (N-W). More specifically, experts exhibited greater dissimilarity in search duration, but greater similarity in search sequences than their novice counterparts. In addition to the quantitative visual assessment of examiner variability, each participant's reconstruction skill was assessed using a 22-point binary scoring system, in which significant group differences were detected as a function of total
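    The Needleman-Wunsch comparison of search sequences described above can be sketched as follows, with each scanpath encoded as a string of AOI labels. This is an illustrative reconstruction, not the study's code; the scoring values (match +1, mismatch -1, gap -1) and the example scanpaths are assumptions:

```python
def needleman_wunsch(seq_a, seq_b, match=1, mismatch=-1, gap=-1):
    """Global alignment score of two AOI-label sequences via dynamic programming."""
    rows, cols = len(seq_a) + 1, len(seq_b) + 1
    score = [[0] * cols for _ in range(rows)]
    # Aligning against an empty sequence costs one gap per symbol.
    for i in range(1, rows):
        score[i][0] = i * gap
    for j in range(1, cols):
        score[0][j] = j * gap
    # Each cell keeps the best of: diagonal (match/mismatch) or a gap in either sequence.
    for i in range(1, rows):
        for j in range(1, cols):
            diag = score[i - 1][j - 1] + (match if seq_a[i - 1] == seq_b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[-1][-1]

# Two hypothetical scanpaths over areas of interest labeled A-E:
print(needleman_wunsch("ABCDE", "ABDE"))  # → 3
```

    A higher score indicates a more similar fixation order; expert examiners in the study showed greater sequence similarity to one another than novices did.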

  13. Analyzing near-infrared images for utility assessment

    Science.gov (United States)

    Salamati, Neda; Sadeghipoor, Zahra; Süsstrunk, Sabine

    2011-03-01

    Visual cognition is of significant importance in certain imaging applications, such as security and surveillance. In these applications, an important issue is to determine the cognition threshold, which is the maximum distortion level that can be applied to images while still ensuring that enough information is conveyed to recognize the scene. The cognition task is usually studied with images that represent the scene in the visible part of the spectrum. In this paper, our goal is to evaluate the usefulness of another scene representation. To this end, we study the performance of near-infrared (NIR) images in cognition. Since surface reflection in the NIR part of the spectrum is material dependent, an object made of a specific material is more likely to have a uniform response in NIR images. Consequently, edges in NIR images are likely to correspond to the physical boundaries of objects, which are considered to be the most useful information for cognition. This feature of NIR images leads to the hypothesis that NIR is better than a visible scene representation for cognition tasks. To test this hypothesis, we compared the cognition thresholds of NIR and visible images by performing a subjective study on 11 scenes. The images were compressed with different compression factors using JPEG2000 compression. The results of this subjective test show that recognizing 8 of the 11 scenes is significantly easier with the NIR images than with their visible counterparts.

  14. 3-D Scene Reconstruction from Aerial Imagery

    Science.gov (United States)

    2012-03-01

    navigate along interstate routes at speeds in excess of 110 mph, and the inclusion of the first down line in televised football games [28]. These...roughly 2000 feet above the target based on the Sadr City scene dimensions and scaling factors. Images were rendered at a resolution of 1000×1000 as

  15. India's Arrival on the Modern Mathematical Scene

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education; Volume 17, Issue 9, September. India's Arrival on the Modern Mathematical Scene. S G Dani. General Article. Author affiliation: S G Dani, Department of Mathematics, Indian Institute of Technology Bombay, Powai, Mumbai 400 076, India.

  16. OpenSceneGraph 3 Cookbook

    CERN Document Server

    Wang, Rui

    2012-01-01

    This is a cookbook full of recipes with practical examples enriched with code and the required screenshots for easy and quick comprehension. You should be familiar with the basic concepts of the OpenSceneGraph API and should be able to write simple programs. Some OpenGL and math knowledge will help a lot, too.

  17. Behind the scenes at the LHC inauguration

    CERN Multimedia

    2008-01-01

    On 21 October the LHC inauguration ceremony will take place and people from all over CERN have been busy preparing. With delegations from 38 countries attending, including ministers and heads of state, the Bulletin has gone behind the scenes to see what it takes to put together an event of this scale.

  18. The light field in natural scenes

    NARCIS (Netherlands)

    Muryy, A.A.

    2009-01-01

    This thesis focuses on the properties of light fields with respect to object appearance. More specifically, our interest was mainly directed to the structure and spatial variation of light fields in natural scenes. We approached the structure of light fields by means of spherical harmonics which

  19. Auditory and visual scene analysis : an overview

    NARCIS (Netherlands)

    Kondo, Hirohito M; van Loon, Anouk M; Kawahara, Jun-Ichiro; Moore, Brian C J

    2017-01-01

    We perceive the world as stable and composed of discrete objects even though auditory and visual inputs are often ambiguous owing to spatial and temporal occluders and changes in the conditions of observation. This raises important questions regarding where and how 'scene analysis' is performed in

  20. The primal scene and symbol formation.

    Science.gov (United States)

    Niedecken, Dietmut

    2016-06-01

    This article discusses the meaning of the primal scene for symbol formation by exploring the way it is processed in a child's play. The author questions the notion that a sadomasochistic way of processing is the only possible one, and presents a model of an alternative mode of processing. It is suggested that both ways of processing intertwine in the "fabric of life" (D. Laub). Two clinical vignettes, one from an analytic child psychotherapy and the other from the analysis of a 30-year-old female patient, illustrate how the primal scene is played out in the form of a terzet. The author explores whether the sadomasochistic way of processing actually precedes the "primal scene as a terzet", and discusses whether it could even be regarded as a precondition for the formation of the latter or, alternatively, whether the "combined parent-figure" gives rise to these ways of processing. The question is left open. Finally, it is shown how the two modes of experiencing the primal scene underlie discursive and presentative symbol formation, respectively. Copyright © 2015 Institute of Psychoanalysis.

  1. Scene independent real-time indirect illumination

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    A novel method for real-time simulation of indirect illumination is presented in this paper. The method, which we call Direct Radiance Mapping (DRM), is based on basal radiance calculations and does not impose any restrictions on scene geometry or dynamics. This makes the method tractable for rea...

  2. Age-related changes in visual exploratory behavior in a natural scene setting.

    Science.gov (United States)

    Hamel, Johanna; De Beukelaer, Sophie; Kraft, Antje; Ohl, Sven; Audebert, Heinrich J; Brandt, Stephan A

    2013-01-01

    Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age, and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  3. Age-related changes in visual exploratory behavior in a natural scene setting

    Directory of Open Access Journals (Sweden)

    Johanna eHamel

    2013-06-01

    Full Text Available Diverse cognitive functions decline with increasing age, including the ability to process central and peripheral visual information in a laboratory testing situation (useful visual field of view). To investigate whether and how this influences activities of daily life, we studied age-related changes in visual exploratory behavior in a natural scene setting: a driving simulator paradigm of variable complexity was tested in subjects of varying ages with simultaneous eye- and head-movement recordings via a head-mounted camera. Detection and reaction times were also measured by visual fixation and manual reaction. We considered video computer game experience as a possible influence on performance. Data of 73 participants of varying ages were analyzed, driving two different courses. We analyzed the influence of route difficulty level, age and eccentricity of test stimuli on oculomotor and driving behavior parameters. No significant age effects were found regarding saccadic parameters. In the older subjects, head-movements increasingly contributed to gaze amplitude. More demanding courses and more peripheral stimuli locations induced longer reaction times in all age groups. Deterioration of the functionally useful visual field of view with increasing age was not suggested in our study group. However, video game-experienced subjects revealed larger saccade amplitudes and a broader distribution of fixations on the screen. They reacted faster to peripheral objects, suggesting the notion of a general detection task rather than perceiving driving as a central task. As the video game-experienced population consisted of younger subjects, our study indicates that effects due to video game experience can easily be misinterpreted as age effects if not accounted for. We therefore view it as essential to consider video game experience in all testing methods using virtual media.

  4. Infrared image synthesis for railway scene

    Science.gov (United States)

    Jiang, Zhaoyi; Wang, Xun; Ling, Yun

    2009-10-01

Imaging guidance and machine vision in the infrared (IR) for military targets on land backgrounds are an extensively studied subject. The railway scene, as an important piece of traffic infrastructure, usually plays a decisive role in land wars. Real IR images are expensive to obtain, and many researchers cannot afford them. The solution to this problem is to generate realistic images under various conditions by computer simulation. In this paper, a physics-based model of infrared image synthesis for railway scenes is proposed. A method for generating a thermal image of a railway scene as obtained by a forward-looking infrared (FLIR) sensor is described here. It consists of an integrated process based on a thermal model of the railway, atmospheric transmission and IR sensor effects. First, a simple numerical model is proposed to calculate the temperature distribution of the railway scene based on theoretical analysis of heat transfer. The typical structure of a railway can be divided into two main components. To simplify the thermal model, the two parts of the railway are processed independently. The temperature distribution of the track cross-section is considered in detail, while conduction along the track line is neglected. The ballast base is discretized into multiple one-dimensional layers. We then focus on the emissivity of the steel track, which is a dominant factor in railway IR simulation. The value of the emissivity is mainly determined by the surface condition of the track. The infrared radiation from the track surface is calculated by the Stefan-Boltzmann law. Some synthesized images of railway scenes in atmospheric windows are shown finally. The generated images of the railway are in good accordance with real images.
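The radiation step the abstract describes reduces to the Stefan-Boltzmann law, M = εσT⁴. A minimal sketch with assumed temperatures and emissivities (the paper's actual values are not given in the record):

```python
import math

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def radiant_exitance(temp_k: float, emissivity: float) -> float:
    """Grey-body radiant exitance M = eps * sigma * T^4, in W/m^2."""
    return emissivity * SIGMA * temp_k ** 4

# Assumed, illustrative values only (not from the paper):
rail = radiant_exitance(temp_k=310.0, emissivity=0.60)     # sun-heated steel rail
ballast = radiant_exitance(temp_k=300.0, emissivity=0.90)  # cooler crushed-stone ballast
```

With these assumed numbers the cooler ballast (about 413 W/m², ε = 0.9) out-radiates the warmer rail (about 314 W/m², ε = 0.6), which illustrates why the abstract singles out track emissivity as a dominant factor.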

  5. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been placed on land-use scene classification. However, the task is difficult with HRS images because of their complex backgrounds and multiple land-cover classes or objects. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize local features at different scales. The learnt multiscale deep features are then explored to generate visual words. The spatial arrangement of visual words is captured through adaptive vector quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.
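The correlogram idea can be illustrated in miniature: given a 2-D map of quantized visual-word labels, count word co-occurrences at a fixed spatial offset. This is a simplified, unnormalised sketch, not the paper's adaptive vector quantized correlatons:

```python
import numpy as np

def word_correlogram(labels: np.ndarray, n_words: int, d: int) -> np.ndarray:
    """Unnormalised co-occurrence counts C[i, j] of visual words i and j at
    horizontal or vertical offset d over a 2-D label map."""
    C = np.zeros((n_words, n_words), dtype=np.int64)
    a, b = labels[:, :-d].ravel(), labels[:, d:].ravel()  # horizontal pairs
    np.add.at(C, (a, b), 1)
    a, b = labels[:-d, :].ravel(), labels[d:, :].ravel()  # vertical pairs
    np.add.at(C, (a, b), 1)
    return C

rng = np.random.default_rng(0)
labels = rng.integers(0, 3, size=(8, 8))  # toy 8x8 map with 3 visual words
C = word_correlogram(labels, n_words=3, d=1)
```

Stacking such matrices over several offsets d (and over multiscale vocabularies) gives the kind of spatial-arrangement descriptor the abstract refers to.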

  6. Study on general design of dual-DMD based infrared two-band scene simulation system

    Science.gov (United States)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

The mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation system is a kind of testing equipment used for infrared two-band imaging seekers. It must not only cover the working wavebands, but also satisfy the essential requirement that its infrared radiation characteristics correspond to the real scene. Past single digital micromirror device (DMD) based infrared scene simulation systems did not take the huge difference between target and background radiation into account, and could not modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately express the thermal scene model built by the upper computer, and is not very practical. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact-structure infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed as well. We also derive the equation for the signal-to-noise ratio of the infrared detector in the seeker, which directs the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Following the working principle and overall design, we summarize the key techniques in the system.
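The record does not reproduce the seeker detector's signal-to-noise equation, but a generic textbook form based on specific detectivity D* can be sketched, where NEP = sqrt(A_d · Δf)/D* and SNR is the incident signal power over the NEP. All numbers below are assumed, illustrative values, not the paper's:

```python
import math

def detector_snr(flux_w: float, d_star: float, area_cm2: float, bandwidth_hz: float) -> float:
    """Generic IR detector SNR: incident in-band signal power divided by the
    noise-equivalent power, NEP = sqrt(A_d * delta_f) / D*."""
    nep = math.sqrt(area_cm2 * bandwidth_hz) / d_star
    return flux_w / nep

# assumed numbers: 10 nW of in-band flux on a 1 mm^2 detector,
# D* = 1e10 cm*sqrt(Hz)/W, 100 Hz electrical bandwidth
snr = detector_snr(flux_w=1e-8, d_star=1e10, area_cm2=0.01, bandwidth_hz=100.0)
```

Equations of this shape let a designer trade off projected scene radiance, detector size and bandwidth, which is presumably the role the SNR equation plays in the overall design described above.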

  7. Intelligence-led crime scene processing. Part II: Intelligence and crime scene examination.

    Science.gov (United States)

    Ribaux, Olivier; Baylon, Amélie; Lock, Eric; Delémont, Olivier; Roux, Claude; Zingg, Christian; Margot, Pierre

    2010-06-15

A better integration of the information conveyed by traces within an intelligence-led framework would allow forensic science to contribute more intensively to security assessments through forensic intelligence (part I). In this view, the collection of data by examining crime scenes is an integral part of intelligence processes. This conception frames our proposal for a model that promotes better use of the knowledge available in the organisation for driving and supporting crime scene examination. The suggested model also clarifies the uncomfortable situation of crime scene examiners, who must simultaneously comply with justice needs and expectations and serve organisations that are mostly driven by broader security objectives. It also opens new perspectives for forensic science and crime scene investigation, by proposing to follow directions other than the traditional path suggested by dominant movements in these fields. (c) 2010 Elsevier Ireland Ltd. All rights reserved.

  8. Intelligence-led crime scene processing. Part II : Intelligence and crime scene examination.

    OpenAIRE

    Ribaux, O.; Baylon, A.; Lock, E.; Delémont, O.; Roux, C.; Zingg, C.; Margot, P.

    2010-01-01

A better integration of the information conveyed by traces within an intelligence-led framework would allow forensic science to contribute more intensively to security assessments through forensic intelligence (part I). In this view, the collection of data by examining crime scenes is an integral part of intelligence processes. This conception frames our proposal for a model that promotes better use of the knowledge available in the organisation for driving and supporting crime scene examinatio...

  9. Computational Auditory Scene Analysis Based Perceptual and Neural Principles

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2004-01-01

.... This fundamental process of auditory perception is called auditory scene analysis. Of particular importance in auditory scene analysis is the separation of speech from interfering sounds, or speech segregation...

  10. The time course of natural scene perception with reduced attention

    NARCIS (Netherlands)

    Groen, I.I.A.; Ghebreab, S.; Lamme, V.A.F.; Scholte, H.S.

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the

  11. Relaxation with Immersive Natural Scenes Presented Using Virtual Reality.

    Science.gov (United States)

    Anderson, Allison P; Mayer, Michael D; Fellows, Abigail M; Cowan, Devin R; Hegel, Mark T; Buckey, Jay C

    2017-06-01

Virtual reality (VR) can provide exposure to nature for those living in isolated confined environments. We evaluated VR-presented natural settings for reducing stress and improving mood. There were 18 participants (9 men, 9 women), ages 32 ± 12 yr, who viewed three 15-min 360° scenes (an indoor control, rural Ireland, and remote beaches). Subjects were mentally stressed with arithmetic before scenes. Electrodermal activity (EDA) and heart rate variability measured psycho-physiological arousal. The Positive and Negative Affect Schedule and the 15-question Modified Reality Judgment and Presence Questionnaire (MRJPQ) measured mood and scene quality. Reductions in EDA from baseline were greater at the end of the natural scenes compared to the control scene (-0.59, -0.52, and 0.32 μS, respectively). The natural scenes reduced negative affect from baseline (-1.2 and -1.1 points), but the control scene did not (-0.4 points). MRJPQ scores for the control scene were lower than for both natural scenes (4.9, 6.7, and 6.5 points, respectively). Within the two natural scenes, the preferred scene reduced negative affect (-2.4 points) more than the second-choice scene (-1.8 points) and scored higher on the MRJPQ (6.8 vs. 6.4 points). Natural scene VR provided relaxation both objectively and subjectively, and scene preference had a significant effect on mood and perception of scene quality. VR may enable relaxation for people living in isolated confined environments, particularly when matched to personal preferences. Anderson AP, Mayer MD, Fellows AM, Cowan DR, Hegel MT, Buckey JC. Relaxation with immersive natural scenes presented using virtual reality. Aerosp Med Hum Perform. 2017; 88(6):520-526.

  12. The effect of distraction on change detection in crowded acoustic scenes.

    Science.gov (United States)

    Petsas, Theofilos; Harrison, Jemma; Kashino, Makio; Furukawa, Shigeto; Chait, Maria

    2016-11-01

In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli are artificial acoustic 'scenes' composed of several (up to twelve) concurrent tone-pip streams ('sources'). A gap (1000 ms) is inserted partway through the 'scene'; changes, in the form of the appearance of a new source or the disappearance of an existing source, occur after the gap in 50% of the trials. Listeners were instructed to monitor the unfolding 'soundscapes' for these events. Distraction was measured by presenting distractor stimuli during the gap. Experiment 1 used a dual-task design where listeners were required to perform a task with varying attentional demands ('High Demand' vs. 'Low Demand') on brief auditory (Experiment 1a) or visual (Experiment 1b) signals presented during the gap. Experiments 2 and 3 required participants to ignore distractor sounds and focus on the change detection task. Our results demonstrate that the maintenance of scene information in short-term memory is influenced by the availability of attentional and/or processing resources during the gap, and that this dependence appears to be modality specific. We also show that these processes are susceptible to bottom-up driven distraction even in situations where the distractors are not novel, but occur on each trial. Change detection performance is systematically linked with the independently determined perceptual salience of the distractor sound. The findings also demonstrate that the present task may be a useful objective means for determining relative perceptual salience. Copyright © 2016 The Authors. Published by Elsevier B.V. All rights reserved.

  13. Mission Driven Scene Understanding: Dynamic Environments

    Science.gov (United States)

    2016-06-01

Only fragments of this record's abstract survive. They cite Siva P, Russell C, Xiang T, Agapito L, "Looking beyond the image: unsupervised learning for scene understanding" (International Society for Optical Engineering), report an error rate of 0.0818 against a training error rate of 0.0273, meaning the CNN had "learned" to assign the correct class label to an image, and note that analyzing images under dynamic environmental conditions (e.g., changing illumination, precipitation, and vegetation) is necessary for scene understanding.

  14. Gay and Lesbian Scene in Metelkova

    Directory of Open Access Journals (Sweden)

    Nataša Velikonja

    2013-09-01

    Full Text Available The article deals with the development of the gay and lesbian scene in ACC Metelkova, while specifying the preliminary aspects of establishing and building gay and lesbian activism associated with spatial issues. The struggle for space or occupying public space is vital for the gay and lesbian scene, as it provides not only the necessary socializing opportunities for gays and lesbians, but also does away with the historical hiding of homosexuality in the closet, in seclusion and silence. Because of their autonomy and long-term, continuous existence, homo-clubs at Metelkova contributed to the consolidation of the gay and lesbian scene in Slovenia and significantly improved the opportunities for cultural, social and political expression of gays and lesbians. Such a synthesis of the cultural, social and political, further intensified in Metelkova, and characterizes the gay and lesbian community in Slovenia from the very outset of gay and lesbian activism in 1984. It is this long-term synthesis that keeps this community in Slovenia so vital and politically resilient.

  15. Clandestine laboratory scene investigation and processing using portable GC/MS

    Science.gov (United States)

    Matejczyk, Raymond J.

    1997-02-01

This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS serve to positively establish the identity of starting materials, chemicals and end-product collected from clandestine laboratories. Concerns for public and investigator safety and for the environment are also important factors favoring rapid on-scene data generation. Here we describe the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With its capability to separate compounds and produce a 'chemical fingerprint', GC/MS is an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist forensic investigators in collecting only pertinent evidence, thereby reducing the amount of evidence to be transported, reducing chain-of-custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.

  16. Object detection in natural scenes: Independent effects of spatial and category-based attention.

    Science.gov (United States)

    Stein, Timo; Peelen, Marius V

    2017-04-01

    Humans are remarkably efficient in detecting highly familiar object categories in natural scenes, with evidence suggesting that such object detection can be performed in the (near) absence of attention. Here we systematically explored the influences of both spatial attention and category-based attention on the accuracy of object detection in natural scenes. Manipulating both types of attention additionally allowed for addressing how these factors interact: whether the requirement for spatial attention depends on the extent to which observers are prepared to detect a specific object category-that is, on category-based attention. The results showed that the detection of targets from one category (animals or vehicles) was better than the detection of targets from two categories (animals and vehicles), demonstrating the beneficial effect of category-based attention. This effect did not depend on the semantic congruency of the target object and the background scene, indicating that observers attended to visual features diagnostic of the foreground target objects from the cued category. Importantly, in three experiments the detection of objects in scenes presented in the periphery was significantly impaired when observers simultaneously performed an attentionally demanding task at fixation, showing that spatial attention affects natural scene perception. In all experiments, the effects of category-based attention and spatial attention on object detection performance were additive rather than interactive. Finally, neither spatial nor category-based attention influenced metacognitive ability for object detection performance. These findings demonstrate that efficient object detection in natural scenes is independently facilitated by spatial and category-based attention.

  17. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and the complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC method, to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and the binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
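The FBC pipeline described above (convolve with learned filters, binarise the feature maps, hash the bits into an integer-valued map, histogram) can be sketched in a few lines. Random zero-mean patches stand in here for the unsupervised-learned filters, so this is a structural sketch rather than the paper's implementation:

```python
import numpy as np

def fbc_descriptor(image: np.ndarray, filters: np.ndarray, bins: int) -> np.ndarray:
    """Sketch of fast binary coding: correlate the scene with a filter bank,
    binarise each feature map at zero, pack the per-pixel bits into an
    integer-valued map (the hashing step), then histogram that map as the
    global scene representation."""
    k = filters.shape[1]
    # all k-by-k patches, giving 'valid' responses of shape (H-k+1, W-k+1)
    windows = np.lib.stride_tricks.sliding_window_view(image, (k, k))
    codes = np.zeros(windows.shape[:2], dtype=np.int64)
    for bit, f in enumerate(filters):
        response = np.einsum('ijkl,kl->ij', windows, f)  # filter response map
        codes |= (response > 0).astype(np.int64) << bit  # binarise and pack
    hist, _ = np.histogram(codes, bins=bins, range=(0, bins))
    return hist / hist.sum()

rng = np.random.default_rng(42)
# random zero-mean patches as stand-ins for the learned "filters"
filters = rng.standard_normal((8, 5, 5))
filters -= filters.mean(axis=(1, 2), keepdims=True)
image = rng.standard_normal((64, 64))
desc = fbc_descriptor(image, filters, bins=256)  # 8 bits -> 256 possible codes
```

Because the heavy work is a handful of convolutions plus bit operations, the descriptor is cheap to compute, which matches the speed claim in the abstract.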

  18. Peripersonal versus extrapersonal visual scene information for egocentric direction and position perception.

    Science.gov (United States)

    Nakashima, Ryoichi; Kumada, Takatsune

    2017-03-22

    When perceiving the visual environment, people simultaneously perceive their own direction and position in the environment (i.e., egocentric spatial perception). This study investigated what visual information in a scene is necessary for egocentric spatial perceptions. In two perception tasks (the egocentric direction and position perception tasks), observers viewed two static road images presented sequentially. In Experiment 1, the critical manipulation involved an occluded region in the road image; an extrapersonal region (far-occlusion) and a peripersonal region (near-occlusion). Egocentric direction perception was the poorer in the far-occlusion condition than in the no-occlusion condition, and egocentric position perceptions were poorer in the far- and near-occlusion conditions than in the no-occlusion condition. In Experiment 2, we conducted the same tasks manipulating the observers' gaze location in a scene; an extrapersonal region (far-gaze), a peripersonal region (near-gaze) and the intermediate region between the former two (middle-gaze). Egocentric direction perception performance was the best in the far-gaze condition, and egocentric position perception performances were not different among gaze location conditions. These results suggest that egocentric direction perception is based on fine visual information about the extrapersonal region in a road landscape, and egocentric position perception is based on information about the entire visual scene.

  19. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, deep convolutional neural networks (CNNs), which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS) scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
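The second scenario (encoding dense last-convolutional-layer features into one global image feature through feature coding) can be illustrated with the simplest coding strategy, hard-assignment bag-of-words. The codebook below is random, standing in for one learned by k-means on training features, and the "dense features" are synthetic:

```python
import numpy as np

def bow_encode(dense_features: np.ndarray, codebook: np.ndarray) -> np.ndarray:
    """Hard-assignment bag-of-words coding: map each dense feature to its
    nearest codeword and histogram the assignments into one global vector."""
    # squared Euclidean distance from every feature to every codeword
    d2 = ((dense_features[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

rng = np.random.default_rng(1)
feats = rng.standard_normal((14 * 14, 32))  # e.g. a 14x14 conv map of 32-dim features
codebook = rng.standard_normal((100, 32))   # assumed 100-word codebook
rep = bow_encode(feats, codebook)           # global image representation
```

In the paper's setting the same encoding is applied to features extracted at multiple scales, and the resulting vectors feed a linear classifier.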

  20. Multiscale analysis of depth images from natural scenes: Scaling in the depth of the woods

    International Nuclear Information System (INIS)

    Chéné, Yann; Belin, Étienne; Rousseau, David; Chapeau-Blondeau, François

    2013-01-01

We analyze an ensemble of images from outdoor natural scenes, consisting of pairs of a standard gray-level luminance image and a depth image of the same scene, delivered by a recently introduced low-cost sensor for joint imaging of depth and luminance. We focus especially on statistical analysis of multiscale and fractal properties in the natural images. Two methodologies are implemented for this purpose: examining the distribution of contrast upon coarse-graining at increasing scales, and examining the orientationally averaged power spectrum as a function of spatial frequency. Both methodologies confirm, on another independent dataset here, the presence of fractal scale invariance in the luminance natural images, as previously reported. Both methodologies here also reveal the presence of fractal scale invariance in the novel data formed by depth images from natural scenes. The multiscale analysis is compared across the luminance images and the novel depth images, together with an analysis of their statistical correlation. The results, especially the new results on the multiscale analysis of depth images, consolidate the importance and extend the multiplicity of aspects of self-similarity and fractal scale invariance properties observable in the constitution of images from natural scenes. Such results are useful for better understanding and modeling of the (multiscale) structure of images from natural scenes, with relevance to image processing algorithms and to visual perception. The approach also contains potentialities for the fractal characterization of three-dimensional natural structures and their interaction with light
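The power-spectrum methodology can be sketched as follows: compute |FFT|², average it over annuli of equal spatial frequency (the orientational average), and fit a power-law slope in log-log coordinates. A white-noise image stands in for the luminance/depth data here, so the fitted slope comes out near 0 rather than the slope of roughly -2 typically reported for natural scenes:

```python
import numpy as np

def radial_power_spectrum(img: np.ndarray) -> np.ndarray:
    """Orientationally averaged power spectrum: average |FFT|^2 over annuli
    of (integer) spatial-frequency radius."""
    f = np.fft.fftshift(np.fft.fft2(img - img.mean()))
    power = np.abs(f) ** 2
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2).astype(int)
    radial = np.bincount(r.ravel(), weights=power.ravel()) / np.bincount(r.ravel())
    return radial[1 : h // 2]  # drop DC, keep radii below the Nyquist ring

rng = np.random.default_rng(7)
img = rng.standard_normal((128, 128))  # white-noise stand-in image
spec = radial_power_spectrum(img)
freqs = np.arange(1, len(spec) + 1)
slope = np.polyfit(np.log(freqs), np.log(spec), 1)[0]  # fitted power-law exponent
```

Running the same routine on real luminance or depth images and finding a stable straight-line fit in log-log space is what the abstract means by fractal scale invariance in the power spectrum.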

  1. Frontal eye fields involved in shifting frame of reference within working memory for scenes

    DEFF Research Database (Denmark)

    Wallentin, Mikkel; Roepstorff, Andreas; Burgess, Neil

    2008-01-01

    Working memory (WM) evoked by linguistic cues for allocentric spatial and egocentric spatial aspects of a visual scene was investigated by correlating fMRI BOLD signal (or "activation") with performance on a spatial-relations task. Subjects indicated the relative positions of a person or object...... during shifting reference frames in representational space. Analysis of actual eye movements in 3 subjects revealed no difference between egocentric and allocentric recall tasks where visual stimuli were also absent. Thus, the FEF machinery for directing eye movements may also be involved in changing...

  2. Applying artificial vision models to human scene understanding.

    Science.gov (United States)

    Aminoff, Elissa M; Toneva, Mariya; Shrivastava, Abhinav; Chen, Xinlei; Misra, Ishan; Gupta, Abhinav; Tarr, Michael J

    2015-01-01

How do we understand the complex patterns of neural responses that underlie scene understanding? Studies of the network of brain regions held to be scene-selective (the parahippocampal/lingual region (PPA), the retrosplenial complex (RSC), and the occipital place area (TOS)) have typically focused on single visual dimensions (e.g., size), rather than the high-dimensional feature space in which scenes are likely to be neurally represented. Here we leverage well-specified artificial vision systems to explicate a more complex understanding of how scenes are encoded in this functional network. We correlated similarity matrices within three different scene-spaces arising from: (1) BOLD activity in scene-selective brain regions; (2) behaviorally measured judgments of visually perceived scene similarity; and (3) several different computer vision models. These correlations revealed: (1) models that relied on mid- and high-level scene attributes showed the highest correlations with the patterns of neural activity within the scene-selective network; (2) NEIL and SUN, the models that best accounted for the patterns obtained from PPA and TOS, were different from the GIST model that best accounted for the pattern obtained from RSC; (3) the best performing models outperformed behaviorally measured judgments of scene similarity in accounting for neural data. One computer vision method, NEIL ("Never-Ending-Image-Learner"), which incorporates visual features learned as statistical regularities across web-scale numbers of scenes, showed significant correlations with neural activity in all three scene-selective regions and was one of the two models best able to account for variance in the PPA and TOS. We suggest that these results are a promising first step in explicating more fine-grained models of neural scene understanding, including developing a clearer picture of the division of labor among the components of the functional scene-selective brain network.

  3. Successful scene encoding in presymptomatic early-onset Alzheimer’s disease

    Science.gov (United States)

    Quiroz, Yakeel T.; Willment, Kim Celone; Castrillon, Gabriel; Muniz, Martha; Lopera, Francisco; Budson, Andrew; Stern, Chantal E.

    2016-01-01

    Background Brain regions critical to episodic memory are altered during the preclinical stages of Alzheimer’s disease (AD). However, reliable means of identifying cognitively-normal individuals at higher risk to develop AD have not been established. Objective To examine whether fMRI can detect early functional changes associated with scene encoding in a group of presymptomatic Presenilin-1 (PSEN1) E280A mutation carriers. Methods Participants were 39 young, cognitively-normal individuals from an autosomal dominant early-onset AD kindred, located in Antioquia, Colombia. Participants performed an fMRI scene encoding task and a post-scan subsequent memory test. Results PSEN1 mutation carriers exhibited hyperactivation within medial temporal lobe regions during successful scene encoding (hippocampal formation, parahippocampal gyrus) compared to age-matched non-carriers. Conclusion Hyperactivation in medial temporal lobe regions during scene encoding is seen in individuals genetically-determined to develop AD years before their clinical onset. Our findings will guide future research with the ultimate goal of using functional neuroimaging in the early detection of preclinical AD. PMID:26401774

  4. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows virtual objects to be rendered in a meaningful way on the one hand, and on the other improves the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.

  5. Performativity and Genesis of the Scene

    Directory of Open Access Journals (Sweden)

    Sílvia Fernandes

    2013-07-01

Full Text Available From the recognition of the tension between reality and fiction in contemporary theatre, generally defined as theatre of the real, we intend to intersect this phenomenon with the theoretical field of performativity, which focuses on work in process, dynamic transformation and experience. The intention is to associate the theory of performativity with observations about the latest work of Theatre Vertigo, directed by Antonio Araujo, Bom Retiro 958 metros. The use of genetic approaches to theatre serves as a starting point for interpreting some aspects of the creative process and the scene.

  6. Image policy, subjectivation and argument scenes

    Directory of Open Access Journals (Sweden)

    Ângela Cristina Salgueiro Marques

    2014-12-01

Full Text Available This paper discusses, with a focus on Jacques Rancière, how an image policy can be noticed in the creative production of scenes of dissent from which the political agent emerges, appears and constitutes himself in a process of subjectivation. The political and critical power of the image is linked to survival acts: operations and attempts that make it possible to resist the captures, silences and excesses committed by media discourses, by social institutions and by the State.

  7. Lateralized discrimination of emotional scenes in peripheral vision.

    Science.gov (United States)

    Calvo, Manuel G; Rodríguez-Chinea, Sandra; Fernández-Martín, Andrés

    2015-03-01

    This study investigates whether there is lateralized processing of emotional scenes in the visual periphery, in the absence of eye fixations, and whether this varies with emotional valence (pleasant vs. unpleasant), specific emotional scene content (babies, erotica, human attack, mutilation, etc.), and sex of the viewer. Pairs of emotional (positive or negative) and neutral photographs were presented for 150 ms peripherally (≥6.5° away from fixation). Observers judged on which side the emotional picture was located. Low-level image properties, scene visual saliency, and eye movements were controlled. Results showed that (a) correct identification of the emotional scene exceeded the chance level; (b) performance was more accurate and faster when the emotional scene appeared in the left than in the right visual field; (c) lateralization was equivalent for females and males for pleasant scenes, but was greater for females for unpleasant scenes; and (d) lateralization occurred similarly for different emotional scene categories. These findings reveal discrimination between emotional and neutral scenes, and right-hemisphere dominance for emotional processing, which is modulated by sex of the viewer and scene valence, and suggest that coarse affective significance can be extracted in peripheral vision.

  8. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing suggesting that the auditory scene is rapidly organized into distinct streams, and that the integration of sequential elements into perceptual units takes place on the already-formed streams. This allows for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  9. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

    This book proposes a complete pipeline for monocular (single-camera) 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented that allows detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms.   Also, towards dev...
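Crossover detection with a visual bag of words reduces, at its core, to comparing visual-word histograms between the current view and past views. The sketch below is a simplified illustration assuming a fixed vocabulary; the histograms are made up, and the book's method additionally eliminates the offline training stage:

```python
import numpy as np

def bow_similarity(h1, h2):
    """Cosine similarity between two visual-word histograms."""
    h1 = np.asarray(h1, dtype=float)
    h2 = np.asarray(h2, dtype=float)
    denom = np.linalg.norm(h1) * np.linalg.norm(h2)
    return float(h1 @ h2 / denom) if denom else 0.0

# Hypothetical histograms over a 5-word vocabulary
current = [4, 0, 2, 1, 0]
past = [[3, 0, 2, 1, 0],   # likely the same scene region (crossover)
        [0, 5, 0, 0, 3]]   # unrelated region
scores = [bow_similarity(current, h) for h in past]
crossover = int(np.argmax(scores))
```

A high score flags a crossover candidate, which global alignment can then exploit to correct accumulated drift.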

  10. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection

  11. Negotiating place and gendered violence in Canada's largest open drug scene.

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug

  12. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence in forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially on dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm against various crime scene backgrounds: pure samples contained in petri dishes with various thicknesses, samples mixed with fabrics of different colors and materials, and samples mixed with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood among non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined, and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of "depth" minus "peak" over "depth" plus "peak", within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but fails on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all tested background material types and colors.
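The normalized-difference index described for the first method can be sketched as follows; the feature wavelengths, the synthetic spectrum, and the function name are illustrative assumptions, not values from the paper:

```python
import numpy as np

def spectral_index(wavelengths, reflectance, peak_wl, depth_wl):
    """Normalized index (depth - peak) / (depth + peak).

    `peak_wl` and `depth_wl` are hypothetical feature wavelengths;
    the paper selects them from measured reflectance features of blood.
    """
    peak, depth = np.interp([peak_wl, depth_wl], wavelengths, reflectance)
    return float((depth - peak) / (depth + peak))

# Synthetic reflectance spectrum (not measured data)
wavelengths = np.array([500.0, 550.0, 600.0, 650.0])
reflectance = np.array([0.10, 0.05, 0.20, 0.40])
index = spectral_index(wavelengths, reflectance,
                       peak_wl=650.0, depth_wl=550.0)
```

A threshold on such an index, tuned per background class, would then separate blood-like from non-blood-like spectra.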

  13. Influence of 3D effects on 1D aerosol retrievals in synthetic, partially clouded scenes

    International Nuclear Information System (INIS)

    Stap, F.A.; Hasekamp, O.P.; Emde, C.; Röckmann, T.

    2016-01-01

    An important challenge in aerosol remote sensing is to retrieve aerosol properties in the vicinity of clouds and in cloud-contaminated scenes. Satellite-based multi-wavelength, multi-angular, photo-polarimetric instruments are particularly suited for this task as they have the ability to separate scattering by aerosol and cloud particles. Simultaneous aerosol/cloud retrievals using 1D radiative transfer codes cannot account for 3D effects such as shadows, cloud-induced enhancements and darkening of cloud edges. In this study we investigate what errors are introduced in the retrieved optical and micro-physical aerosol properties when these 3D effects are neglected in retrievals where the partial cloud cover is modeled using the Independent Pixel Approximation. To this end, a generic, synthetic data set of PARASOL-like observations for 3D scenes with partial liquid water cloud cover is created. It is found that in scenes with random cloud distributions (i.e. broken cloud fields) and either low cloud optical thickness or low cloud fraction, the inversion algorithm can fit the observations and retrieve optical and micro-physical aerosol properties with sufficient accuracy. In scenes with non-random cloud distributions (e.g. at the edge of a cloud field) the inversion algorithm can fit the observations; however, the retrieved real part of the refractive indices of both modes is biased. - Highlights: • An algorithm for retrieval of both aerosol and cloud properties is presented. • Radiative transfer models of 3D, partially clouded scenes are simulated. • Errors introduced in the retrieved aerosol properties are discussed.
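The Independent Pixel Approximation mentioned above models a partially clouded pixel as a cloud-fraction-weighted mix of independent 1-D clear and cloudy radiances. A minimal sketch of that mixing rule follows; all reflectance values are hypothetical:

```python
def ipa_reflectance(r_clear, r_cloudy, cloud_fraction):
    """Independent Pixel Approximation: linear mix of 1-D reflectances.

    This neglects 3-D effects (shadows, cloud-induced enhancements,
    cloud-edge darkening), which the study shows can bias retrieved
    aerosol properties such as the real refractive index.
    """
    if not 0.0 <= cloud_fraction <= 1.0:
        raise ValueError("cloud fraction must be in [0, 1]")
    return (1.0 - cloud_fraction) * r_clear + cloud_fraction * r_cloudy

# Hypothetical top-of-atmosphere reflectances for one pixel
mixed = ipa_reflectance(r_clear=0.08, r_cloudy=0.60, cloud_fraction=0.25)
```

In a retrieval, the inversion fits this mixed quantity to the observation, which is exactly where unmodeled 3-D effects leak into the aerosol parameters.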

  14. Detection of Street Light Poles in Road Scenes from Mobile LIDAR Mapping Data for its Applications

    Science.gov (United States)

    Talebi Nahr, S.; Saadatseresht, M.; Talebi, J.

    2017-09-01

    Identification of street light poles is crucial for intelligent transportation systems. Automatic detection and extraction of street light poles is a challenging task, mainly because of the complexity of road scenes. Nowadays, mobile laser scanners can acquire three-dimensional geospatial data of roadways over large areas at normal driving speeds. Given the high density of such data, new and effective algorithms are needed to extract objects from it. In this article, our proposed algorithm for extraction of street light poles consists of five main steps: 1. preprocessing; 2. ground removal; 3. 3D connected-components analysis; 4. local geometric feature generation; 5. extraction of street light poles using the Bhattacharyya distance metric. The proposed algorithm is tested on two rural roadways, called Area1 and Area2. Evaluation results for Area1 report 0.80, 0.72 and 0.62 for completeness, correctness and quality, respectively.
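The final step compares local geometric feature distributions with the Bhattacharyya distance. Below is a minimal sketch with made-up histograms; the paper's actual features are generated from the point cloud, so the values and names here are purely illustrative:

```python
import numpy as np

def bhattacharyya_distance(p, q, eps=1e-12):
    """Bhattacharyya distance between two discrete feature histograms."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / (p.sum() + eps)          # normalize to probability distributions
    q = q / (q.sum() + eps)
    bc = np.sum(np.sqrt(p * q))      # Bhattacharyya coefficient, in (0, 1]
    return float(-np.log(bc + eps))

# Hypothetical geometric-feature histograms: candidates vs. a pole template
pole_template = [0.1, 0.7, 0.2]
candidate_a = [0.12, 0.68, 0.20]   # pole-like distribution
candidate_b = [0.60, 0.10, 0.30]   # not pole-like
d_a = bhattacharyya_distance(pole_template, candidate_a)
d_b = bhattacharyya_distance(pole_template, candidate_b)
```

A candidate whose distance to the template falls below a tuned threshold would be accepted as a street light pole.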

  15. Explaining scene composition using kinematic chains of humans: application to Portuguese tiles history

    Science.gov (United States)

    da Silva, Nuno Pinho; Marques, Manuel; Carneiro, Gustavo; Costeira, João P.

    2011-03-01

    Painted tile panels (Azulejos) are one of the most representative Portuguese forms of art. Most of these panels are inspired by, and sometimes are literal copies of, famous paintings or prints of those paintings. In order to study the Azulejos, art historians need to trace these roots. To do so they manually search art image databases, looking for images similar to the representation on the tile panel. This is an overwhelming task that should be automated as much as possible. Among several cues, the pose of humans and the general composition of people in a scene are quite discriminative. We build an image descriptor combining the kinematic chain of each character and contextual information about their composition in the scene. Given a query image, our system computes its similarity profile over the database. Using nearest neighbors in the space of the descriptors, the proposed system retrieves the prints that most likely inspired the tiles' work.
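The retrieval step ranks database images by nearest neighbors in descriptor space. The toy sketch below uses 2-D stand-in vectors; the paper's real descriptors would encode kinematic-chain poses and scene composition, so everything here is a hypothetical simplification:

```python
import numpy as np

def retrieve(query, database, k=2):
    """Return indices of the k nearest database descriptors (Euclidean).

    A stand-in for the paper's retrieval step over pose/composition
    descriptors; the vectors below are illustrative only.
    """
    db = np.asarray(database, dtype=float)
    dists = np.linalg.norm(db - np.asarray(query, dtype=float), axis=1)
    return [int(i) for i in np.argsort(dists)[:k]]

# Hypothetical 2-D descriptors for three database prints
database = [[0.0, 1.0], [0.9, 0.1], [0.5, 0.5]]
hits = retrieve([1.0, 0.0], database, k=2)
```

The top-ranked hits are the prints most likely to have inspired the queried tile panel.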

  16. The functional consequences of social distraction: Attention and memory for complex scenes.

    Science.gov (United States)

    Doherty, Brianna Ruth; Patai, Eva Zita; Duta, Mihaela; Nobre, Anna Christina; Scerif, Gaia

    2017-01-01

    Cognitive scientists have long proposed that social stimuli attract visual attention even when task irrelevant, but the consequences of this privileged status for memory are unknown. To address this, we combined computational approaches, eye-tracking methodology, and individual-differences measures. Participants searched for targets in scenes containing social or non-social distractors equated for low-level visual salience. Subsequent memory precision for target locations was tested. Individual differences in autistic traits and social anxiety were also measured. Eye-tracking revealed significantly more attentional capture to social compared to non-social distractors. Critically, memory precision for target locations was poorer for social scenes. This effect was moderated by social anxiety, with anxious individuals remembering target locations better under conditions of social distraction. These findings shed further light onto the privileged attentional status of social stimuli and its functional consequences on memory across individuals. Copyright © 2016. Published by Elsevier B.V.

  17. Global scene layout modulates contextual learning in change detection.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  18. Research on hyperspectral dynamic infrared scene simulation technology

    Science.gov (United States)

    Wang, Jun; Hu, Yu; Ding, Na; Sun, Kefeng; Sun, Dandan; Xie, Junhu; Wu, Wenli; Gao, Jiaobo

    2015-02-01

    The paper presents a hardware-in-the-loop dynamic IR scene simulation technology for IR hyperspectral imaging systems. With the rapid development of new electro-optical detection, remote sensing and hyperspectral imaging techniques, not only the calibration of static parameters of hyperspectral IR imaging systems but also the testing and evaluation of their dynamic parameters are required; thus hyperspectral dynamic IR simulation and evaluation become more and more important. A hyperspectral dynamic IR scene projector exploits hyperspectral spatial and temporal features, controlling spectrum and time synchronously, to realize hardware-in-the-loop simulation. Hyperspectral IR target and background images are obtained from 3D modeling and IR characteristic rendering, and the hyperspectral dynamic IR scene is produced by an image-converting device. The main parameters of a developed hyperspectral dynamic IR scene projector are: waveband ranges of 3~5 μm and 8~12 μm; field of view (FOV) of 8°; spatial resolution of 1024×768; spectral resolution of 1%~2%. The IR source and simulated scene features should be consistent with the spectral characteristics of the target, and the images of different spectral channels can be obtained from calibration. A hyperspectral imaging system splits light with a dispersive grating, pushbrooms, and collects the output signal of the dynamic IR scene projector. With hyperspectral scene spectrum modeling, IR feature rendering, atmospheric transmission modeling and IR scene projection, outfield targets and scenes can be simulated well in the laboratory, accomplishing simulation and evaluation of the dynamic features of IR hyperspectral imaging systems.

  19. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

    Full Text Available Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs) perform a highly accurate but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information in images with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain full scene coverage, and therefore multiple observations are typically collected from different locations. This can be achieved either by placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated from information extracted from the captured data, and a limited field of view may then lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple coupled sensor devices offers the advantages of an extended field of view, which results in stabilized pose estimation, improved registration of the recorded point clouds and improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potential of such multi-view range imaging systems is presented, consisting of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring at low altitudes and is suitable for capturing dynamic observations arising, for example, from moving cars or pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of the captured point cloud data is presented, which is essential for the high quality of all subsequent tasks. 
The approach involves using

  20. The Sport Expert's Attention Superiority on Skill-related Scene Dynamic by The Activation of Left Medial Frontal Gyrus: An ERP and LORETA Study.

    Science.gov (United States)

    He, Mengyang; Qi, Changzhu; Lu, Yang; Song, Amanda; Hayat, Saba Z; Xu, Xia

    2018-03-07

    Extensive studies have shown that sports experts are superior to sports novices in the visual perceptual-cognitive processing of sports scene information; however, the attentional and neural basis of this superiority has not been thoroughly explored. The present study examined whether a sport expert has attentional superiority for scene information relevant to his/her sport skill, and explored what factor drives this superiority. To address this problem, EEGs were recorded as participants passively viewed sport scenes (tennis vs. non-tennis) and negative emotional faces in the context of a visual attention task, where pictures of sport scenes or of negative emotional faces randomly followed pictures with overlapping sport scenes and negative emotional faces. ERP results showed that for experts, the evoked potential of attentional competition elicited by the overlap containing a tennis scene was significantly larger than that evoked by the overlap containing a non-tennis scene, while this effect was absent for novices. The LORETA showed that the experts' left medial frontal gyrus (MFG) cortex was significantly more active than the right MFG when processing the overlap containing a tennis scene, but this lateralization effect was not significant in novices. These results indicate that experts have attentional superiority for skill-related scene information, even when the scene is paired with negative emotional faces, which are strong distractors prone to causing a negativity bias in the visual field. This superiority is driven by the activation of the left MFG cortex, probably due to self-reference. Copyright © 2018. Published by Elsevier Ltd.

  1. Algorithms for Graph Rigidity and Scene Analysis

    DEFF Research Database (Denmark)

    Berg, Alex Rune; Jordán, Tibor

    2003-01-01

    We investigate algorithmic questions and structural problems concerning graph families defined by `edge-counts'. Motivated by recent developments in the unique realization problem of graphs, we give an efficient algorithm to compute the rigid, redundantly rigid, M-connected, and globally rigid components of a graph. Our algorithm is based on (and also extends and simplifies) the idea of Hendrickson and Jacobs, as it uses orientations as the main algorithmic tool. We also consider families of bipartite graphs which occur in parallel drawings and scene analysis. We verify a conjecture of Whiteley by showing that 2d-connected bipartite graphs are d-tight. We give a new algorithm for finding a maximal d-sharp subgraph. We also answer a question of Imai and show that finding a maximum size d-sharp subgraph is NP-hard.
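The `edge-count' conditions underlying such rigidity computations can be illustrated with Laman's classical characterization of minimally rigid graphs in the plane. The brute-force check below is exponential and intended for intuition on tiny graphs only; it is not the paper's efficient orientation-based algorithm:

```python
from itertools import combinations

def is_laman(vertices, edges):
    """Brute-force Laman check for generic 2-D minimal rigidity.

    A graph is minimally rigid in the plane iff |E| = 2|V| - 3 and
    every vertex subset V' with |V'| >= 2 spans at most 2|V'| - 3 edges.
    Exponential in |V|; suitable for small examples only.
    """
    if len(edges) != 2 * len(vertices) - 3:
        return False
    for k in range(2, len(vertices) + 1):
        for subset in combinations(vertices, k):
            s = set(subset)
            spanned = sum(1 for u, v in edges if u in s and v in s)
            if spanned > 2 * k - 3:
                return False
    return True

triangle = ([0, 1, 2], [(0, 1), (1, 2), (0, 2)])          # rigid
square = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])  # flexible
```

The triangle satisfies both counting conditions, while the four-cycle has too few edges to be rigid (it deforms into a rhombus).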

  2. Wall grid structure for interior scene synthesis

    KAUST Repository

    Xu, Wenzhuo

    2015-02-01

    We present a system for automatically synthesizing a diverse set of semantically valid and well-arranged 3D interior scenes for a given empty room shape. Unlike existing work on layout synthesis, which typically knows the potentially needed 3D models and optimizes their locations through cost functions, our technique performs the retrieval and placement of 3D models by discovering the relationships between the room space and the models' categories. This is enabled by a new analytical structure, called the Wall Grid Structure, which jointly considers the categories and locations of 3D models. Our technique greatly reduces the amount of user intervention and provides users with suggestions and inspirations. We demonstrate the applicability of our approach on three types of scenarios: conference rooms, living rooms and bedrooms.

  3. The scene is set for ALICE

    CERN Multimedia

    2008-01-01

    Now that the electromagnetic calorimeter support and the mini space frame have been installed, practically all ALICE’s infrastructure is in place. The calorimeter support, an austenitic stainless steel shell weighing 30 tonnes, was slid gently inside the detector, in between the face of the magnet and the space frame. With the completion of two major installation projects, the scene is finally set for the ALICE experiment…or at least it nearly is, as a few design studies, minor installation jobs and measurements still need to be carried out before the curtain can finally be raised. The experiment’s chief engineer Diego Perini confirms: "All the heavy infrastructure for ALICE has been in place and ready for the grand opening since December 2007." The next step will be the installation of additional modules on the TOF and TRD detectors between January and March 2008, and physicists have already started testing the equipment with co...

  4. Repfinder: Finding approximately repeated scene elements for image editing

    KAUST Repository

    Cheng, Ming-Ming

    2010-07-26

    Repeated elements are ubiquitous and abundant in both manmade and natural scenes. Editing such images while preserving the repetitions and their relations is nontrivial due to overlap, missing parts, deformation across instances, illumination variation, etc. Manually enforcing such relations is laborious and error-prone. We propose a novel framework where user scribbles are used to guide detection and extraction of such repeated elements. Our detection process, which is based on a novel boundary band method, robustly extracts the repetitions along with their deformations. The algorithm only considers the shape of the elements, and ignores similarity based on color, texture, etc. We then use topological sorting to establish a partial depth ordering of overlapping repeated instances. Missing parts on occluded instances are completed using information from other instances. The extracted repeated instances can then be seamlessly edited and manipulated for a variety of high level tasks that are otherwise difficult to perform. We demonstrate the versatility of our framework on a large set of inputs of varying complexity, showing applications to image rearrangement, edit transfer, deformation propagation, and instance replacement. © 2010 ACM.
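The partial depth ordering of overlapping instances mentioned above can be obtained with a topological sort. Below is a generic sketch using Kahn's algorithm, with hypothetical occlusion pairs standing in for RepFinder's pairwise overlap tests:

```python
from collections import defaultdict, deque

def depth_order(instances, occludes):
    """Partial depth ordering via topological sort (Kahn's algorithm).

    `occludes` holds pairs (a, b) meaning instance `a` overlaps and
    occludes instance `b`; these pairs are a hypothetical stand-in
    for detected pairwise overlaps between repeated elements.
    """
    indegree = {i: 0 for i in instances}
    adjacency = defaultdict(list)
    for a, b in occludes:
        adjacency[a].append(b)
        indegree[b] += 1
    queue = deque(i for i in instances if indegree[i] == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for nxt in adjacency[node]:
            indegree[nxt] -= 1
            if indegree[nxt] == 0:
                queue.append(nxt)
    return order  # front-most (unoccluded) instances first

order = depth_order(["A", "B", "C"], [("A", "B"), ("B", "C")])
```

With the ordering in hand, parts hidden on back instances can be completed from matching regions of front instances.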

  5. Visuomotor crowding: the resolution of grasping in cluttered scenes

    Directory of Open Access Journals (Sweden)

    Paul F Bulakowski

    2009-11-01

    Full Text Available Reaching toward a cup of coffee while reading the newspaper becomes exceedingly difficult when other objects are nearby. Although much is known about the precision of visual perception in cluttered scenes, relatively little is understood about acting within these environments—the spatial resolution of visuomotor behavior. When the number and density of objects overwhelm visual processing, crowding results, which serves as a bottleneck for object recognition. Despite crowding, featural information of the ensemble persists, thereby supporting texture perception. While texture is beneficial for visual perception, it is relatively uninformative for guiding the metrics of grasping. Therefore, it would be adaptive if the visual and visuomotor systems utilized the clutter differently. Using an orientation task, we measured the effect of crowding on vision and visually guided grasping and found that the density of clutter similarly limited discrimination performance. However, while vision integrates the surround to compute a texture, action discounts this global information. We propose that this dissociation reflects an optimal use of information by each system.

  6. POTENTIALS OF IMAGE BASED ACTIVE RANGING TO CAPTURE DYNAMIC SCENES

    Directory of Open Access Journals (Sweden)

    B. Jutzi

    2012-09-01

    Full Text Available Obtaining a 3D description of man-made and natural environments is a basic task in Computer Vision and Remote Sensing. To this end, laser scanning is currently one of the dominant techniques for gathering reliable 3D information. The scanning principle inherently needs a certain time interval to acquire the 3D point cloud. On the other hand, new active sensors provide the possibility of capturing range information in images with a single measurement. This new technique of image-based active ranging allows capturing dynamic scenes, e.g. walking pedestrians in a yard or moving vehicles. Unfortunately, most of these range imaging sensors have strong technical limitations and are not yet sufficient for airborne data acquisition. It can be seen from the recent development of highly specialized (far-range) imaging sensors – so-called flash-light lasers – that most of the limitations could be alleviated soon, so that future systems will be equipped with improved image size and potentially expanded operating range. The presented work is a first step towards the development of methods capable of applying range images in outdoor environments. To this end, an experimental setup was assembled to investigate these possibilities. With this setup a measurement campaign was carried out, and first results are presented within this paper.

  7. Structural content in paintings: artists overregularize oriented content of paintings relative to the typical natural scene bias.

    Science.gov (United States)

    Schweinhart, April M; Essock, Edward A

    2013-01-01

    Natural scenes tend to be biased in both scale (1/f) and orientation (H > V > O; horizontal > vertical > oblique), and the human visual system has similar biases that serve to partially 'undo' (ie whiten) the resultant representation. The present approach to investigating this relationship considers content in works of art: scenes produced for processing by the human visual system. We analyzed the content of images by a method that minimizes errors inherent in some prior analysis methods. In the first experiment, museum paintings were considered by comparing the amplitude spectra of landscape paintings, natural scene photos, portrait paintings, and photos of faces. In the second experiment, we obtained photos of paintings at the time they were produced by local artists and compared structural content in matched photos containing the same scenes that the artists had painted. Results show that artists produce paintings with both the 1/f bias of scale and the horizontal-effect bias of orientation (H > V > O). More importantly, results from both experiments show that artists overregularize the structure in their works: they impose the natural-scene horizontal effect at all structural scales and in all types of subject matter even though, in the real world, the pattern of anisotropy differs considerably across spatial scale and between faces and natural scenes. It appears that artists unconsciously overregularize the oriented structure in their works to make it conform more uniformly to the 'expected' canonical ideal.

  8. A cardinal orientation bias in scene-selective visual cortex.

    Science.gov (United States)

    Nasr, Shahin; Tootell, Roger B H

    2012-10-24

    It has long been known that human vision is more sensitive to contours at cardinal (horizontal and vertical) orientations, compared with oblique orientations; this is the "oblique effect." However, the real-world relevance of the oblique effect is not well understood. Experiments here suggest that this effect is linked to scene perception, via a common bias in the image statistics of scenes. This statistical bias for cardinal orientations is found in many "carpentered environments" such as buildings and indoor scenes, and some natural scenes. In Experiment 1, we confirmed the presence of a perceptual oblique effect in a specific set of scene stimuli. Using those scenes, we found that a well known "scene-selective" visual cortical area (the parahippocampal place area; PPA) showed distinctively higher functional magnetic resonance imaging (fMRI) activity to cardinal versus oblique orientations. This fMRI-based oblique effect was not observed in other cortical areas (including scene-selective areas transverse occipital sulcus and retrosplenial cortex), although all three scene-selective areas showed the expected inversion effect to scenes. Experiments 2 and 3 tested for an analogous selectivity for cardinal orientations using computer-generated arrays of simple squares and line segments, respectively. The results confirmed the preference for cardinal orientations in PPA, thus demonstrating that the oblique effect can also be produced in PPA by simple geometrical images, with statistics similar to those in scenes. Thus, PPA shows distinctive fMRI selectivity for cardinal orientations across a broad range of stimuli, which may reflect a perceptual oblique effect.

  9. Decoding individual natural scene representations during perception and imagery

    Directory of Open Access Journals (Sweden)

    Matthew Robert Johnson

    2014-02-01

    Full Text Available We used a multi-voxel classification analysis of functional magnetic resonance imaging (fMRI) data to determine to what extent item-specific information about complex natural scenes is represented in several category-selective areas of human extrastriate visual cortex during visual perception and visual mental imagery. Participants in the scanner either viewed or were instructed to visualize previously memorized natural scene exemplars, and the neuroimaging data were subsequently subjected to a multi-voxel pattern analysis (MVPA) using a support vector machine (SVM) classifier. We found that item-specific information was represented in multiple scene-selective areas: the occipital place area (OPA), parahippocampal place area (PPA), retrosplenial cortex (RSC), and a scene-selective portion of the precuneus/intraparietal sulcus region (PCu/IPS). Furthermore, item-specific information from perceived scenes was re-instantiated during mental imagery of the same scenes. These results support findings from previous decoding analyses for other types of visual information and/or brain areas during imagery or working memory, and extend them to the case of visual scenes (and scene-selective cortex). Taken together, such findings support models suggesting that reflective mental processes are subserved by the re-instantiation of perceptual information in high-level visual cortex. We also examined activity in the fusiform face area (FFA) and found that it, too, contained significant item-specific scene information during perception, but not during mental imagery. This suggests that although decodable scene-relevant activity occurs in FFA during perception, FFA activity may not be a necessary (or even relevant) component of one's mental representation of visual scenes.
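As a toy illustration of decoding item identity from multi-voxel patterns, the sketch below uses a nearest-centroid correlation classifier on synthetic "voxel" vectors. The study itself used an SVM, so this is only a simplified stand-in for the classification step, not the authors' pipeline:

```python
import numpy as np

def correlation_classifier(train_X, train_y, test_X):
    """Classify each test pattern by Pearson correlation with the per-class
    mean training pattern (a simplified stand-in for an SVM in MVPA)."""
    classes = sorted(set(train_y))
    labels = np.array(train_y)
    centroids = {c: train_X[labels == c].mean(axis=0) for c in classes}
    preds = []
    for x in test_X:
        scores = {c: np.corrcoef(x, m)[0, 1] for c, m in centroids.items()}
        preds.append(max(scores, key=scores.get))
    return preds
```

With two well-separated synthetic patterns plus noise, held-out exemplars are assigned to the correct "scene" class.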

  10. Functional Organization of the Parahippocampal Cortex: Dissociable Roles for Context Representations and the Perception of Visual Scenes.

    Science.gov (United States)

    Baumann, Oliver; Mattingley, Jason B

    2016-02-24

    The human parahippocampal cortex has been ascribed central roles in both visuospatial and mnemonic processes. More specifically, evidence suggests that the parahippocampal cortex subserves both the perceptual analysis of scene layouts as well as the retrieval of associative contextual memories. It remains unclear, however, whether these two functional roles can be dissociated within the parahippocampal cortex anatomically. Here, we provide evidence for a dissociation between neural activation patterns associated with visuospatial analysis of scenes and contextual mnemonic processing along the parahippocampal longitudinal axis. We used fMRI to measure parahippocampal responses while participants engaged in a task that required them to judge the contextual relatedness of scene and object pairs, which were presented either as words or pictures. Results from combined factorial and conjunction analyses indicated that the posterior section of parahippocampal cortex is driven predominantly by judgments associated with pictorial scene analysis, whereas its anterior section is more active during contextual judgments regardless of stimulus category (scenes vs objects) or modality (word vs picture). Activation maxima associated with visuospatial and mnemonic processes were spatially segregated, providing support for the existence of functionally distinct subregions along the parahippocampal longitudinal axis and suggesting that, in humans, the parahippocampal cortex serves as a functional interface between perception and memory systems. Copyright © 2016 the authors 0270-6474/16/362536-07$15.00/0.

  11. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes.

    Science.gov (United States)

    Pasqualotto, Achille; Finucane, Ciara M; Newell, Fiona N

    2013-09-01

    We investigated the effects of indirect, ambient visual information on haptic spatial memory. Using touch only, participants first learned an array of objects arranged in a scene and were subsequently tested on their recognition of that scene which was always hidden from view. During haptic scene exploration, participants could either see the surrounding room or were blindfolded. We found a benefit in haptic memory performance only when ambient visual information was available in the early stages of the task but not when participants were initially blindfolded. Specifically, when ambient visual information was available a benefit on performance was found in a subsequent block of trials during which the participant was blindfolded (Experiment 1), and persisted over a delay of one week (Experiment 2). However, we found that the benefit for ambient visual information did not transfer to a novel environment (Experiment 3). In Experiment 4 we further investigated the nature of the visual information that improved haptic memory and found that geometric information about a surrounding (virtual) room rather than isolated object landmarks, facilitated haptic scene memory. Our results suggest that vision improves haptic memory for scenes by providing an environment-centred, allocentric reference frame for representing object location through touch. Copyright © 2013 Elsevier B.V. All rights reserved.

  12. Forensic botany as a useful tool in the crime scene: Report of a case.

    Science.gov (United States)

    Margiotta, Gabriele; Bacaro, Giovanni; Carnevali, Eugenia; Severini, Simona; Bacci, Mauro; Gabbrielli, Mario

    2015-08-01

    The ubiquitous presence of plant species makes forensic botany useful in many criminal cases. In particular, bryophytes are useful for forensic investigations because many of them are clonal and widely distributed. Bryophyte shoots easily become attached to shoes and clothes and can be found on footwear, providing links between a crime scene and individuals. We report a case of suicide of a young girl that occurred in Siena, Tuscany, Italy. The traumatic injuries could be ascribed to suicide, to homicide, or to accident. In the absence of eyewitnesses who could testify to the dynamics of the event, the crime scene investigation was fundamental to clarifying what happened. During the scene analysis, some fragments of Tortula muralis Hedw. and Bryum capillare Hedw. were found. The fragments were analyzed by a bryologist in order to compare them with the moss present on the stairs that the victim used immediately before her death. The analysis of these bryophytes found at the crime scene allowed the event to be reconstructed. Even if this evidence is, of course, circumstantial, it can be useful in forensic cases, together with other evidence, to reconstruct the dynamics of events. Copyright © 2015 Elsevier Ltd and Faculty of Forensic and Legal Medicine. All rights reserved.

  13. Being There: (Re)Making the Assessment Scene

    Science.gov (United States)

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…

  14. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    Science.gov (United States)

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  15. 47 CFR 80.1127 - On-scene communications.

    Science.gov (United States)

    2010-10-01

    ....1127 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) SAFETY AND SPECIAL RADIO SERVICES STATIONS IN THE MARITIME SERVICES Global Maritime Distress and Safety System (GMDSS) Operating Procedures for Distress and Safety Communications § 80.1127 On-scene communications. (a) On-scene communications...

  16. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156
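The efficiency index used above, the slope of the RT × Set Size function in ms per item, is an ordinary least-squares slope. A minimal sketch with made-up RT values chosen to be consistent with the ~5 ms/item figure:

```python
def search_slope(set_sizes, rts):
    """Least-squares slope of the RT x set-size function (ms per item)."""
    n = len(set_sizes)
    mx = sum(set_sizes) / n
    my = sum(rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(set_sizes, rts))
    den = sum((x - mx) ** 2 for x in set_sizes)
    return num / den

# Hypothetical mean RTs (ms) at set sizes 4, 8 and 16: slope = 5 ms/item
slope = search_slope([4, 8, 16], [620, 640, 680])
```

Shallower slopes indicate more efficient search; the scene-search slopes reported above (~5 ms/item) are much shallower than the ~40 ms/item observed outside a scene context.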

  17. Children's Development of Analogical Reasoning: Insights from Scene Analogy Problems

    Science.gov (United States)

    Richland, Lindsey E.; Morrison, Robert G.; Holyoak, Keith J.

    2006-01-01

    We explored how relational complexity and featural distraction, as varied in scene analogy problems, affect children's analogical reasoning performance. Results with 3- and 4-year-olds, 6- and 7-year-olds, 9- to 11-year-olds, and 13- and 14-year-olds indicate that when children can identify the critical structural relations in a scene analogy…

  18. Emotional Scene Content Drives the Saccade Generation System Reflexively

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  19. Dynamic Frames Based Generation of 3D Scenes and Applications

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2015-05-01

    Full Text Available Modern graphics/programming tools like Unity enable the creation of 3D scenes as well as 3D-scene-based program applications, including a full physical model, motion, sounds, lighting effects, etc. This paper deals with the use of a dynamic-frames-based generator for the automatic generation of a 3D scene and the related source code. The suggested model makes it possible to specify features of the 3D scene in the form of a textual specification, as well as to export such features from a 3D tool. This approach enables a higher level of code-generation flexibility and the reusability of the main code and scene artifacts in the form of textual templates. An example of a generated application is presented and discussed.
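A template-driven generator of this kind fills reusable textual templates from a feature specification. The sketch below is a hypothetical miniature (the `Scene` API names and spec keys are invented for illustration, not taken from the paper or from Unity):

```python
from string import Template

# Hypothetical reusable template for generated scene-setup code
SCENE_TEMPLATE = Template(
    "scene = Scene(name='$name')\n"
    "scene.add_light(intensity=$light)\n"
    "scene.add_model('$model')\n"
)

def generate_scene_code(spec):
    """Fill the textual template from a feature specification dict."""
    return SCENE_TEMPLATE.substitute(spec)

code = generate_scene_code({"name": "yard", "light": 0.8, "model": "tree.obj"})
```

Because the template is plain text, the same artifacts can be reused across applications simply by swapping the specification, which is the flexibility the paper attributes to textual templates.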

  20. Interindividual variation in fornix microstructure and macrostructure is related to visual discrimination accuracy for scenes but not faces.

    Science.gov (United States)

    Postans, Mark; Hodgetts, Carl J; Mundy, Matthew E; Jones, Derek K; Lawrence, Andrew D; Graham, Kim S

    2014-09-03

    Transection of the nonhuman primate fornix has been shown to impair learning of configurations of spatial features and object-in-scene memory. Although damage to the human fornix also results in memory impairment, it is not known whether there is a preferential involvement of this white-matter tract in spatial learning, as implied by animal studies. Diffusion-weighted MR images were obtained from healthy participants who had completed versions of a task in which they made rapid same/different discriminations to two categories of highly visually similar stimuli: (1) virtual reality scene pairs; and (2) face pairs. Diffusion-MRI measures of white-matter microstructure [fractional anisotropy (FA) and mean diffusivity (MD)] and macrostructure (tissue volume fraction, f) were then extracted from the fornix of each participant, which had been reconstructed using a deterministic tractography protocol. Fornix MD and f measures correlated with scene, but not face, discrimination accuracy in both discrimination tasks. A complementary voxelwise analysis using tract-based spatial statistics suggested the crus of the fornix as a focus for this relationship. These findings extend previous reports of spatial learning impairments after fornix transection in nonhuman primates, critically highlighting the fornix as a source of interindividual variation in scene discrimination in humans. Copyright © 2014 Postans et al.

  1. Severe scene learning impairment, but intact recognition memory, after cholinergic depletion of inferotemporal cortex followed by fornix transection.

    Science.gov (United States)

    Browning, Philip G F; Gaffan, David; Croxson, Paula L; Baxter, Mark G

    2010-02-01

    To examine the generality of cholinergic involvement in visual memory in primates, we trained macaque monkeys either on an object-in-place scene learning task or in delayed nonmatching-to-sample (DNMS). Each monkey received either selective cholinergic depletion of inferotemporal cortex (including the entorhinal cortex and perirhinal cortex) with injections of the immunotoxin ME20.4-saporin or saline injections as a control and was postoperatively retested. Cholinergic depletion of inferotemporal cortex was without effect on either task. Each monkey then received fornix transection because previous studies have shown that multiple disconnections of temporal cortex can produce synergistic impairments in memory. Fornix transection mildly impaired scene learning in monkeys that had received saline injections but severely impaired scene learning in monkeys that had received cholinergic lesions of inferotemporal cortex. This synergistic effect was not seen in monkeys performing DNMS. These findings confirm a synergistic interaction in a macaque monkey model of episodic memory between connections carried by the fornix and cholinergic input to the inferotemporal cortex. They support the notion that the mnemonic functions tapped by scene learning and DNMS have dissociable neural substrates. Finally, cholinergic depletion of inferotemporal cortex, in this study, appears insufficient to impair memory functions dependent on an intact inferotemporal cortex.

  2. Crimes Scenes as Augmented Reality, off-screen, online and offline

    OpenAIRE

    Sandvik, Kjetil; Waade, Anne Marit

    2008-01-01

    Our field of investigation is site-specific realism in crime fiction and spatial production as media-specific features. We analyze the (re)production of crime scenes in, respectively, crime series, computer games and tourist practice, and relate this to the ideas of augmented reality. Using a distinction between places as locations situated in the physical world and spaces as imagined or virtual locations as our point of departure, this paper investigates how places in various ways have become augmente...

  3. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Directory of Open Access Journals (Sweden)

    Dmitry Kit

    Full Text Available Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, by contrast, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time and use spatial memory to guide search. Incidental fixations did not provide an obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of experience in the environment, previous search items changed in color. These items were fixated with increased probability relative to control objects, suggesting that memory-guided prioritization (or surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  4. Comparison of the effects of mobile technology AAC apps on programming visual scene displays.

    Science.gov (United States)

    Caron, Jessica; Light, Janice; Davidoff, Beth E; Drager, Kathryn D R

    2017-12-01

    Parents and professionals who work with individuals who use augmentative and alternative communication (AAC) face tremendous time pressures, especially when programming vocabulary in AAC technologies. System design (from programming functions to layout options) necessitates a range of skills related to operational competence and can impose intensive training demands for communication partners. In fact, some AAC applications impose considerable learning demands, which can lead to increased time to complete the same programming tasks. A within-subject design was used to investigate the comparative effects of three visual scene display AAC apps (GoTalk Now, AutisMate, EasyVSD) on the programming times for three off-line programming activities, by adults who were novices to programming AAC apps. The results indicated all participants were able to create scenes and add hotspots during off-line programming tasks with minimal self-guided training. The AAC app that had the least number of programming steps, EasyVSD, resulted in the fastest completion times across the three programming tasks. These results suggest that by simplifying the operational requirements of AAC apps the programming time is reduced, which may allow partners to better support individuals who use AAC.

  5. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  6. Visual Object Recognition with 3D-Aware Features in KITTI Urban Scenes

    Science.gov (United States)

    Yebes, J. Javier; Bergasa, Luis M.; García-Garrido, Miguel Ángel

    2015-01-01

    Driver assistance systems and autonomous robotics rely on the deployment of several sensors for environment perception. Compared to LiDAR systems, the inexpensive vision sensors can capture the 3D scene as perceived by a driver in terms of appearance and depth cues. Indeed, providing 3D image understanding capabilities to vehicles is an essential target in order to infer scene semantics in urban environments. One of the challenges that arises from the navigation task in naturalistic urban scenarios is the detection of road participants (e.g., cyclists, pedestrians and vehicles). In this regard, this paper tackles the detection and orientation estimation of cars, pedestrians and cyclists, employing the challenging and naturalistic KITTI images. This work proposes 3D-aware features computed from stereo color images in order to capture the appearance and depth peculiarities of the objects in road scenes. The successful part-based object detector, known as DPM, is extended to learn richer models from the 2.5D data (color and disparity), while also carrying out a detailed analysis of the training pipeline. A large set of experiments evaluate the proposals, and the best performing approach is ranked on the KITTI website. Indeed, this is the first work that reports results with stereo data for the KITTI object challenge, achieving increased detection ratios for the classes car and cyclist compared to a baseline DPM. PMID:25903553

  7. Integration of virtual and real scenes within an integral 3D imaging environment

    Science.gov (United States)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television avoiding psychological effects. To create real fascinating three-dimensional television programs, a virtual studio that performs the task of generating, editing and integrating the 3D contents involving virtual and real scenes is required. The paper presents, for the first time, the procedures, factors and methods of integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, where the lens array is modelled instead of the physical camera is described. In the model each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and deep investigation are focused on depth extraction from captured integral 3D images. The depth calculation method from the disparity and the multiple baseline method that is used to improve the precision of depth estimation are also presented. The concept of colour SSD and its further improvement in the precision is proposed and verified.
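The depth-extraction step mentioned above rests on the standard relation Z = fB/d, with disparity found by minimizing an SSD match score between patches. A 1D sketch with illustrative focal-length and baseline values (not the paper's parameters, and ignoring the colour-SSD and multiple-baseline refinements):

```python
def ssd(a, b):
    """Sum of squared differences between two equal-length patches."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def best_disparity(left_row, right_row, x, win, max_disp):
    """Disparity minimizing SSD along one scanline (1D block matching)."""
    patch = left_row[x:x + win]
    scores = [(ssd(patch, right_row[x - d:x - d + win]), d)
              for d in range(min(max_disp, x) + 1)]
    return min(scores)[1]

def depth_from_disparity(disparity, focal=50.0, baseline=0.1):
    """Depth via the standard relation Z = f * B / d (illustrative values)."""
    return focal * baseline / disparity
```

Larger disparities correspond to nearer points, which is why matching precision directly limits the precision of the extracted depth.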

  10. A Low-Cost Panoramic Camera for the 3D Documentation of Contaminated Crime Scenes

    Science.gov (United States)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight the potential and limits of this emerging consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, taking as input the 3D point clouds generated before and after, e.g., the misplacement of evidence. All the algorithms adopted for panorama pre-processing, photogrammetric 3D reconstruction, and 3D geometry registration and analysis are presented, and all are currently available in open-source or low-cost software solutions.
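The cloud-to-cloud distance step described above can be sketched as a nearest-neighbour query: for each point of the "after" cloud, find its distance to the closest point of the "before" cloud, then flag points above a threshold as changes. This brute-force version is only a minimal illustration; real pipelines use spatial indexing (e.g. k-d trees):

```python
import math

def cloud_to_cloud_distances(cloud_a, cloud_b):
    """For each point in cloud_a, the Euclidean distance to its nearest
    neighbour in cloud_b (brute force, assumes registered clouds)."""
    return [min(math.dist(p, q) for q in cloud_b) for p in cloud_a]

def changed_points(cloud_a, cloud_b, threshold):
    """Points of cloud_a farther than `threshold` from cloud_b,
    i.e. candidate contamination such as misplaced evidence."""
    dists = cloud_to_cloud_distances(cloud_a, cloud_b)
    return [p for p, d in zip(cloud_a, dists) if d > threshold]
```

The registration step must run first; otherwise a rigid offset between the two scans would be misread as scene change.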

  11. Use of an Infrared Thermometer with Laser Targeting in Morphological Scene Change Detection for Fire Detection

    Science.gov (United States)

    Tickle, Andrew J.; Singh, Harjap; Grindley, Josef E.

    2013-06-01

    Morphological Scene Change Detection (MSCD) is a process typically tasked with detecting relevant changes in a guarded environment for security applications. It can be implemented on a Field Programmable Gate Array (FPGA) by a combination of binary differences based around exclusive-OR (XOR) gates, mathematical morphology and a crucial threshold setting. This is a robust technique that can be applied in many areas, from leak detection to movement tracking, and further augmented to perform additional functions such as watermarking and facial detection. Fire is a severe problem, and in areas where traditional fire alarm systems are not installed or feasible, it may not be detected until it is too late. Shown here is a way of adapting the traditional Morphological Scene Change Detector (MSCD) with a temperature sensor so that if both the temperature sensor and the scene change detector are triggered, there is a high likelihood that fire is present. Such a system would allow integration into autonomous mobile robots so that not only security patrols but also fire detection could be undertaken.
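The detector logic described above (XOR-based frame differencing, a morphological clean-up, a pixel-count threshold, and the temperature gate) can be sketched in software. The thresholds and the 3x3 erosion below are illustrative choices, not the paper's FPGA design:

```python
def erode(mask):
    """3x3 binary erosion: suppresses isolated one-pixel differences."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def fire_alarm(frame_a, frame_b, temp_c, pixel_thresh=3, temp_thresh=60.0):
    """Flag fire only when the XOR-based scene change exceeds a pixel
    threshold AND the temperature reading is high (illustrative values)."""
    diff = [[a ^ b for a, b in zip(ra, rb)] for ra, rb in zip(frame_a, frame_b)]
    changed = sum(map(sum, erode(diff)))
    return changed >= pixel_thresh and temp_c >= temp_thresh
```

Requiring both conditions is what reduces false alarms: a hot but static scene, or movement at normal temperature, does not trigger the alarm.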

  12. Subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia.

    Science.gov (United States)

    Haralanova, Evelina; Haralanov, Svetlozar; Beraldi, Anna; Möller, Hans-Jürgen; Hennig-Fast, Kristina

    2012-02-01

    From clinical practice and some experimental studies, it is apparent that paranoid schizophrenia patients tend to assign emotional salience to neutral social stimuli. This aberrant cognitive bias has been conceptualized to result from increased emotional arousal, but direct empirical data are scarce. The aim of the present study was to quantify the subjective emotional arousal (SEA) evoked by emotionally non-salient (neutral) compared to emotionally salient (negative) social stimuli in schizophrenia patients and healthy controls. Thirty male inpatients with paranoid schizophrenia psychosis and 30 demographically matched healthy controls rated their level of SEA in response to neutral and negative social scenes from the International Affective Picture System and the Munich Affective Picture System. Schizophrenia patients compared to healthy controls had an increased overall SEA level. This relatively higher SEA was evoked only by the neutral but not by the negative social scenes. To our knowledge, the present study is the first designed to directly demonstrate subjective emotional over-arousal to neutral social scenes in paranoid schizophrenia. This finding might explain previous clinical and experimental data and could be viewed as the missing link between the primary neurobiological and secondary psychological mechanisms of paranoid psychotic-symptom formation. Furthermore, despite being very short and easy to perform, the task we used appeared to be sensitive enough to reveal emotional dysregulation, in terms of emotional disinhibition/hyperactivation, in paranoid schizophrenia patients. Thus, it could have further research and clinical applications, including as a neurobehavioral probe for imaging studies.

  13. Crime scene investigation (as seen on TV).

    Science.gov (United States)

    Durnal, Evan W

    2010-06-15

    A mysterious green ooze is injected into a brightly illuminated and humming machine; 10s later, a printout containing a complete biography of the substance is at the fingertips of an attractive young investigator who exclaims "we found it!" We have all seen this event occur countless times on any and all of the three CSI dramas, Cold Cases, Crossing Jordans, and many more. With this new style of "infotainment" (Surette, 2007), comes an increasingly blurred line between the hard facts of reality and the soft, quick solutions of entertainment. With these advances in technology, how can crime rates be anything but plummeting as would-be criminals cringe at the idea of leaving the smallest speck of themselves at a crime scene? Surely there are very few serious crimes that go unpunished in today's world of high-tech, fast-paced gadgetry. Science and technology have come a great distance since Sir Arthur Conan Doyle first described the first famous forensic scientist (Sherlock Holmes), but still have light-years to go. (c) 2010. Published by Elsevier Ireland Ltd.

  14. The primal scene and Picasso's Guernica.

    Science.gov (United States)

    Hartke, R

    2000-02-01

    The author examines a group of works by Picasso dating from the late 1930s in terms of the artist's experiences as documented by his biographers and of primal-scene fantasies as described in the field of psychoanalysis by, in particular, Freud and Klein. Pointing out that the artist himself is on record as inviting such a consideration, he contends that these fantasies constitute the latent motivating force behind one of Picasso's most famous paintings, the mural Guernica, and a number of other productions from the same period. Biographical accounts are drawn upon to show how aspects of his inner world are revealed in the specific works described and reproduced in this paper. The role of women is shown to have been particularly relevant. The author demonstrates how Picasso's constant pattern of triangular relationships culminated in his personal crisis of 1935, which, together with the Spanish Civil War, reflecting as it did the conflicts of his internal and external relations, contributed to the production of the works in this group. The artist is seen as attempting to work through and make reparation for envious attacks on the parental objects, but it is pointed out that art works should not be assessed by the criterion of therapeutic change.

  15. The role of gist in scene recognition.

    Science.gov (United States)

    Sampanes, Anthony Chad; Tseng, Philip; Bridgeman, Bruce

    2008-09-01

    Studies of change blindness suggest that we bring only a few attended features of a scene, plus a gist, from one visual fixation to the next. We examine the role of gist by substituting an original image with a second image in which a substitution of one object changes the gist, compared with a third image in which a substitution of that object does not change the gist. Small perceptual changes that affect gist were more rapidly detected than perceptual changes that do not affect gist. When the images were scrambled to remove meaning, this difference disappeared for seven of the nine sets, indicating that gist and not image features dominated the result. In a final experiment a natural image was masked with an 8x8 checker pattern, and progressively substituted by squares of a new natural image of the same gist. Spatial jitter prevented fixation on the same square for the sequence of 12 changes. Observers detected a change in an average of 2.1 out of 7 sequences, indicating strong change blindness for images of the same gist but completely different local features. We conclude that gist is automatically encoded, separately from specific features.

  16. Art Toys in the contemporary art scene

    Directory of Open Access Journals (Sweden)

    Laura Sernissi

    2014-03-01

    Full Text Available The Art Toys phenomenon, better known as the Art Toy Movement, was born in China in the mid-nineties and quickly spread to the rest of the world. The toys are an artistic production of serial sculpture, made by hand or on an industrial scale. There are several types of toys, such as custom toys and canvas toys, synonyms of designer toys, although they are often defined according to the constituent material, such as vinyl toys (plastic) and plush toys (fabric). Art toys are the heirs of an already pop-surrealist and neo-pop circuit, which since the eighties of the twentieth century has pervaded the Japanese-American art scene, winking at the playful spirit of the avant-garde of the early century. Some psychoanalytic, pedagogical and anthropological studies about "play theories" may also help us to understand and identify these heterogeneous products as real works of art and not simply as collectible toys.

  17. A spectral-structural bag-of-features scene classifier for very high spatial resolution remote sensing imagery

    Science.gov (United States)

    Zhao, Bei; Zhong, Yanfei; Zhang, Liangpei

    2016-06-01

    Land-use classification of very high spatial resolution remote sensing (VHSR) imagery is one of the most challenging tasks in the field of remote sensing image processing. However, land-use classification is difficult to address with land-cover classification techniques, due to the complexity of land-use scenes. Scene classification is considered a promising way to address the land-use classification issue. The commonly used scene classification methods for VHSR imagery are all derived from the computer vision community, which mainly deals with terrestrial image recognition. Differing from terrestrial images, VHSR images are taken by looking down with airborne and spaceborne sensors, which leads to distinct light conditions and spatial configurations of land cover in VHSR imagery. Considering these distinct characteristics, two questions should be answered: (1) Which type or combination of information is suitable for VHSR imagery scene classification? (2) Which scene classification algorithm is best for VHSR imagery? In this paper, an efficient spectral-structural bag-of-features scene classifier (SSBFC) is proposed to combine the spectral and structural information of VHSR imagery. SSBFC utilizes the first- and second-order statistics (the mean and standard deviation values, MeanStd) as the statistical spectral descriptor for the spectral information of the VHSR imagery, and uses dense scale-invariant feature transform (SIFT) as the structural feature descriptor. From the experimental results, the spectral information works better than the structural information, while the combination of the spectral and structural information is better than any single type of information. Taking the characteristics of the spatial configuration into consideration, SSBFC uses the whole image scene as the scope of the pooling operator, instead of the scope generated by a spatial pyramid (SP) commonly used in terrestrial image classification.
The experimental
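The MeanStd spectral descriptor and the whole-scene pooling described above can be sketched as follows. This is a minimal illustration under stated assumptions (average pooling as the pooling operator; the dense-SIFT structural half is left to an external feature extractor), not the authors' full SSBFC pipeline.

```python
import numpy as np

def meanstd_descriptor(image):
    # First- and second-order statistics (mean and standard deviation)
    # per spectral band; image has shape H x W x bands.
    return np.concatenate([image.mean(axis=(0, 1)), image.std(axis=(0, 1))])

def global_pool(local_features):
    # SSBFC pools over the whole image scene rather than a spatial
    # pyramid; here, simple average pooling of encoded local features.
    return np.asarray(local_features, dtype=float).mean(axis=0)
```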

  18. Advanced radiometric and interferometric millimeter-wave scene simulations

    Science.gov (United States)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  19. Text string detection from natural scenes by structure-based partition and grouping.

    Science.gov (United States)

    Yi, Chucai; Tian, YingLi

    2011-09-01

    Text information in natural scene images serves as important clues for many image-based applications such as scene understanding, content-based image retrieval, assistive navigation, and automatic geocoding. However, locating text from a complex background with multiple colors is a challenging task. In this paper, we explore a new framework to detect text strings with arbitrary orientations in complex natural scene images. Our proposed framework of text string detection consists of two steps: 1) image partition to find text character candidates based on local gradient features and color uniformity of character components and 2) character candidate grouping to detect text strings based on joint structural features of text characters in each text string such as character size differences, distances between neighboring characters, and character alignment. By assuming that a text string has at least three characters, we propose two algorithms of text string detection: 1) adjacent character grouping method and 2) text line grouping method. The adjacent character grouping method calculates the sibling groups of each character candidate as string segments and then merges the intersecting sibling groups into text string. The text line grouping method performs Hough transform to fit text line among the centroids of text candidates. Each fitted text line describes the orientation of a potential text string. The detected text string is presented by a rectangle region covering all characters whose centroids are cascaded in its text line. To improve efficiency and accuracy, our algorithms are carried out in multi-scales. The proposed methods outperform the state-of-the-art results on the public Robust Reading Dataset, which contains text only in horizontal orientation. Furthermore, the effectiveness of our methods to detect text strings with arbitrary orientations is evaluated on the Oriented Scene Text Dataset collected by ourselves containing text strings in nonhorizontal
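As a toy illustration of the "at least three characters" grouping idea above, near-collinear triples of character-candidate centroids can be found directly. This is a simplified stand-in for the paper's Hough-transform text line grouping, and the collinearity tolerance is an assumed value.

```python
import numpy as np
from itertools import combinations

def collinear_groups(centroids, tol=2.0):
    # Find triples of character centroids lying (nearly) on one line,
    # reflecting the assumption that a text string has >= 3 characters.
    groups = []
    pts = np.asarray(centroids, dtype=float)
    for i, j, k in combinations(range(len(pts)), 3):
        a, b, c = pts[i], pts[j], pts[k]
        # Cross product magnitude = twice the triangle area;
        # near zero (relative to the baseline length) means collinear.
        area2 = abs((b[0] - a[0]) * (c[1] - a[1])
                    - (b[1] - a[1]) * (c[0] - a[0]))
        base = np.linalg.norm(b - a) + 1e-9
        if area2 / base < tol:
            groups.append((i, j, k))
    return groups
```

A full system would then merge overlapping triples into text strings and fit an oriented bounding rectangle around each group.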

  20. Gestalt-like constraints produce veridical (Euclidean) percepts of 3D indoor scenes.

    Science.gov (United States)

    Kwon, TaeKyu; Li, Yunfeng; Sawada, Tadamasa; Pizlo, Zygmunt

    2016-09-01

    This study, which was strongly influenced by Gestalt ideas, extends our prior work on the role of a priori constraints in the veridical perception of 3D shapes to the perception of 3D scenes. Our experiments tested how human subjects perceive the layout of a naturally-illuminated indoor scene that contains common symmetrical 3D objects standing on a horizontal floor. In one task, the subject was asked to draw a top view of a scene that was viewed either monocularly or binocularly. The top views the subjects reconstructed were configured accurately except for their overall size. These size errors varied from trial to trial, and were shown most likely to result from the presence of a response bias. There was little, if any, evidence of systematic distortions of the subjects' perceived visual space, the kind of distortions that have been reported in numerous experiments run under very unnatural conditions. This shown, we proceeded to use Foley's (Vision Research 12 (1972) 323-332) isosceles right triangle experiment to test the intrinsic geometry of visual space directly. This was done with natural viewing, with the impoverished viewing conditions Foley had used, as well as with a number of intermediate viewing conditions. Our subjects produced very accurate triangles when the viewing conditions were natural, but their performance deteriorated systematically as the viewing conditions were progressively impoverished. Their perception of visual space became more compressed as their natural visual environment was degraded. Once this was shown, we developed a computational model that emulated the most salient features of our psychophysical results. We concluded that human observers see 3D scenes veridically when they view natural 3D objects within natural 3D environments. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  2. Detection of appearing and disappearing objects in complex acoustic scenes.

    Directory of Open Access Journals (Sweden)

    Francisco Cervantes Constantino

    Full Text Available The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an "early warning device," rapidly directing attention to new events. Here, we investigate listeners' sensitivity to changes in complex acoustic scenes-what makes certain events "pop-out" and grab attention while others remain unnoticed? We use artificial "scenes" populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack semantic attributes, which may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between "appear" and "disappear" events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling; response times are short, with little effect of scene-size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene-size; response times are slow, and even when change is detected, the changed component is rarely successfully identified. We also measured change detection performance when a noise or silent gap was inserted at the time of change or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection. 
Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used

  3. Three-dimensional measurement system for crime scene documentation

    Science.gov (United States)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurement techniques (such as photogrammetry, time of flight, structure from motion or structured light) are becoming a standard in the crime scene documentation process. The usage of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects the actual standards in crime scene documentation - it is designed to perform measurements in two stages. The first, most general stage of documentation is prepared with a scanner of relatively low spatial resolution but a large measuring volume - it is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, a software platform for measurement management (connecting with scanners and carrying out measurements, with automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene and many others. We also provide an outcome from research on metrological validation of the scanners, performed according to the VDI/VDE standard, and results from measurement sessions conducted at real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  4. AR goggles make crime scene investigation a desk job

    OpenAIRE

    Aron, Jacob; NORTHFIELD, Dean

    2012-01-01

    CRIME scene investigators could one day help solve murders without leaving the office. A pair of augmented reality glasses could allow local police to virtually tag objects in a crime scene, and build a clean record of the scene in 3D video before evidence is removed for processing. The system, being developed by Oytun Akman and colleagues at the Delft University of Technology in the Netherlands, consists of a head-mounted display receiving 3D video from a pair of attached cameras controll...

  5. Picture models for 2-scene comics creating system

    Directory of Open Access Journals (Sweden)

    Miki UENO

    2015-03-01

    Full Text Available Recently, computer understanding of pictures and stories has become one of the most important research topics in computer science. However, there is little research on human-like understanding by computers, because pictures have no fixed format and are more lyrical than natural language. For picture understanding, comics are a suitable target because they consist of clear, simple story plots and separated scenes. In this paper, we propose two different types of picture models for a 2-scene comics creating system. We also show how to apply the 2-scene comics creating system by means of the proposed picture models.

  6. Learning to Model Task-Oriented Attention.

    Science.gov (United States)

    Zou, Xiaochun; Zhao, Xinbo; Wang, Jian; Yang, Yongjia

    2016-01-01

    For many applications in graphics, design, and human-computer interaction, it is essential to understand where humans look in a scene when performing a particular task. Models of saliency can be used to predict fixation locations, but most previous saliency models focused on the free-viewing task. They are based on bottom-up computation that does not consider task-oriented image semantics and often does not match actual eye movements. To address this problem, we collected eye tracking data from 11 subjects as they performed particular search tasks in 1307 images, together with annotation data for 2,511 segmented objects with fine contours and 8 semantic attributes. Using this database as training and testing examples, we learn a model of saliency based on bottom-up image features and a target position feature. Experimental results demonstrate the importance of the target information in the prediction of task-oriented visual attention.
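The general idea of combining bottom-up feature maps with a target-position prior can be sketched as follows. This is a hypothetical minimal model (a learned linear combination squashed by a sigmoid), not the authors' trained model; the weights and map names are illustrative assumptions.

```python
import numpy as np

def predict_saliency(feature_maps, target_prior, weights, bias=0.0):
    # Stack bottom-up feature maps with a target-position prior map,
    # combine them with learned weights, and squash to [0, 1].
    stacked = np.stack(list(feature_maps) + [target_prior])
    combined = np.tensordot(weights, stacked, axes=1) + bias
    return 1.0 / (1.0 + np.exp(-combined))
```

In a task-oriented setting, the weight on the target prior would be fit from eye-tracking data so that fixations near the search target raise predicted saliency.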

  7. Embryo disposition and the new death scene

    Directory of Open Access Journals (Sweden)

    Ellison, David

    2011-01-01

    Full Text Available In the IVF clinic - a place designed principally for the production and implantation of embryos - scientists and IVF recipients are faced with decisions regarding the disposition of frozen embryos. At this time there are hundreds of thousands of cryopreserved embryos awaiting such determinations. They may be thawed for transfer to the woman herself, they may be donated for research or for use by other infertile couples, they may remain in frozen storage, or they may variously be discarded by being allowed to 'succumb', or 'perish'. Where the choice is discard, some IVF clients have chosen to formalise the process through ceremony. A new language is emerging in response to the desires of the would-be-parents who might wish to characterise the discard experience as a ‘good death’. This article examines the procedure known as ‘compassionate transfer’ where the embryo to be discarded is placed in the woman’s vagina where it is clear that it will not develop further. An alternate method has the embryo transferred in the usual manner but without the benefit of fertility-enhancing hormones at a point in the cycle unreceptive to implantation. The embryo destined for disposal is thus removed from the realm of technological possibility and ‘returned’ to the female body for a homely death. While debates continue about whether or not embryos constitute life, new practices are developing in response to the emotional experience of embryo discard. We argue that compassionate transfer is a death scene taking shape. In this article, we take the measure of this new death scene’s fabrication, and consider the form, significance, and legal complexity of its ceremonies.

  8. Stakeholder Positioning and Cultural Diversity in the Creative Sector: A Case Study of the London Modern Architecture Scene

    NARCIS (Netherlands)

    Aalbers, H.L.; Kamp, A.; Erbe, N.; Normore, A.H.

    2015-01-01

    This chapter explores the antecedents of stakeholder positioning in the creative sector, a sector well known for its diversity in organizational cultures. Drawing from empirical data collected at the heart of London's modern architecture scene, we analyze the interactive process of commissioned

  9. Ocfentanil overdose fatality in the recreational drug scene.

    Science.gov (United States)

    Coopman, Vera; Cordonnier, Jan; De Leeuw, Marc; Cirimele, Vincent

    2016-09-01

    This paper describes the first reported death involving ocfentanil, a potent synthetic opioid and structure analogue of fentanyl abused as a new psychoactive substance in the recreational drug scene. A 17-year-old man with a history of illegal substance abuse was found dead in his home after snorting a brown powder purchased over the internet with bitcoins. Acetaminophen, caffeine and ocfentanil were identified in the powder by gas chromatography mass spectrometry and reversed-phase liquid chromatography with diode array detector. Quantitation of ocfentanil in biological samples was performed using a target analysis based on liquid-liquid extraction and ultra performance liquid chromatography tandem mass spectrometry. In the femoral blood taken at the external body examination, the following concentrations were measured: ocfentanil 15.3 μg/L, acetaminophen 45 mg/L and caffeine 0.23 mg/L. Tissues sampled at autopsy were analyzed to study the distribution of ocfentanil. The comprehensive systematic toxicological analysis on the post-mortem blood and tissue samples was negative for other compounds. Based on circumstantial evidence, autopsy findings and the results of the toxicological analysis, the medical examiner concluded that the cause of death was an acute intoxication with ocfentanil. The manner of death was assumed to be accidental after snorting the powder. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  10. Performance of autofocusing schemes for single target and populated scenes behind unknown walls

    Science.gov (United States)

    Ahmad, Fauzia; Amin, Moeness G.

    2007-04-01

    The quality and reliability of through-the-wall radar imagery are governed, among other things, by knowledge of the wall characteristics. Ambiguity in wall characteristics has a two-fold effect: it smears and blurs the image, and it shifts the imaged target positions. Higher-order standardized moments have been shown to be suitable measures of the degree of smearing and blurriness of through-the-wall images. These moments can be used to tune the wall variables to achieve autofocusing. It is noted that the solution to the autofocusing problem is not unique, and there exist several assumed wall characteristics, in addition to the exact ones, that lead to similar focused images. In this paper, we analyze the dependency of the estimated autofocusing wall parameters on the imaged scene, specifically target density and location, in the presence of a single uniform wall. We consider single- and multiple-target cases with different scene complexity and population. Supporting simulation results are also provided.
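The moment-based autofocusing idea above can be sketched as follows: score each candidate image (one per hypothesized set of wall parameters) by a higher-order standardized moment and keep the sharpest. This is a simplified illustration, assuming the fourth standardized moment as the focus metric and assuming the candidate images have already been formed.

```python
import numpy as np

def standardized_moment(image, order=4):
    # Higher-order standardized moment of pixel magnitudes. A focused
    # image concentrates energy in a few bright pixels, which raises
    # this statistic; smearing spreads energy and lowers it.
    x = np.abs(image).ravel()
    mu, sigma = x.mean(), x.std()
    return ((x - mu) ** order).mean() / sigma ** order

def autofocus(candidate_images):
    # Return the index of the candidate wall-parameter hypothesis
    # whose reconstructed image maximizes the focus metric.
    scores = [standardized_moment(img) for img in candidate_images]
    return int(np.argmax(scores))
```

As the paper notes, several wall hypotheses can score nearly equally well, so the maximizer need not coincide with the true wall parameters.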

  11. Hybrid infrared scene projector (HIRSP): a high dynamic range infrared scene projector, part II

    Science.gov (United States)

    Cantey, Thomas M.; Bowden, Mark; Cosby, David; Ballard, Gary

    2008-04-01

    This paper is a continuation of the merging of two dynamic infrared scene projector technologies to provide a unique and innovative solution for the simulation of high dynamic temperature ranges for testing infrared imaging sensors. This paper will present some of the challenges and performance issues encountered in implementing this unique projector system into a Hardware-in-the-Loop (HWIL) simulation facility. The projection system combines the technologies of a Honeywell BRITE II extended voltage range emissive resistor array device and an optically scanned laser diode array projector (LDAP). The high apparent temperature simulations are produced from the luminescent infrared radiation emitted by the high power laser diodes. The hybrid infrared projector system is being integrated into an existing HWIL simulation facility and is used to provide real-world high radiance imagery to an imaging infrared unit under test. The performance and operation of the projector is presented demonstrating the merit and success of the hybrid approach. The high dynamic range capability simulates a 250 Kelvin apparent background temperature to 850 Kelvin maximum apparent temperature signatures. This is a large increase in radiance projection over current infrared scene projection capabilities.

  12. Downhole Fluid Analyzer Development

    Energy Technology Data Exchange (ETDEWEB)

    Bill Turner

    2006-11-28

    A novel fiber optic downhole fluid analyzer has been developed for operation in production wells. This device will allow real-time determination of the oil, gas and water fractions of fluids from different zones in a multizone or multilateral completion environment. The device uses near infrared spectroscopy and induced fluorescence measurement to unambiguously determine the oil, water and gas concentrations at all but the highest water cuts. The only downhole components of the system are the fiber optic cable and windows. All of the active components--light sources, sensors, detection electronics and software--will be located at the surface, and will be able to operate multiple downhole probes. Laboratory testing has demonstrated that the sensor can accurately determine oil, water and gas fractions with a less than 5 percent standard error. Once installed in an intelligent completion, this sensor will give the operating company timely information about the fluids arising from various zones or multilaterals in a complex completion pattern, allowing informed decisions to be made on controlling production. The research and development tasks are discussed along with a market analysis.

  13. Day-24: Energy Balance Model for Infrared Scene Generation

    National Research Council Canada - National Science Library

    Sutherland, Robert

    2002-01-01

    .... Meteorological inputs are required at only one key location inside the scene area. These inputs include air temperature, wind speed, relative humidity, cloud cover, and subsurface "deep soil" temperature...

  14. Earth Virtual-Environment Immersive Scene Display System, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — In response to the NASA need for a free-standing immersive virtual scene display system interfaced with an exercise treadmill to mimic terrestrial exercise...

  15. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

    ...), High-spatial resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data are studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  16. Radiative transfer model for heterogeneous 3-D scenes

    Science.gov (United States)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A general mathematical framework for simulating processes in heterogeneous 3-D scenes is presented. Specifically, a model was designed and coded for application to radiative transfers in vegetative scenes. The model is unique in that it predicts (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy, (2) the spectral absorption as a function of location within the scene, and (3) the directional spectral radiance as a function of the sensor's location within the scene. The model was shown to follow known physical principles of radiative transfer. Initial verification of the model as applied to a soybean row crop showed that the simulated directional reflectance data corresponded relatively well in gross trends to the measured data. However, the model can be greatly improved by incorporating more sophisticated and realistic anisotropic scattering algorithms.

  17. Robust scene stitching in large scale mobile mapping

    OpenAIRE

    Schouwenaars, Filip; Timofte, Radu; Van Gool, Luc

    2013-01-01

    Schouwenaars F., Timofte R., Van Gool L., ''Robust scene stitching in large scale mobile mapping'', 24th British machine vision conference - BMVC 2013, 11 pp., September 9-13, 2013, Bristol, United Kingdom.

  18. Exploration of Crime-Scene Characteristics in Juvenile Homicide in the French-Speaking Part of Belgium.

    Science.gov (United States)

    Gerard, F Jeane; Whitfield, Kate C; Browne, Kevin D

    2017-04-01

    This study explores modeling crime-scene characteristics of juvenile homicide in the French-speaking part of Belgium. Multidimensional scaling analysis was carried out on crime-scene characteristics derived from the court files of 67 individuals under 22 years old, who had been charged with murder or attempted murder (1995-2009). Three thematic regions (Expressive: multiple offenders; Instrumental: theft; Instrumental: sex/forensic awareness) distinguished types of aggression displayed during the offense. These themes reaffirm that the expressive-instrumental differentiation found in general homicide studies is valuable when attempting to discriminate juvenile homicides. The proposed framework was found useful to classify the offenses, as 84% of homicides were assigned to a dominant theme. Additionally, associations between crime-scene characteristics and offenders' characteristics were analyzed, but no associations were found, therefore failing to provide empirical support for the homology assumption. Cultural comparisons, as well as the influence of age on the thematic structure are discussed.

  19. Dynamic IR scene projector using the digital micromirror device

    Science.gov (United States)

    Gao, JiaoBo; Wang, Jun; Yang, Bin; Wang, JiLong; Wang, WeiNa; Xie, JunHu; Hu, Yu

    2005-01-01

    We have developed a new dynamic infrared scene projector using the Texas Instruments Digital Micromirror Device (DMD), which has been modified to project images suitable for testing sensors and seekers operating in the UV, visible, and IR wavebands. This paper provides an overview of the design and performance of the projection system, as well as example imagery from prototype projector systems. The dynamic IR scene contains 1024×768 pixels and can be updated at a rate of approximately 85 Hz.

  20. Open revolver cylinder at the suicide death scene.

    Science.gov (United States)

    Wetli, Charles V; Krivosta, George; Sturiano, Jack V

    2002-09-01

    Revolvers with an open cylinder were found at three death scenes of apparently self-inflicted gunshot wounds. All three handguns were Smith & Wesson .38 or .357 revolvers. Investigation revealed that firing the gun with the thumb on the cylinder release latch could disengage the cylinder. A combination of gravity and recoil impact against the thumb would open the cylinder and even allow the casing and the unspent cartridges to fall from the gun, creating a confusing death scene.

  1. Adaptive scene-dependent filters in online learning environments

    OpenAIRE

    Götting, Michael; Steil, Jochen J.; Wersing, Heiko; Körner, Edgar; Ritter, Helge

    2006-01-01

    In this paper we propose the Adaptive Scene Dependent Filters (ASDF) to enhance the online learning capabilities of an object recognition system in real-world scenes. The ASDF method proposed extends the idea of unsupervised segmentation to a flexible, highly dynamic image segmentation architecture. We combine unsupervised segmentation to define coherent groups of pixels with a recombination step using top-down information to determine which segments belong together to the object. ...

  2. Crime Scene Reconstruction Using a Fully Geomatic Approach

    Directory of Open Access Journals (Sweden)

    Andrea Lingua

    2008-10-01

    Full Text Available This paper is focused on two main topics: crime scene reconstruction, based on a geomatic approach, and crime scene analysis, through GIS based procedures. According to the experience of the authors in performing forensic analysis for real cases, the aforesaid topics will be examined with the specific goal of verifying the relationship of human walk paths at a crime scene with blood patterns on the floor. In order to perform such analyses, the availability of pictures taken by first aiders is mandatory, since they provide information about the crime scene before items are moved or interfered with. Generally, those pictures are affected by large geometric distortions, thus - after a brief description of the geomatic techniques suitable for the acquisition of reference data (total station surveying, photogrammetry and laser scanning) - it will be shown the developed methodology, based on photogrammetric algorithms, aimed at calibrating, georeferencing and mosaicking the available images acquired on the scene. The crime scene analysis is based on a collection of GIS functionalities for simulating human walk movements and creating a statistically significant sample. The developed GIS software component will be described in detail, showing how the analysis of this statistical sample of simulated human walks allows to rigorously define the probability of performing a certain walk path without touching the bloodstains on the floor.

  3. Crime Scene Reconstruction Using a Fully Geomatic Approach.

    Science.gov (United States)

    Agosto, Eros; Ajmar, Andrea; Boccardo, Piero; Giulio Tonolo, Fabio; Lingua, Andrea

    2008-10-08

    This paper is focused on two main topics: crime scene reconstruction, based on a geomatic approach, and crime scene analysis, through GIS based procedures. According to the experience of the authors in performing forensic analysis for real cases, the aforesaid topics will be examined with the specific goal of verifying the relationship of human walk paths at a crime scene with blood patterns on the floor. In order to perform such analyses, the availability of pictures taken by first aiders is mandatory, since they provide information about the crime scene before items are moved or interfered with. Generally, those pictures are affected by large geometric distortions, thus - after a brief description of the geomatic techniques suitable for the acquisition of reference data (total station surveying, photogrammetry and laser scanning) - it will be shown the developed methodology, based on photogrammetric algorithms, aimed at calibrating, georeferencing and mosaicking the available images acquired on the scene. The crime scene analysis is based on a collection of GIS functionalities for simulating human walk movements and creating a statistically significant sample. The developed GIS software component will be described in detail, showing how the analysis of this statistical sample of simulated human walks allows to rigorously define the probability of performing a certain walk path without touching the bloodstains on the floor.
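
    The core GIS analysis described above, estimating how likely a given walk path is to avoid the bloodstains, can be illustrated with a toy Monte Carlo sketch in Python. The straight-line walk model with per-step lateral jitter, the circular stain footprints, and all coordinates are invented stand-ins for the authors' actual GIS walk simulator:

    ```python
    import random

    def simulate_walk_avoidance(stains, start, goal, n_trials=10_000,
                                step=0.25, jitter=0.3, seed=7):
        """Estimate the probability that a walk from start to goal misses all stains.

        stains: list of (x, y, radius) circular bloodstain footprints (meters).
        Each simulated walk follows the straight line start->goal with random
        lateral jitter at every step, a crude stand-in for a GIS walk simulator.
        """
        rng = random.Random(seed)
        (x0, y0), (x1, y1) = start, goal
        length = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        n_steps = max(1, int(length / step))
        # unit normal to the walking direction, used to apply lateral jitter
        nx, ny = -(y1 - y0) / length, (x1 - x0) / length
        clean = 0
        for _ in range(n_trials):
            hit = False
            for i in range(n_steps + 1):
                t = i / n_steps
                off = rng.gauss(0.0, jitter)
                px = x0 + t * (x1 - x0) + off * nx
                py = y0 + t * (y1 - y0) + off * ny
                for sx, sy, r in stains:
                    if (px - sx) ** 2 + (py - sy) ** 2 <= r * r:
                        hit = True
                        break
                if hit:
                    break
            clean += not hit
        return clean / n_trials

    # Toy scene: a 5 m walk past two stains offset from the direct path.
    p = simulate_walk_avoidance(stains=[(2.0, 0.6, 0.3), (3.5, -0.5, 0.2)],
                                start=(0.0, 0.0), goal=(5.0, 0.0))
    print(f"estimated P(no stain touched) = {p:.3f}")
    ```

    Running many jittered walks and counting the stain-free fraction is the essence of building a "statistically significant sample" of simulated walks.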

  4. Helicopter Scene Response for Stroke Patients: A 5-Year Experience.

    Science.gov (United States)

    Hawk, Andrew; Marco, Catherine; Huang, Matt; Chow, Bonnie

    The purpose of this study was to examine the usefulness of an emergency medical service (EMS)-requested air medical helicopter response directly to the scene for a patient with clinical evidence of an ischemic cerebrovascular accident (CVA) and transport to a regional comprehensive CVA center. CareFlight, an air medical critical care transportation service, is based in Dayton, OH. The 3 CareFlight helicopters are geographically located and provided transport to all CVA scene patients in this study. A retrospective chart review was completed for all CareFlight CVA scene flights for 5 years (2011-2015). A total of 136 adult patients were transported. EMS criteria included CVA symptom presence for less than 3 hours or awoke abnormal, nonhypoglycemia, and a significantly positive Cincinnati Prehospital Stroke Scale. The majority of patients (75%) met all 3 EMS CVA scene criteria; 27.5% of these patients received peripheral tissue plasminogen activator, and 9.8% underwent a neurointerventional procedure. Using a 3-step EMS triage for acute CVA, air medical transport from the scene to a comprehensive stroke center allowed for the timely administration of tissue plasminogen activator and/or a neurointerventional procedure in a substantive percentage of patients. Further investigation into air medical scene response for acute stroke is warranted. Copyright © 2016 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  5. EigenScape: A Database of Spatial Acoustic Scene Recordings

    Directory of Open Access Journals (Sweden)

    Marc Ciufo Green

    2017-11-01

    Full Text Available The classification of acoustic scenes and events is an emerging area of research in the field of machine listening. Most of the research conducted so far uses spectral features extracted from monaural or stereophonic audio rather than spatial features extracted from multichannel recordings. This is partly due to the lack thus far of a substantial body of spatial recordings of acoustic scenes. This paper formally introduces EigenScape, a new database of fourth-order Ambisonic recordings of eight different acoustic scene classes. The potential applications of a spatial machine listening system are discussed before detailed information on the recording process and dataset are provided. A baseline spatial classification system using directional audio coding (DirAC) techniques is detailed and results from this classifier are presented. The classifier is shown to give good overall scene classification accuracy across the dataset, with 7 of 8 scenes classified with an accuracy greater than 60%, and an 11% improvement in overall accuracy compared to the use of Mel-frequency cepstral coefficient (MFCC) features. Further analysis of the results shows potential improvements to the classifier. It is concluded that the results validate the new database and show that spatial features can characterise acoustic scenes and as such are worthy of further investigation.
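
    The directional analysis at the heart of DirAC can be sketched from first-order (B-format) components: the active intensity vector, proportional to the time average of w(t)·[x(t), y(t)], points toward the source. This is a simplified broadband, first-order illustration; the EigenScape baseline operates on fourth-order recordings and per time-frequency bin:

    ```python
    import numpy as np

    def doa_from_bformat(w, x, y):
        """Estimate a broadband azimuth (radians) from B-format channels W, X, Y.

        The active intensity vector is proportional to the time average of
        w(t) * [x(t), y(t)]; its direction points toward the source.
        (Simplified: real DirAC estimates this per time-frequency bin.)
        """
        ix = np.mean(w * x)
        iy = np.mean(w * y)
        return np.arctan2(iy, ix)

    # Synthetic plane wave from azimuth 40 degrees encoded to first order:
    # W = s, X = s*cos(theta), Y = s*sin(theta).
    rng = np.random.default_rng(0)
    s = rng.standard_normal(48000)
    theta = np.deg2rad(40.0)
    w_ch, x_ch, y_ch = s, s * np.cos(theta), s * np.sin(theta)
    est = np.rad2deg(doa_from_bformat(w_ch, x_ch, y_ch))
    print(f"estimated azimuth: {est:.1f} deg")  # 40.0
    ```

    With an ideal noiseless encoding the intensity average recovers the encoding angle exactly; real recordings would require band-wise averaging and diffuseness weighting.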

  6. Political conservatism predicts asymmetries in emotional scene memory.

    Science.gov (United States)

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes and participants were to respond whether a scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though, this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extend to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology. Published by Elsevier B.V.
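
    Results like these rest on a standard old/new recognition analysis: hit and false-alarm rates per valence category, often combined into the sensitivity index d′. A sketch with invented response counts, not data from the study:

    ```python
    from statistics import NormalDist

    def d_prime(hits, misses, false_alarms, correct_rejections):
        """Signal-detection sensitivity for an old/new recognition test.

        A log-linear correction (add 0.5 to each cell) avoids infinite
        z-scores when a rate is exactly 0 or 1.
        """
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        z = NormalDist().inv_cdf
        return z(hit_rate) - z(fa_rate)

    # Invented counts for 40 old / 40 new scenes per valence category:
    for valence, (h, m, fa, cr) in {
        "negative": (34, 6, 8, 32),
        "positive": (28, 12, 8, 32),
    }.items():
        print(f"{valence}: d' = {d_prime(h, m, fa, cr):.2f}")
    ```

    A larger negativity bias, in these terms, is simply a larger d′ gap between negative and positive scenes.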

  7. Visual analyzer as anticipatory system (functional organization)

    Science.gov (United States)

    Kirvelis, Dobilas

    2000-05-01

    Hypothetical functional organization of the visual analyzer is presented. Visual perception, the anatomical and morphological structure of animal visual systems, and neurophysiological, psychological, and psychophysiological data are interpreted in the light of theoretical solutions for image recognition and the simulation of visual processes, pointing to active information processing. The activities in special areas of the cortex are as follows: focused attention, prediction with analysis and synthesis of visual scenes, and predictive mental images. In the projection zone of the visual cortex, Area Striata or V1, a "sensory" screen (SS) and a "reconstruction" screen (RS) are supposed to exist. The functional structure of the visual analyzer consists of: analysis of visual scenes projected onto the SS; "tracing" of images; preliminary recognition; reversive image reconstruction onto the RS; comparison of images projected onto the SS with images reconstructed onto the RS; and "correction" of preliminary recognition. Special attention is paid to the quasi-holographic principles of neuronal organization within the brain, of image "tracing," and of reverse image reconstruction. Tachistoscopic experiments revealed that the duration of one such hypothesis-testing cycle of the human visual analyzer is about 8-10 milliseconds.

  8. A Task-Based Approach to Analyzing Processes

    National Research Council Canada - National Science Library

    Stone, Brice

    1999-01-01

    As much of corporate America has embraced business process reengineering, the Government Performance and Results Act of 1993 and the Department of Defense Corporate Information Management Initiative...

  9. Left Superior Temporal Gyrus Is Coupled to Attended Speech in a Cocktail-Party Auditory Scene.

    Science.gov (United States)

    Vander Ghinst, Marc; Bourguignon, Mathieu; Op de Beeck, Marc; Wens, Vincent; Marty, Brice; Hassid, Sergio; Choufani, Georges; Jousmäki, Veikko; Hari, Riitta; Van Bogaert, Patrick; Goldman, Serge; De Tiège, Xavier

    2016-02-03

    Using a continuous listening task, we evaluated the coupling between the listener's cortical activity and the temporal envelopes of different sounds in a multitalker auditory scene using magnetoencephalography and corticovocal coherence analysis. Neuromagnetic signals were recorded from 20 right-handed healthy adult humans who listened to five different recorded stories (attended speech streams), one without any multitalker background (No noise) and four mixed with a "cocktail party" multitalker background noise at four signal-to-noise ratios (5, 0, -5, and -10 dB) to produce speech-in-noise mixtures, here referred to as Global scene. Coherence analysis revealed that the modulations of the attended speech stream, presented without multitalker background, were coupled at ∼0.5 Hz to the activity of both superior temporal gyri, whereas the modulations at 4-8 Hz were coupled to the activity of the right supratemporal auditory cortex. In cocktail party conditions, with the multitalker background noise, the coupling was at both frequencies stronger for the attended speech stream than for the unattended Multitalker background. The coupling strengths decreased as the Multitalker background increased. During the cocktail party conditions, the ∼0.5 Hz coupling became left-hemisphere dominant, compared with bilateral coupling without the multitalker background, whereas the 4-8 Hz coupling remained right-hemisphere lateralized in both conditions. The brain activity was not coupled to the multitalker background or to its individual talkers. The results highlight the key role of listener's left superior temporal gyri in extracting the slow ∼0.5 Hz modulations, likely reflecting the attended speech stream within a multitalker auditory scene. When people listen to one person in a "cocktail party," their auditory cortex mainly follows the attended speech stream rather than the entire auditory scene. However, how the brain extracts the attended speech stream from the whole
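
    The coherence analysis used here measures frequency-domain coupling between two signals, such as a speech envelope and a neuromagnetic channel. A minimal Welch-style magnitude-squared coherence estimate in numpy, with a synthetic shared 4 Hz component standing in for real MEG data:

    ```python
    import numpy as np

    def msc(x, y, fs, nperseg=1024):
        """Magnitude-squared coherence via Welch-averaged cross-spectra."""
        win = np.hanning(nperseg)
        step = nperseg // 2  # 50% overlap between segments
        sxx = syy = sxy = 0.0
        for start in range(0, len(x) - nperseg + 1, step):
            fx = np.fft.rfft(win * x[start:start + nperseg])
            fy = np.fft.rfft(win * y[start:start + nperseg])
            sxx = sxx + np.abs(fx) ** 2
            syy = syy + np.abs(fy) ** 2
            sxy = sxy + fx * np.conj(fy)
        freqs = np.fft.rfftfreq(nperseg, 1.0 / fs)
        coh = np.abs(sxy) ** 2 / (sxx * syy)
        return freqs, coh

    # Toy example: a shared 4 Hz component plus independent noise per channel.
    rng = np.random.default_rng(1)
    fs = 200.0
    t = np.arange(int(60 * fs)) / fs
    shared = np.sin(2 * np.pi * 4.0 * t)
    env = shared + 0.5 * rng.standard_normal(t.size)   # "speech envelope"
    meg = shared + 0.5 * rng.standard_normal(t.size)   # "cortical signal"
    freqs, coh = msc(env, meg, fs)
    print(f"coherence near 4 Hz: {coh[np.argmin(np.abs(freqs - 4.0))]:.2f}")
    ```

    Coherence is high only at frequencies where the two signals share phase-locked activity, which is how coupling to the ~0.5 Hz and 4-8 Hz speech modulations is isolated.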

  10. Traffic Command Gesture Recognition for Virtual Urban Scenes Based on a Spatiotemporal Convolution Neural Network

    Directory of Open Access Journals (Sweden)

    Chunyong Ma

    2018-01-01

    Full Text Available Intelligent recognition of traffic police command gestures increases authenticity and interactivity in virtual urban scenes. To actualize real-time traffic gesture recognition, a novel spatiotemporal convolution neural network (ST-CNN) model is presented. We utilized Kinect 2.0 to construct a traffic police command gesture skeleton (TPCGS) dataset collected from 10 volunteers. Subsequently, convolution operations on the locational change of each skeletal point were performed to extract temporal features, analyze the relative positions of skeletal points, and extract spatial features. After temporal and spatial features based on the three-dimensional positional information of traffic police skeleton points were extracted, the ST-CNN model classified positional information into eight types of Chinese traffic police gestures. The test accuracy of the ST-CNN model was 96.67%. In addition, a virtual urban traffic scene in which real-time command tests were carried out was set up, and a real-time test accuracy rate of 93.0% was achieved. The proposed ST-CNN model ensured a high level of accuracy and robustness. The ST-CNN model recognized traffic command gestures, and such recognition was found to control vehicles in virtual traffic environments, which enriches the interactive mode of the virtual city scene. Traffic command gesture recognition contributes to smart city construction.
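
    The temporal branch of such a model boils down to 1-D convolutions over each skeletal point's coordinate trajectory. A toy sketch of that step (the fixed difference kernel, joint count, and random data are illustrative; the actual ST-CNN learns its filters and adds a spatial branch and a classifier):

    ```python
    import numpy as np

    def temporal_conv_features(skeleton, kernel):
        """Convolve each joint/axis trajectory with a temporal kernel.

        skeleton: array of shape (frames, joints, 3), e.g. from a Kinect.
        Returns an array (frames - len(kernel) + 1, joints, 3) of responses.
        """
        frames, joints, dims = skeleton.shape
        out_len = frames - len(kernel) + 1
        out = np.empty((out_len, joints, dims))
        for j in range(joints):
            for d in range(dims):
                out[:, j, d] = np.convolve(skeleton[:, j, d], kernel, mode="valid")
        return out

    # 30 frames of a fake 25-joint skeleton; a [-1, 0, 1] kernel responds to
    # frame-to-frame motion of each coordinate.
    rng = np.random.default_rng(0)
    seq = rng.standard_normal((30, 25, 3))
    feats = temporal_conv_features(seq, np.array([-1.0, 0.0, 1.0]))
    print(feats.shape)  # (28, 25, 3)
    ```

    Stacking such temporal responses with spatial features (relative joint positions) gives the kind of input the classification layers operate on.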

  11. Classifying homicide offenders and predicting their characteristics from crime scene behavior.

    Science.gov (United States)

    Santtila, Pekka; Häkkänen, Helinä; Canter, David; Elfgren, Thomas

    2003-04-01

    A theoretical distinction between instrumental and expressive aggression was used in analyzing offender characteristics and their associations with crime scene actions in Finnish homicides. Twenty-one variables reflecting the offenders' criminal activity, previous relationships with intimates and victims, and general social and psychological adjustment were derived from files of single-offender/single-victim homicides occurring between 1980 and 1994 (n = 502). Additionally, three variables describing post-offense actions and police interview behavior were included. A multidimensional scaling procedure was used to investigate the interrelationships between the variables. A distinction between expressive and instrumental characteristics was observable in the empirical structure, which was divided into three subthemes of Instrumental, Expressive: Blood, and Expressive: Intimate. Associations between the characteristics with five previously identified subthemes of crime scene actions were computed. In addition, the subthemes of crime scene actions were related to post-offense actions and police interview behavior, with Expressive themes being associated with less denial as well as a greater likelihood of surrendering and confession. The practical usefulness for police investigations and theoretical implications of the results are discussed.

  12. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    Science.gov (United States)

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. A view not to be missed: Salient scene content interferes with cognitive restoration

    Science.gov (United States)

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  14. Task-oriented rehabilitation robotics.

    Science.gov (United States)

    Schweighofer, Nicolas; Choi, Younggeun; Winstein, Carolee; Gordon, James

    2012-11-01

    Task-oriented training is emerging as the dominant and most effective approach to motor rehabilitation of upper extremity function after stroke. Here, the authors propose that the task-oriented training framework provides an evidence-based blueprint for the design of task-oriented robots for the rehabilitation of upper extremity function in the form of three design principles: skill acquisition of functional tasks, active participation training, and individualized adaptive training. The previous robotic systems that incorporate elements of task-oriented training are then reviewed. Finally, the authors critically analyze their own attempt to design and test the feasibility of a TOR robot, ADAPT (Adaptive and Automatic Presentation of Tasks), which incorporates the three design principles. Because of its task-oriented training-based design, ADAPT departs from most other current rehabilitation robotic systems: it presents realistic functional tasks in which the task goal is constantly adapted, so that the individual actively performs doable but challenging tasks without physical assistance. To maximize efficacy for a large clinical population, the authors propose that future task-oriented robots need to incorporate yet-to-be developed adaptive task presentation algorithms that emphasize acquisition of fine motor coordination skills while minimizing compensatory movements.

  15. Radio Wave Propagation Scene Partitioning for High-Speed Rails

    Directory of Open Access Journals (Sweden)

    Bo Ai

    2012-01-01

    Full Text Available Radio wave propagation scene partitioning is necessary for wireless channel modeling. As far as we know, there are no standards of scene partitioning for high-speed rail (HSR scenarios, and therefore we propose the radio wave propagation scene partitioning scheme for HSR scenarios in this paper. Based on our measurements along the Wuhan-Guangzhou HSR, Zhengzhou-Xian passenger-dedicated line, Shijiazhuang-Taiyuan passenger-dedicated line, and Beijing-Tianjin intercity line in China, whose operation speeds are above 300 km/h, and based on the investigations on Beijing South Railway Station, Zhengzhou Railway Station, Wuhan Railway Station, Changsha Railway Station, Xian North Railway Station, Shijiazhuang North Railway Station, Taiyuan Railway Station, and Tianjin Railway Station, we obtain an overview of HSR propagation channels and record many valuable measurement data for HSR scenarios. On the basis of these measurements and investigations, we partitioned the HSR scene into twelve scenarios. Further work on theoretical analysis based on radio wave propagation mechanisms, such as reflection and diffraction, may lead us to develop the standard of radio wave propagation scene partitioning for HSR. Our work can also be used as a basis for the wireless channel modeling and the selection of some key techniques for HSR systems.

  16. Video Pedestrian Detection Based on Orthogonal Scene Motion Pattern

    Directory of Open Access Journals (Sweden)

    Jianming Qu

    2014-01-01

    Full Text Available In fixed video scenes, scene motion patterns can be a very useful prior knowledge for pedestrian detection, which is still a challenge at present. A new approach of cascade pedestrian detection using an orthogonal scene motion pattern model in a general-density video is developed in this paper. To statistically model the pedestrian motion pattern, a probability grid overlaying the whole scene is set up to partition the scene into paths and holding areas. Features extracted from different pattern areas are classified by a group of specific strategies. Instead of using a unitary classifier, the employed classifier is composed of two directional subclassifiers trained, respectively, on different samples selected along two orthogonal directions. Considering that the negative images from the detection window scanning are much more numerous than the positive ones, the cascade AdaBoost technique is adopted by the subclassifiers to reduce the negative image computations. The proposed approach is proven effective by static classification experiments and surveillance video experiments.
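
    The boosting machinery referenced here, AdaBoost over weak classifiers, can be sketched with simple decision stumps on synthetic feature vectors. This is a generic illustration of the technique, not the paper's detector or its window features:

    ```python
    import numpy as np

    def train_adaboost(X, y, n_rounds=20):
        """AdaBoost with depth-1 threshold stumps; labels y in {-1, +1}."""
        n, d = X.shape
        w = np.full(n, 1.0 / n)  # per-sample weights, reweighted each round
        stumps = []
        for _ in range(n_rounds):
            best = None
            for f in range(d):
                for thr in np.unique(X[:, f]):
                    for sign in (1, -1):
                        pred = sign * np.where(X[:, f] > thr, 1, -1)
                        err = w[pred != y].sum()
                        if best is None or err < best[0]:
                            best = (err, f, thr, sign)
            err, f, thr, sign = best
            err = max(err, 1e-12)
            alpha = 0.5 * np.log((1 - err) / err)  # stump vote weight
            pred = sign * np.where(X[:, f] > thr, 1, -1)
            w *= np.exp(-alpha * y * pred)         # upweight mistakes
            w /= w.sum()
            stumps.append((alpha, f, thr, sign))
        return stumps

    def predict(stumps, X):
        score = sum(a * s * np.where(X[:, f] > t, 1, -1) for a, f, t, s in stumps)
        return np.sign(score)

    # Toy data standing in for pedestrian/background window features.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 5))
    y = np.where(X[:, 0] + 0.5 * X[:, 2] > 0, 1, -1)
    stumps = train_adaboost(X, y)
    acc = (predict(stumps, X) == y).mean()
    print(f"training accuracy: {acc:.2f}")
    ```

    A cascade chains several such boosted classifiers so that cheap early stages reject most negative windows before the expensive stages run.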

  17. Emotional Semantic Recognition of Visual Scene in Flash Animation

    Directory of Open Access Journals (Sweden)

    Shi Lin

    2018-01-01

    Full Text Available Based on the organization structure of Flash animation files, we first use the edge-density method to segment the Flash animation into visual scenes, then extract visual features such as color and texture as the input parameters of a BP neural network, and set up the sample database. Secondly, we choose a suitable model for emotion classification, use eight emotional adjectives to describe the emotion of Flash animation (warm, delightful, exaggerated, funny, desolate, dreary, complex, and illusory), and mark the emotion value of each visual scene in the sample database, using it as the output parameter of the BP neural network. Finally, we train a BP neural network with appropriate transfer and learning functions to obtain the rules for mapping from the visual features of a scene to the semantic space, completing the automatic emotional-semantic classification of visual scenes. We applied the algorithm to the emotional-semantic recognition of 5012 visual scenes, with good experimental results. The results of our study can be used for classification, retrieval, and other tasks involving Flash animation based on emotional semantics.
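
    A BP (backpropagation) neural network of the kind described is a small multilayer perceptron mapping feature vectors to emotion classes. A minimal numpy sketch with invented dimensions: six synthetic "color/texture" features and eight classes matching the paper's eight emotional adjectives:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def softmax(z):
        e = np.exp(z - z.max(axis=1, keepdims=True))
        return e / e.sum(axis=1, keepdims=True)

    def train_bp(X, labels, n_hidden=16, n_classes=8, lr=0.5, epochs=800):
        """One-hidden-layer MLP trained with plain backpropagation."""
        n, d = X.shape
        Y = np.eye(n_classes)[labels]                  # one-hot targets
        W1 = rng.standard_normal((d, n_hidden)) * 0.1
        b1 = np.zeros(n_hidden)
        W2 = rng.standard_normal((n_hidden, n_classes)) * 0.1
        b2 = np.zeros(n_classes)
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                   # hidden activations
            p = softmax(h @ W2 + b2)                   # class probabilities
            g2 = (p - Y) / n                           # output-layer gradient
            g1 = (g2 @ W2.T) * (1 - h ** 2)            # backprop through tanh
            W2 -= lr * h.T @ g2; b2 -= lr * g2.sum(0)
            W1 -= lr * X.T @ g1; b1 -= lr * g1.sum(0)
        return W1, b1, W2, b2

    def predict_bp(params, X):
        W1, b1, W2, b2 = params
        return softmax(np.tanh(X @ W1 + b1) @ W2 + b2).argmax(axis=1)

    # Synthetic "scene features": 8 Gaussian clusters, one per emotion class.
    centers = rng.standard_normal((8, 6)) * 3.0
    labels = rng.integers(0, 8, size=400)
    X = centers[labels] + 0.5 * rng.standard_normal((400, 6))
    params = train_bp(X, labels)
    acc = (predict_bp(params, X) == labels).mean()
    print(f"training accuracy: {acc:.2f}")
    ```

    In the paper's pipeline the inputs would be color and texture features per scene and the eight outputs the emotion adjectives; the synthetic clusters here just exercise the mechanics.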

  18. Selective looking at natural scenes: Hedonic content and gender☆

    Science.gov (United States)

    Bradley, Margaret M.; Costa, Vincent D.; Lang, Peter J.

    2015-01-01

    Choice viewing behavior when looking at affective scenes was assessed to examine differences due to hedonic content and gender by monitoring eye movements in a selective looking paradigm. On each trial, participants viewed a pair of pictures that included a neutral picture together with an affective scene depicting either contamination, mutilation, threat, food, nude males, or nude females. The duration of time that gaze was directed to each picture in the pair was determined from eye fixations. Results indicated that viewing choices varied with both hedonic content and gender. Initially, gaze duration for both men and women was heightened when viewing all affective contents, but was subsequently followed by significant avoidance of scenes depicting contamination or nude males. Gender differences were most pronounced when viewing pictures of nude females, with men continuing to devote longer gaze time to pictures of nude females throughout viewing, whereas women avoided scenes of nude people, whether male or female, later in the viewing interval. For women, reported disgust of sexual activity was also inversely related to gaze duration for nude scenes. Taken together, selective looking as indexed by eye movements reveals differential perceptual intake as a function of specific content, gender, and individual differences. PMID:26156939

  19. Gordon Craig's Scene Project: a history open to revision

    Directory of Open Access Journals (Sweden)

    Luiz Fernando

    2014-09-01

    Full Text Available The article proposes a review of Gordon Craig's Scene project, an invention patented in 1910 and developed until 1922. Craig himself kept an ambiguous position as to whether it was an unfulfilled project. His son and biographer Edward Craig maintained that Craig's original aims were never achieved because of technical limitations, and most of the scholars who have examined the matter followed this position. Drawing on the actual screen models preserved in the Bibliothèque Nationale de France, Craig's original notebooks, and a short film from 1963, I argue that the patented project and the essay published in 1923 do, in fact, represent the materialisation of the dreamed device of "the thousand scenes in one scene".

  20. The contributions of color to recognition memory for natural scenes.

    Science.gov (United States)

    Wichmann, Felix A; Sharpe, Lindsay T; Gegenfurtner, Karl R

    2002-05-01

    The authors used a recognition memory paradigm to assess the influence of color information on visual memory for images of natural scenes. Subjects performed 5%-10% better for colored than for black-and-white images independent of exposure duration. Experiment 2 indicated little influence of contrast once the images were suprathreshold, and Experiment 3 revealed that performance worsened when images were presented in color and tested in black and white, or vice versa, leading to the conclusion that the surface property color is part of the memory representation. Experiments 4 and 5 exclude the possibility that the superior recognition memory for colored images results solely from attentional factors or saliency. Finally, the recognition memory advantage disappears for falsely colored images of natural scenes: The improvement in recognition memory depends on the color congruence of presented images with learned knowledge about the color gamut found within natural scenes. The results can be accounted for within a multiple memory systems framework.

  1. Narrative Collage of Image Collections by Scene Graph Recombination.

    Science.gov (United States)

    Fang, Fei; Yi, Miao; Feng, Hui; Hu, Shenghong; Xiao, Chunxia

    2017-10-04

    Narrative collage is an interesting image editing art that summarizes the main theme or storyline behind an image collection. We present a novel method to generate narrative images with plausible semantic scene structures. To achieve this goal, we introduce a layer graph and a scene graph to represent the relative depth order and the semantic relationships between image objects, respectively. We first cluster the input image collection to select representative images, and then extract a group of semantically salient objects from each representative image. Both layer graphs and scene graphs are constructed and combined according to our specific rules for reorganizing the extracted objects in every image. We design an energy model to appropriately locate every object on the final canvas. Experimental results show that our method produces competitive narrative collage results and works well on a wide range of image collections.

  2. Foggy Scene Rendering Based on Transmission Map Estimation

    Directory of Open Access Journals (Sweden)

    Fan Guo

    2014-01-01

    Full Text Available Realistic rendering of foggy scenes is important in game development and virtual reality. Traditional methods require many control parameters or long computation times, and they are usually limited to depicting homogeneous fog, without considering scenes containing heterogeneous fog. In this paper, a new rendering method based on transmission map estimation is proposed. We first generate a Perlin noise image as the density distribution texture of the heterogeneous fog. Then we estimate the transmission map using a Markov random field (MRF) model and a bilateral filter. Finally, the virtual foggy scene is realistically rendered from the generated Perlin noise image and the transmission map according to the atmospheric scattering model. Experimental results show that the rendered results of our approach are quite satisfactory.
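The atmospheric scattering model the abstract refers to can be sketched per pixel as follows. This is a minimal illustration, not the authors' implementation: the airlight value, extinction coefficient `beta`, and the uniform `density` (which a Perlin-noise texture would replace for heterogeneous fog) are assumptions.

```python
import math

def render_fog(clear_pixel, depth, airlight=255.0, beta=0.05, density=1.0):
    """Apply the atmospheric scattering model to one pixel intensity:
    I = J*t + A*(1 - t), with transmission t = exp(-beta * density * depth).
    `density` would be sampled from a Perlin-noise texture for
    heterogeneous fog; it is a uniform 1.0 here for simplicity.
    """
    t = math.exp(-beta * density * depth)
    return clear_pixel * t + airlight * (1.0 - t)

# A distant pixel is pulled toward the airlight value more than a near one.
near = render_fog(100.0, depth=5.0)
far = render_fog(100.0, depth=60.0)
```

At zero depth the transmission is 1 and the clear-scene radiance passes through unchanged; as depth grows the pixel converges to the airlight, which is what produces the washed-out look of fog.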

  3. A Grouped Threshold Approach for Scene Identification in AVHRR Imagery

    Science.gov (United States)

    Baum, Bryan A.; Trepte, Qing

    1999-01-01

    The authors propose a grouped threshold method for scene identification in Advanced Very High Resolution Radiometer imagery that may contain clouds, fire, smoke, or snow. The philosophy of the approach is to build modules that contain groups of spectral threshold tests that are applied concurrently, not sequentially, to each pixel in an image. The purpose of each group of tests is to identify uniquely a specific class in the image, such as smoke. A strength of this approach is that insight into the limits used in the threshold tests may be gained through the use of radiative transfer theory. Methodology and examples are provided for two different scenes, one containing clouds, forest fires, and smoke; and the other containing clouds over snow in the central United States. For both scenes, a limited amount of supporting information is provided by surface observers.
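The concurrent (rather than sequential) application of grouped threshold tests can be sketched as follows. The class names, channels, and limits below are illustrative assumptions, not the AVHRR thresholds used by the authors:

```python
def classify_pixel(pixel, groups):
    """Grouped-threshold scene identification: each group is a set of
    spectral tests that must ALL pass for the pixel to receive that class
    label. Groups are evaluated independently of one another, not as a
    sequential decision tree; a pixel matching no group stays 'unknown'.
    """
    labels = [name for name, tests in groups.items()
              if all(lo <= pixel[ch] <= hi for ch, (lo, hi) in tests.items())]
    return labels or ["unknown"]

# Hypothetical reflectance ("vis") and brightness-temperature ("t11") limits.
groups = {
    "smoke": {"vis": (0.10, 0.35), "t11": (280.0, 310.0)},
    "snow": {"vis": (0.60, 1.00), "t11": (250.0, 275.0)},
}
print(classify_pixel({"vis": 0.20, "t11": 295.0}, groups))  # ['smoke']
```

Because every group is checked for every pixel, a pixel can in principle satisfy more than one class, which is useful for flagging ambiguous scenes rather than forcing an early branch to decide.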

  4. Exploring eye movements in patients with glaucoma when viewing a driving scene.

    Directory of Open Access Journals (Sweden)

    David P Crabb

    Full Text Available BACKGROUND: Glaucoma is a progressive eye disease and a leading cause of visual disability. Automated assessment of the visual field determines the different stages in the disease process: it would be desirable to link these measurements taken in the clinic with patients' actual function, or to establish whether patients compensate for their restricted field of view when performing everyday tasks. Hence, this study investigated eye movements in glaucomatous patients when viewing driving scenes in a hazard perception test (HPT). METHODOLOGY/PRINCIPAL FINDINGS: The HPT is a component of the UK driving licence test consisting of a series of short film clips of various traffic scenes viewed from the driver's perspective, each containing hazardous situations that require the camera car to change direction or slow down. Data from nine glaucomatous patients with binocular visual field defects and ten age-matched control subjects were considered (all experienced drivers). Each subject viewed 26 different films with eye movements simultaneously monitored by an eye tracker. Purpose-written computer software was used to pre-process the data, co-register it to the film clips, and quantify eye movements and point-of-regard (using a dynamic bivariate contour ellipse analysis). On average, and across all HPT films, patients exhibited different eye movement characteristics from controls, making, for example, significantly more saccades (P<0.001; 95% confidence interval for mean increase: 9.2 to 22.4%). Whilst the average region of 'point-of-regard' of the patients did not differ significantly from that of the controls, there were revealing cases where patients failed to see a hazard in relation to their binocular visual field defect. CONCLUSIONS/SIGNIFICANCE: Characteristics of eye movement patterns in patients with bilateral glaucoma can differ significantly from age-matched controls when viewing a traffic scene. Further studies of eye movements made by glaucomatous patients could

  5. Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details

    Science.gov (United States)

    Kuula, Jaana; Pölönen, Ilkka; Puupponen, Hannu-Heikki; Selander, Tuomas; Reinikainen, Tapani; Kalenius, Tapani; Saari, Heikki

    2012-06-01

    Detecting invisible details and separating mixed evidence is critical for forensic inspection. If this can be done reliably and quickly at the crime scene, irrelevant objects do not require further examination at the laboratory. This speeds up the inspection process and releases resources for other critical tasks. This article reports on tests carried out at the University of Jyväskylä in Finland, together with the Central Finland Police Department and the National Bureau of Investigation, on detecting and separating forensic details with hyperspectral technology. In the tests, evidence was sought at a mock violent burglary scene with the use of VTT's 500-900 nm VNIR camera, Specim's 400-1000 nm VNIR camera, and Specim's 1000-2500 nm SWIR camera. The tested details were dried blood on a ceramic plate, a stain of four types of mixed and absorbed blood, and blood that had been washed off a table. Other examined details included untreated latent fingerprints, gunshot residue, primer residue, and layered paint on small pieces of wood. All cameras could detect visible details and separate mixed paint. The SWIR camera could also separate four types of human and animal blood that were mixed in the same stain and absorbed into a fabric. None of the cameras, however, could detect primer residue, untreated latent fingerprints, or blood that had been washed off. The results are encouraging and indicate the need for further studies. They also emphasize the importance of creating optimal imaging conditions at the crime scene for each kind of subject and background.

  6. An Attempt to Raise Japanese EFL Learners' Pragmatic Awareness Using Online Discourse Completion Tasks

    Science.gov (United States)

    Tanaka, Hiroya; Oki, Nanaho

    2015-01-01

    This practical paper discusses the effect of explicit instruction to raise Japanese EFL learners' pragmatic awareness using online discourse completion tasks. The five-part tasks developed by the authors use American TV drama scenes depicting particular speech acts and include explicit instruction in these speech acts. 46 Japanese EFL college…

  7. Kriolu Scenes in Lisbon: Where Migration Experiences and Housing Policy Meet

    DEFF Research Database (Denmark)

    Pardue, Derek

    2014-01-01

    by Kriolu-speaking Portuguese of Cape Verdean descent. I analyze residents’ responses to these changes, particularly the responses of young male rappers. My analysis reveals that rap music in these transforming neighborhoods is a means for making “Kriolu scenes”—expressions highlighting Cape Verdeans’ experiences of Portuguese colonialism, postcolonialism, marginalization due to language and race, and now urban displacement. They are also expressions of belonging and cultural citizenship, and exercises of emplacement within the changing city. Kriolu scenes highlight an important but underappreciated role...

  8. "A cool little buzz": alcohol intoxication in the dance club scene.

    Science.gov (United States)

    Hunt, Geoffrey; Moloney, Molly; Fazio, Adam

    2014-06-01

    In recent years, there has been increasing concern about youthful "binge" drinking and intoxication. Yet the meaning of intoxication remains under-theorized. This paper examines intoxication in a young adult nightlife scene, using data from a 2005-2008 National Institute on Drug Abuse-funded project on Asian American youth and nightlife. Analyzing in-depth qualitative interview data with 250 Asian American young adults in the San Francisco area, we examine their narratives about alcohol intoxication with respect to sociability, stress, and fun, and their navigation of the fine line between being "buzzed" and being "wasted." Finally, limitations of the study and directions for future research are noted.

  9. Improved content aware scene retargeting for retinitis pigmentosa patients

    Directory of Open Access Journals (Sweden)

    Al-Atabany Walid I

    2010-09-01

    Full Text Available Abstract Background In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable for implementation on portable devices and thus has potential for augmented reality systems that provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm at shrinking the visual scene into the remaining field of view of those patients. Methods Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm that removes low-importance information while maintaining the size of the significant features. Previous approaches in this field include seam carving, which removes low-importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of these two previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach. We then use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time, without prior knowledge of future frames. Results We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content preservation perspective and a compression quality perspective. Our technique has also been evaluated in a trial that included 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes by up to 50% with minimal distortion. We also demonstrate efficacy of its use for those with simulated tunnel vision of 22 degrees of field of view or less. Conclusions Our approach allows us to perform content aware video

  10. Robust pedestrian detection and tracking in crowded scenes

    Science.gov (United States)

    Lypetskyy, Yuriy

    2007-09-01

    This paper presents a vision-based tracking system developed for very crowded situations such as underground or railway stations. Our system consists of two main parts: searching for person candidates in single frames, and tracking them frame to frame across the scene. This paper concentrates mostly on the tracking part and describes its core components in detail. These are trajectory prediction using KLT vectors or a Kalman filter, adaptive active shape model adjustment, and texture matching. We show that the combination of the presented algorithms leads to robust people tracking even in complex scenes with permanent occlusions.
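The trajectory-prediction step of such a tracker can be sketched with a constant-velocity Kalman-style prediction. This is only the predict half of the filter under an assumed constant-velocity motion model; the real filter also propagates covariance and fuses the measurement:

```python
def kalman_predict(state, dt=1.0):
    """One constant-velocity prediction step on a (x, y, vx, vy) state:
    the tracker's guess of where a pedestrian will be in the next frame,
    used to seed shape-model adjustment and texture matching before the
    new detection is incorporated.
    """
    x, y, vx, vy = state
    return (x + vx * dt, y + vy * dt, vx, vy)

# A pedestrian at (10, 20) moving 2 px/frame to the right is expected
# at (12, 20) in the next frame.
predicted = kalman_predict((10.0, 20.0, 2.0, 0.0))
```

Predicting first and matching second is what lets the tracker ride out the short occlusions that are permanent features of crowded platforms: when no detection is found, the prediction alone carries the trajectory forward for a few frames.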

  11. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    Science.gov (United States)

    1987-04-01

    Image Chunking: Defining Spatial Building Blocks for Scene Analysis. James V. Mahoney, MIT Artificial Intelligence Laboratory, Technical Report 980. (Only garbled OCR of the report's cover page survives in this record.)

  12. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. 
We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  13. Recognizing the Stranger: Recognition Scenes in the Gospel of John

    DEFF Research Database (Denmark)

    Larsen, Kasper Bro

    Recognizing the Stranger is the first monographic study of recognition scenes and motifs in the Gospel of John. The recognition type-scene (anagnōrisis) was a common feature in ancient drama and narrative, highly valued by Aristotle as a touching moment of truth, e.g., in Oedipus’ tragic self-discovery and Odysseus’ happy homecoming. The book offers a reconstruction of the conventions of the genre and argues that it is one of the most recurrent and significant literary forms in the Gospel. When portraying Jesus as the divine stranger from heaven, the Gospel employs and transforms the formal and ideological...

  14. License plate localization in complex scenes based on oriented FAST and rotated BRIEF feature

    Science.gov (United States)

    Wang, Ran; Xia, Yuanchun; Wang, Guoyou; Tian, Jiangmin

    2015-09-01

    Within intelligent transportation systems, fast and robust license plate localization (LPL) in complex scenes is still a challenging task. Real-world scenes introduce complexities such as variation in license plate size and orientation, uneven illumination, background clutter, and nonplate objects. These complexities lead to poor performance using traditional LPL features, such as color, edge, and texture. Recently, state-of-the-art performance in LPL has been achieved by applying the scale invariant feature transform (SIFT) descriptor to LPL for visual matching. However, for applications that require fast processing, such as mobile phones, SIFT does not meet the efficiency requirement due to its relatively slow computational speed. To address this problem, a new approach for LPL, which uses the oriented FAST and rotated BRIEF (ORB) feature detector, is proposed. The feature extraction in ORB is much more efficient than in SIFT and is invariant to scale and grayscale as well as rotation changes, and hence is able to provide superior performance for LPL. The potential regions of a license plate are detected by considering spatial and color information simultaneously, which is different from previous approaches. The experimental results on a challenging dataset demonstrate the effectiveness and efficiency of the proposed method.

  15. Exposure of Secondary School Adolescents from Argentina and Mexico to Smoking Scenes in Movies: a Population-based Estimation

    Science.gov (United States)

    SALGADO, MARÍA V.; PÉREZ, ADRIANA; ABAD-VIVERO, ERIKA N.; THRASHER, JAMES F.; SARGENT, JAMES D.; MEJÍA, RAÚL

    2016-01-01

    Background Smoking scenes in movies promote adolescent smoking onset; thus, the analysis of the number of images of smoking in movies actually reaching adolescents has become a subject of increasing interest. Objective The aim of this study was to estimate the level of exposure to images of smoking in movies watched by adolescents in Argentina and Mexico. Methods First-year secondary school students from Argentina and Mexico were surveyed. The 100 highest-grossing films from each year of the period 2009-2013 (Argentina) and 2010-2014 (Mexico) were analyzed. Each participant was assigned a random sample of 50 of these movies and was asked if he/she had watched them. The total number of adolescents who had watched each movie in each country was estimated and was multiplied by the number of smoking scenes (occurrences) in each movie to obtain the number of gross smoking impressions seen by secondary school adolescents from each country. Results Four hundred twenty-two movies were analyzed in Argentina and 433 in Mexico. Exposure to more than 500 million smoking impressions was estimated for adolescents in each country, averaging 128 and 121 minutes of smoking scenes seen by each Argentine and Mexican adolescent, respectively. Although 15-, 16- and 18-rated movies had more smoking scenes on average, movies rated for younger teenagers were responsible for the highest number of smoking scenes watched by the students (67.3% in Argentina and 54.4% in Mexico) due to their larger audience. Conclusion At the population level, movies aimed at children are responsible for the highest tobacco burden seen by adolescents. PMID:27354756

  16. Ontology of a scene based on Java 3D architecture.

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2009-12-01

    Full Text Available The present article approaches the class hierarchy of a scene built with the Java 3D architecture in order to develop an ontology of a scene from the essential semantic components for the semantic structuring of the Web3D. Java was selected because the language recommended by the W3C Consortium for the development of Web3D-oriented applications based on the X3D standard is Xj3D, whose schemas are composed on the Java 3D architecture. First, the domain and scope of the ontology are identified, defining the classes and subclasses that comprise the Java 3D architecture and the essential elements of a scene: its point of origin, its fields of rotation and translation, the boundary of the scene, and the definition of shaders. Next, the slots are defined and declared in RDF as a framework for describing the properties of the classes established by identifying the domain and range of each class. The composition of the OWL ontology is then developed in SWOOP. Finally, the ontology is instantiated for an Iconosphere object from the defined class expressions.

  17. Estimating cotton canopy ground cover from remotely sensed scene reflectance

    International Nuclear Information System (INIS)

    Maas, S.J.

    1998-01-01

    Many agricultural applications require spatially distributed information on growth-related crop characteristics that could be supplied through aircraft or satellite remote sensing. A study was conducted to develop and test a methodology for estimating plant canopy ground cover for cotton (Gossypium hirsutum L.) from scene reflectance. Previous studies indicated that a relatively simple relationship between ground cover and scene reflectance could be developed based on linear mixture modeling. Theoretical analysis indicated that the effects of shadows in the scene could be compensated for by averaging the results obtained using scene reflectance in the red and near-infrared wavelengths. The methodology was tested using field data collected over several years from cotton test plots in Texas and California. Results of the study appear to verify the utility of this approach. Since the methodology relies on information that can be obtained solely through remote sensing, it would be particularly useful in applications where other field information, such as plant size, row spacing, and row orientation, is unavailable
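The linear-mixture inversion and the shadow compensation described above can be sketched as follows. The endmember (bare-soil and full-canopy) reflectances here are illustrative assumptions, not the measured values from the Texas and California plots:

```python
def ground_cover(scene_refl, soil_refl, canopy_refl):
    """Invert a two-component linear mixture model for one spectral band:
    scene = f*canopy + (1-f)*soil, so f = (scene - soil) / (canopy - soil).
    """
    return (scene_refl - soil_refl) / (canopy_refl - soil_refl)

def ground_cover_shadow_corrected(red, nir, soil, canopy):
    """Average the red and near-infrared estimates, the in-scene shadow
    compensation suggested by the theoretical analysis: shadow biases the
    two bands in opposite directions, so the mean largely cancels it."""
    f_red = ground_cover(red, soil["red"], canopy["red"])
    f_nir = ground_cover(nir, soil["nir"], canopy["nir"])
    return 0.5 * (f_red + f_nir)

# Hypothetical endmember reflectances for a cotton field.
soil = {"red": 0.20, "nir": 0.25}
canopy = {"red": 0.05, "nir": 0.50}
f = ground_cover_shadow_corrected(red=0.125, nir=0.375, soil=soil, canopy=canopy)
```

With the numbers above, both bands independently give a cover fraction of 0.5, so the averaged estimate is 0.5 as well; in real imagery the two single-band estimates diverge under shadowing, which is where the averaging earns its keep.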

  18. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Science.gov (United States)

    Cao, Jianfang; Chen, Lichao

    2015-01-01

    With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP) neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance. PMID:25838818

  19. Understanding road scenes using visual cues and GPS information

    NARCIS (Netherlands)

    Alvarez, J.M.; Lumbreras, F.; Lopez, A.M.; Gevers, T.

    2012-01-01

    Understanding road scenes is important in computer vision with different applications to improve road safety (e.g., advanced driver assistance systems) and to develop autonomous driving systems (e.g., Google driver-less vehicle). Current vision-based approaches rely on the robust combination of

  20. Evaluating Color Descriptors for Object and Scene Recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2010-01-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been

  1. Crime scene as spatial production on screen, online and offline

    DEFF Research Database (Denmark)

    Sandvik, Kjetil; Waade, Anne Marit

    of Swedish town Ystad as location in Henning Mankell's crime novels and as film locations in remediating the novels into movies is turning the actual town into a virtual crime scene for visiting tourists. In the computer game Dollar the adaptation of Liza Marklund's crime universe remediates Stockholm...

  2. Cultural heritage and history in the European metal scene

    NARCIS (Netherlands)

    Klepper, de S.; Molpheta, S.; Pille, S.; Saouma, R.; During, R.; Muilwijk, M.

    2007-01-01

    This paper represents an inquiry into the use of history and cultural heritage in the metal scene. It is an attempt to show how history and cultural heritage can be spread among people in an unconventional way. The research method was built on an explorative study that included an

  3. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.

  4. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.

  5. Coping with Perceived Ethnic Prejudice on the Gay Scene

    Science.gov (United States)

    Jaspal, Rusi

    2017-01-01

    There has been only cursory research into the sociological and psychological aspects of ethnic/racial discrimination among ethnic minority gay and bisexual men, and none that focuses specifically upon British ethnic minority gay men. This article focuses on perceptions of intergroup relations on the gay scene among young British South Asian gay…

  6. Number of perceptually distinct surface colors in natural scenes.

    Science.gov (United States)

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10^3, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.
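The gap between the deterministic count and the frequency-weighted estimate can be illustrated with a toy sketch. This captures only the surface-color-frequency part of the argument (the full method also folds in observer response uncertainty via mutual information); the cell size and sample colors are arbitrary:

```python
import math
from collections import Counter

def occupied_cells(samples, cell=1.0):
    """Deterministic estimate: quantize color coordinates into
    one-threshold-unit cells and count the occupied cells."""
    return len({tuple(int(c // cell) for c in s) for s in samples})

def color_perplexity(samples, cell=1.0):
    """Frequency-weighted estimate: 2**H over the cell frequencies.
    Rarely occurring colors contribute little information, so this is
    at most the occupied-cell count, and far below it when one color
    dominates the scene."""
    counts = Counter(tuple(int(c // cell) for c in s) for s in samples)
    n = sum(counts.values())
    h = -sum((k / n) * math.log2(k / n) for k in counts.values())
    return 2 ** h

# One surface color dominates: three cells are occupied, but the
# entropy-based count is barely above one.
samples = [(0.1, 0.1, 0.1)] * 98 + [(5.0, 0.0, 0.0), (0.0, 5.0, 0.0)]
```

This is why an information-theoretic estimate of usable surface colors comes out much smaller than a count of discrimination-threshold cells, as reported in the study.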

  7. Effects of self-motion on auditory scene analysis.

    Science.gov (United States)

    Kondo, Hirohito M; Pressnitzer, Daniel; Toshima, Iwaki; Kashino, Makio

    2012-04-24

    Auditory scene analysis requires the listener to parse the incoming flow of acoustic information into perceptual "streams," such as sentences from a single talker in the midst of background noise. Behavioral and neural data show that the formation of streams is not instantaneous; rather, streaming builds up over time and can be reset by sudden changes in the acoustics of the scene. Here, we investigated the effect of changes induced by voluntary head motion on streaming. We used a telepresence robot in a virtual reality setup to disentangle all potential consequences of head motion: changes in acoustic cues at the ears, changes in apparent source location, and changes in motor or attentional processes. The results showed that self-motion influenced streaming in at least two ways. Right after the onset of movement, self-motion always induced some resetting of perceptual organization to one stream, even when the acoustic scene itself had not changed. Then, after the motion, the prevalent organization was rapidly biased by the binaural cues discovered through motion. Auditory scene analysis thus appears to be a dynamic process that is affected by the active sensing of the environment.

  8. Improving Perceptual Skills with Interactive 3-D VRML Scenes.

    Science.gov (United States)

    Johns, Janet Faye

    1998-01-01

    Describes techniques developed to improve the perceptual skills of maintenance technicians who align shafts on rotating equipment. A 3-D practice environment composed of animated mechanical components and tools was enhanced with 3-D VRML (Virtual Reality Modeling Language) scenes. (Author/AEF)

  9. Scene simulation of terahertz radiation characteristics of the armored vehicle

    Science.gov (United States)

    He, Ye; Jiang, Yuesong; He, Yuntao

    2008-12-01

    Scene simulation of radiation characteristics of targets and backgrounds is an important research topic for its benefits in the adaptation and optimization of a sensor and its observation conditions. In this paper, imaging of the armored vehicle, an important and complicated military target, by passive terahertz sensors was studied, including calculation of the temperature field, analysis of atmospheric effects, and the sensor models.

  10. Memory, emotion, and pupil diameter: Repetition of natural scenes.

    Science.gov (United States)

    Bradley, Margaret M; Lang, Peter J

    2015-09-01

    Recent studies have suggested that pupil diameter, like the "old-new" ERP, may be a measure of memory. Because the amplitude of the old-new ERP is enhanced for items encoded in the context of repetitions that are distributed (spaced), compared to massed (contiguous), we investigated whether pupil diameter is similarly sensitive to repetition. Emotional and neutral pictures of natural scenes were viewed once or repeated with massed (contiguous) or distributed (spaced) repetition during incidental free viewing and then tested on an explicit recognition test. Although an old-new difference in pupil diameter was found during successful recognition, pupil diameter was not enhanced for distributed, compared to massed, repetitions during either recognition or initial free viewing. Moreover, whereas a significant old-new difference was found for erotic scenes that had been seen only once during encoding, this difference was absent when erotic scenes were repeated. Taken together, the data suggest that pupil diameter is not a straightforward index of prior occurrence for natural scenes. © 2015 Society for Psychophysiological Research.

  11. The Rescue Mission: Assigning Guilt to a Chaotic Scene.

    Science.gov (United States)

    Procter, David E.

    1987-01-01

    Seeks to identify rhetorical distinctiveness of the rescue mission as a form of belligerency--examining presidential discourse justifying the 1985 Lebanon intervention, the 1965 Dominican intervention, and the 1983 Grenada intervention. Argues that the distinction is in guilt narrowly assigned to a chaotic scene and the concomitant call for…

  12. The role of memory for visual search in scenes.

    Science.gov (United States)

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  13. Semi-Supervised Multitask Learning for Scene Recognition.

    Science.gov (United States)

    Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao

    2015-09-01

    Scene recognition has been widely studied to understand visual information at the level of objects and their relationships. Toward scene recognition, many methods have been proposed. They, however, have difficulty improving accuracy, mainly due to two limitations: 1) lack of analysis of intrinsic relationships across different scales, say, the initial input and its down-sampled versions, and 2) existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce the above two limitations. To address the first limitation, we propose a multitask model to integrate scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of data. SFSMR coordinates the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR, and propose the semi-supervised learning method to reduce the two limitations. Experimental results show improved accuracy in scene recognition.
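
    The manifold-regularization ingredient of SFSMR can be illustrated with a classic graph-Laplacian label-propagation toy, which is not the paper's full sparse-selection model but shows how unlabeled data shape the solution through the Laplacian penalty f'Lf.

```python
import numpy as np

rng = np.random.default_rng(1)

# Two point clouds, one labeled example each; all data (labeled and not)
# enter the objective through the graph Laplacian.
X = np.vstack([rng.normal(0, 0.3, (20, 2)) + [0, 0],
               rng.normal(0, 0.3, (20, 2)) + [3, 3]])
y = np.full(40, np.nan)
y[0], y[20] = 0.0, 1.0            # one label per cluster

# Dense Gaussian affinity graph and its Laplacian L = D - W
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.exp(-d2 / 0.5)
np.fill_diagonal(W, 0.0)
L = np.diag(W.sum(1)) - W

# Minimize sum over labeled i of (f_i - y_i)^2 + lam * f^T L f  (closed form)
lam = 0.1
mask = ~np.isnan(y)
A = np.diag(mask.astype(float)) + lam * L
f = np.linalg.solve(A, np.where(mask, y, 0.0))
pred = (f > 0.5).astype(int)
```

    With only two labels, the Laplacian term propagates them across each cluster, labeling all 40 points correctly.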

  14. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    Science.gov (United States)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fire is a risky event that can lead to disaster and massive destruction. The management and disposal of building fires has always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decisions, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES) and the indoor crowd) were implemented, and the relationships between the elements are also discussed. Finally, with the theory and framework of VGE, the technology of a building fire scene system with VGE was designed within the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  15. On the contribution of binocular disparity to the long-term memory for natural scenes.

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    Full Text Available Binocular disparity is a fundamental dimension defining the input we receive from the visual world, along with luminance and chromaticity. In a memory task involving images of natural scenes we investigate whether binocular disparity enhances long-term visual memory. We found that forest images studied in the presence of disparity for relatively long times (7 s) were remembered better as compared to 2D presentation. This enhancement was not evident for other categories of pictures, such as images containing cars and houses, which are mostly identified by the presence of distinctive artifacts rather than by their spatial layout. Evidence from a further experiment indicates that observers do not retain a trace of stereo presentation in long-term memory.

  16. The effect of scene content on speed, time, and distance perception

    Science.gov (United States)

    Awe, Cynthia A.; Johnson, Walter W.

    1993-01-01

    Helicopter flights performed at low levels place high demands on pilots; they must simultaneously control the vehicle, avoid obstacles, and navigate. Therefore, pilots must correlate cues viewed in the external scene with information on a map in order to maintain their geographical orientation. This is a particularly difficult task when helicopter pilots fly through visually unfamiliar terrain without highly detailed maps. As a result, pilots must often use estimates of elapsed time, distance traveled, and/or average speed in order to maintain a flight path indicated on a map during flight segments when these cues are absent. Therefore, the current study is concerned with the perception of speed, time, and distance, which we assume underlies the ability to orient oneself during these types of flight segments.

  17. Deep Learning Models of the Retinal Response to Natural Scenes.

    Science.gov (United States)

    McIntosh, Lane T; Maheswaranathan, Niru; Nayebi, Aran; Ganguli, Surya; Baccus, Stephen A

    2016-01-01

    A central challenge in sensory neuroscience is to understand neural computations and circuit mechanisms that underlie the encoding of ethologically relevant, natural stimuli. In multilayered neural circuits, nonlinear processes such as synaptic transmission and spiking dynamics present a significant obstacle to the creation of accurate computational models of responses to natural stimuli. Here we demonstrate that deep convolutional neural networks (CNNs) capture retinal responses to natural scenes nearly to within the variability of a cell's response, and are markedly more accurate than linear-nonlinear (LN) models and Generalized Linear Models (GLMs). Moreover, we find two additional surprising properties of CNNs: they are less susceptible to overfitting than their LN counterparts when trained on small amounts of data, and generalize better when tested on stimuli drawn from a different distribution (e.g. between natural scenes and white noise). An examination of the learned CNNs reveals several properties. First, a richer set of feature maps is necessary for predicting the responses to natural scenes compared to white noise. Second, temporally precise responses to slowly varying inputs originate from feedforward inhibition, similar to known retinal mechanisms. Third, the injection of latent noise sources in intermediate layers enables our model to capture the sub-Poisson spiking variability observed in retinal ganglion cells. Fourth, augmenting our CNNs with recurrent lateral connections enables them to capture contrast adaptation as an emergent property of accurately describing retinal responses to natural scenes. These methods can be readily generalized to other sensory modalities and stimulus ensembles. Overall, this work demonstrates that CNNs not only accurately capture sensory circuit responses to natural scenes, but also can yield information about the circuit's internal structure and function.
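
    The linear-nonlinear (LN) baseline the abstract compares CNNs against can be sketched via spike-triggered averaging on white noise; the filter, nonlinearity, and simulated cell below are illustrative stand-ins, not the paper's fitted models.

```python
import numpy as np

rng = np.random.default_rng(2)

# Minimal LN-model sketch: recover a temporal filter by spike-triggered
# averaging on white-noise stimuli (Bussgang: the STA is proportional to
# the filter for Gaussian inputs and a monotone nonlinearity).
T, taps = 20000, 15
true_filter = np.exp(-np.arange(taps) / 3.0) * np.sin(np.arange(taps) / 2.0)
stim = rng.normal(size=T)

# Simulated ganglion-cell rate: filtered stimulus -> softplus -> Poisson spikes
drive = np.convolve(stim, true_filter, mode="full")[:T]
rate = np.log1p(np.exp(drive))
spikes = rng.poisson(rate)

# Spike-triggered average recovers the filter up to scale
sta = np.zeros(taps)
for lag in range(taps):
    sta[lag] = np.dot(spikes[lag:], stim[:T - lag]) / spikes.sum()

corr = np.corrcoef(sta, true_filter)[0, 1]
print(f"filter recovery correlation: {corr:.2f}")
```

    The paper's point is that this kind of model, while easy to fit, falls well short of CNNs on natural-scene stimuli.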

  18. The Effect of Distance on Moral Engagement: Event Related Potentials and Alpha Power are Sensitive to Perspective in a Virtual Shooting Task.

    Science.gov (United States)

    Petras, Kirsten; Ten Oever, Sanne; Jansma, Bernadette M

    2015-01-01

    In a shooting video game we investigated whether increased distance reduces moral conflict. We measured and analyzed the event related potential (ERP), including the N2 component, which has previously been linked to cognitive conflict from competing decision tendencies. In a modified Go/No-go task designed to trigger moral conflict participants had to shoot suddenly appearing human-like avatars in a virtual reality scene. The scene was seen either from an ego perspective with targets appearing directly in front of the participant or from a bird's view, where targets were seen from above and more distant. To control for low level visual features, we added a visually identical control condition, where the instruction to "shoot" was replaced by an instruction to "detect." ERP waveforms showed differences between the two tasks as early as in the N1 time-range, with higher N1 amplitudes for the close perspective in the "shoot" task. Additionally, we found that pre-stimulus alpha power was significantly decreased in the ego, compared to the bird's view only for the "shoot" but not for the "detect" task. In the N2 time window, we observed main amplitude effects for response (No-go > Go) and distance (ego > bird perspective) but no interaction with task type (shoot vs. detect). We argue that the pre-stimulus and N1 effects can be explained by reduced attention and arousal in the distance condition when people are instructed to "shoot." These results indicate a reduced moral engagement for increased distance. The lack of interaction in the N2 across tasks suggests that at that time point response execution dominates. We discuss potential implications for real life shooting situations, especially considering recent developments in drone shootings which are per definition of a distant view.

  19. Perception While Watching Movies: Effects of Physical Screen Size and Scene Type

    Directory of Open Access Journals (Sweden)

    Tom Troscianko

    2012-08-01

    Full Text Available Over the last decade, television screens and display monitors have increased in size considerably, but has this improved our televisual experience? Our working hypothesis was that the audiences adopt a general strategy that "bigger is better." However, as our visual perceptions do not tap directly into basic retinal image properties such as retinal image size (C. A. Burbeck, 1987), we wondered whether object size itself might be an important factor. To test this, we needed a task that would tap into the subjective experiences of participants watching a movie on different-sized displays with the same retinal subtense. Our participants used a line bisection task to self-report their level of "presence" (i.e., their involvement with the movie) at several target locations that were probed in a 45-min section of the movie "The Good, The Bad, and The Ugly." Measures of pupil dilation and reaction time to the probes were also obtained. In Experiment 1, we found that subjective ratings of presence increased with physical screen size, supporting our hypothesis. Face scenes also produced higher presence scores than landscape scenes for both screen sizes. In Experiment 2, reaction time and pupil dilation results showed the same trends as the presence ratings, and pupil dilation correlated with presence ratings, providing some validation of the method. Overall, the results suggest that real-time measures of subjective presence might be a valuable tool for measuring audience experience for different types of (i) display and (ii) audiovisual material.

  20. SCENES OF HAPPINESS IN THE NOVELS OF DOSTOEVSKY

    Directory of Open Access Journals (Sweden)

    Rita Osipovna Mazel

    2013-11-01

    Full Text Available This article presents Dostoyevsky to readers as an author praising happiness and felicity. Having lived through deep sorrows he acquired insight into another dimension of life. Like a longing pathfinder, he states the unfeigned grace of life. “Life is a gift, life is mercy, and any minute may be the age of happiness”, – this is the essence of his great novels. People are not lonesome on Earth; they are bound by invisible threads. A loner may not succeed. One heart or one consciousness attracts another one like a magnet, as if claiming: thou art... Christ, with his Love and his Sacrifice, the greatest miracle on the Earth. It is impossible to be aware of Christ’s existence and not to be joyful. Dostoyevsky reveals one of the main principles of life: when you love someone and sacrifice yourself to this person you satisfy your aspiration for a beau ideal and feel like in heavens. In this article the author analyzes selected scenes of happiness in Dostoevsky’s novels: Arkady’s and his sister Liza’s admiration for the sacrifice of their father Versilov; Alyosha and Grushen’ka, saving each other instead of committing sins and transgressing moral standards; Alyosha’s dream about the Christ’s first miracle in Cana of Galilee; Stavrogin’s dream of the Golden Age of the blessed mankind... In Dostoyevsky’s tragic novel The Possessed (The Devils, or Demons a reader faces an image of love – mutual, sacrificial, fulfilling, and blithe. There is probably nothing similar in the history of the world literature. One can eminently feel the interconnectedness of Dostoevsky’s heroes with another, higher world that penetrates into every aspect of their lives. All of his creatures are illumed by the light of other worlds. It is clear that there cannot be darkness, despair, or hopelessness in Dostoevsky’s works, because even in the hell full of demons there is a place for righteous people, luminous (as Nikolai Berdyaev called them and

  1. Differences in change blindness to real-life scenes in adults with autism spectrum conditions.

    Directory of Open Access Journals (Sweden)

    Chris Ashwin

    Full Text Available People often fail to detect large changes to visual scenes following a brief interruption, an effect known as 'change blindness'. People with autism spectrum conditions (ASC) have superior attention to detail and better discrimination of targets, and often notice small details that are missed by others. Together these predict people with autism should show enhanced perception of changes in simple change detection paradigms, including reduced change blindness. However, change blindness studies to date have reported mixed results in ASC, which have sometimes included no differences from controls or even enhanced change blindness. Attenuated change blindness has only been reported to date in ASC in children and adolescents, with no study reporting reduced change blindness in adults with ASC. The present study used a change blindness flicker task to investigate the detection of changes in images of everyday life in adults with ASC (n = 22) and controls (n = 22) using a simple change detection task design and a full range of original scenes as stimuli. Results showed the adults with ASC had reduced change blindness compared to adult controls for changes to items of marginal interest in scenes, with no group difference for changes to items of central interest. There were no group differences in overall response latencies to correctly detect changes nor in the overall number of missed detections in the experiment. However, the ASC group showed greater missed changes for marginal interest changes of location, showing some evidence of greater change blindness as well. These findings show both reduced change blindness to marginal interest changes in ASC, based on response latencies, as well as greater change blindness to changes of location of marginal interest items, based on detection rates. The findings of reduced change blindness are consistent with clinical reports that people with ASC often notice small changes to less salient items within their

  2. Contextual effects of scene on the visual perception of object orientation in depth.

    Directory of Open Access Journals (Sweden)

    Ryosuke Niimi

    Full Text Available We investigated the effect of background scene on the human visual perception of depth orientation (i.e., azimuth angle) of three-dimensional common objects. Participants evaluated the depth orientation of objects. The objects were surrounded by scenes with an apparent axis of the global reference frame, such as a sidewalk scene. When a scene axis was slightly misaligned with the gaze line, object orientation perception was biased, as if the gaze line had been assimilated into the scene axis (Experiment 1). When the scene axis was slightly misaligned with the object, evaluated object orientation was biased, as if it had been assimilated into the scene axis (Experiment 2). This assimilation may be due to confusion between the orientation of the scene and object axes (Experiment 3). Thus, the global reference frame may influence object orientation perception when its orientation is similar to that of the gaze-line or object.

  3. Estimating 3D tilt from local image cues in natural scenes

    Science.gov (United States)

    Burge, Johannes; McCann, Brian C.; Geisler, Wilson S.

    2016-01-01

    Estimating three-dimensional (3D) surface orientation (slant and tilt) is an important first step toward estimating 3D shape. Here, we examine how three local image cues from the same location (disparity gradient, luminance gradient, and dominant texture orientation) should be combined to estimate 3D tilt in natural scenes. We collected a database of natural stereoscopic images with precisely co-registered range images that provide the ground-truth distance at each pixel location. We then analyzed the relationship between ground-truth tilt and image cue values. Our analysis is free of assumptions about the joint probability distributions and yields the Bayes optimal estimates of tilt, given the cue values. Rich results emerge: (a) typical tilt estimates are only moderately accurate and strongly influenced by the cardinal bias in the prior probability distribution; (b) when cue values are similar, or when slant is greater than 40°, estimates are substantially more accurate; (c) when luminance and texture cues agree, they often veto the disparity cue, and when they disagree, they have little effect; and (d) simplifying assumptions common in the cue combination literature are often justified for estimating tilt in natural scenes. The fact that tilt estimates are typically not very accurate is consistent with subjective impressions from viewing small patches of natural scene. The fact that estimates are substantially more accurate for a subset of image locations is also consistent with subjective impressions and with the hypothesis that perceived surface orientation, at more global scales, is achieved by interpolation or extrapolation from estimates at key locations. PMID:27738702
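
    A minimal stand-in for the assumption-free estimation procedure: sample a joint distribution of tilt and one noisy cue, bin the cue, and read off the empirical posterior mean in each bin. The cardinal-biased prior and noise levels here are invented for illustration, and the arithmetic mean ignores tilt's circularity for brevity.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy nonparametric Bayes estimator: no parametric model of p(tilt, cue),
# just the empirical joint distribution.
n = 200_000
# Cardinal-biased prior: tilts cluster near 0 and 90 degrees
tilt = np.where(rng.random(n) < 0.5,
                rng.normal(0, 15, n), rng.normal(90, 15, n)) % 180
cue = tilt + rng.normal(0, 20, n)          # noisy image cue

bins = np.linspace(-40, 220, 66)
idx = np.digitize(cue, bins)
# Empirical posterior mean of tilt given the cue bin
est = np.array([tilt[idx == b].mean() if (idx == b).any() else np.nan
                for b in range(len(bins) + 1)])
```

    As in result (a) above, estimates are pulled toward the cardinal directions: a cue near 18° yields an estimate well below 18°, and a cue near 70° yields one well above 70°.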

  4. Ritual Scenes in the Iliad: Rote, Hallowed, or Encrypted as Ancient Art?

    Directory of Open Access Journals (Sweden)

    Margo Kitts

    2011-03-01

    Full Text Available Based in oral poetic and ritual theory, this article proposes that ritual scenes in Homer’s Iliad reflect unique compositional constraints beyond those found in other kinds of typical scenes. The focus is on oath-sacrifices and commensal sacrifices. Both ritual scene types exhibit strong identifying features, although they differ in their formal particulars and cultural implications. It is argued that both sorts of sacrificial scenes preserve especially ancient ritual patterns that may have parallels in Anatolian texts.

  5. Analyzing Peace Pedagogies

    Science.gov (United States)

    Haavelsrud, Magnus; Stenberg, Oddbjorn

    2012-01-01

    Eleven articles on peace education published in the first volume of the Journal of Peace Education are analyzed. This selection comprises peace education programs that have been planned or carried out in different contexts. In analyzing peace pedagogies as proposed in the 11 contributions, we have chosen network analysis as our method--enabling…

  6. Separate and simultaneous adjustment of light qualities in a real scene

    NARCIS (Netherlands)

    Xia, L.; Pont, S.C.; Heynderickx, I.E.J.R.

    2017-01-01

    Humans are able to estimate light field properties in a scene in that they have expectations of the objects' appearance inside it. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted a real scene with regard to its lighting. But how well are observers

  7. Scene complexity: influence on perception, memory, and development in the medial temporal lobe

    Directory of Open Access Journals (Sweden)

    Xiaoqian J Chai

    2010-03-01

    Full Text Available Regions in the medial temporal lobe (MTL) and prefrontal cortex (PFC) are involved in memory formation for scenes in both children and adults. The development in children and adolescents of successful memory encoding for scenes has been associated with increased activation in PFC, but not MTL, regions. However, evidence suggests that a functional subregion of the MTL that supports scene perception, located in the parahippocampal gyrus (PHG), goes through a prolonged maturation process. Here we tested the hypothesis that maturation of scene perception supports the development of memory for complex scenes. Scenes were characterized by their levels of complexity defined by the number of unique object categories depicted in the scene. Recognition memory improved with age, in participants ages 8-24, for high, but not low, complexity scenes. High-complexity compared to low-complexity scenes activated a network of regions including the posterior PHG. The difference in activations for high- versus low-complexity scenes increased with age in the right posterior PHG. Finally, activations in right posterior PHG were associated with age-related increases in successful memory formation for high-, but not low-, complexity scenes. These results suggest that functional maturation of the right posterior PHG plays a critical role in the development of enduring long-term recollection for high-complexity scenes.

  8. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    Science.gov (United States)

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  9. Scene From The Birth Of Venus: Kajian Karya Fotografi

    Directory of Open Access Journals (Sweden)

    Agra Locita

    2016-01-01

    Full Text Available Scene From The Birth Of Venus is a photographic artwork created in 1949. The artwork was the result of a collaboration between Salvador Dali and two photographers, Baron George Hoyningen-Huene and George Platt Lynes. They created the Birth of Venus differently from the first painting, by Sandro Botticelli, who depicted the goddess Venus as graceful and shy; Salvador Dali recreated her in an imaginative photographic artwork. In Scene From The Birth Of Venus, the goddess Venus is depicted as half human, half fish. The two creations take different approaches but share the theme of the birth of the goddess Venus.  Key words: photography, painting, imaginative

  10. DJ Culture in the Commercial Sydney Dance Music Scene

    Directory of Open Access Journals (Sweden)

    Ed Montano

    2009-09-01

    Full Text Available The development of contemporary, post-disco dance music and its associated culture, as representative of a (supposedly) underground, radical subculture, has been given extensive consideration within popular music studies. Significantly less attention has been given to the commercial, mainstream manifestations of this music. Therefore, this article examines the contemporary commercial dance music scene in Sydney, Australia, incorporating an analytical framework that revolves mainly around the work of DJs and the commercial scene they operate within. The ideas, opinions and interpretations of a selection of local DJs and other music industry practitioners who work in Sydney are central to the article’s analysis of DJ culture within the city and of, more specifically, DJ self-understandings with respect to choices of records and in relation to the twin imperatives of entertainment and education.

  11. A corticothalamic circuit model for sound identification in complex scenes.

    Directory of Open Access Journals (Sweden)

    Gonzalo H Otazu

    Full Text Available The identification of the sound sources present in the environment is essential for the survival of many animals. However, these sounds are not presented in isolation, as natural scenes consist of a superposition of sounds originating from multiple sources. The identification of a source under these circumstances is a complex computational problem that is readily solved by most animals. We present a model of the thalamocortical circuit that performs level-invariant recognition of auditory objects in complex auditory scenes. The circuit identifies the objects present from a large dictionary of possible elements and operates reliably for real sound signals with multiple concurrently active sources. The key model assumption is that the activities of some cortical neurons encode the difference between the observed signal and an internal estimate. Reanalysis of awake auditory cortex recordings revealed neurons with patterns of activity corresponding to such an error signal.
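
    The key model assumption, that some units encode the difference between the observed signal and an internal estimate built from a dictionary of known sources, can be sketched with a toy nonnegative dictionary model. The dimensions, update rule, and variable names below are our own choices, not the paper's circuit.

```python
import numpy as np

rng = np.random.default_rng(4)

# Dictionary of hypothetical source spectra (columns), unit-normalized.
n_freq, n_sources = 64, 8
D = np.abs(rng.normal(size=(n_freq, n_sources)))
D /= np.linalg.norm(D, axis=0)

active = np.zeros(n_sources)
active[[1, 5]] = [1.0, 0.5]
signal = D @ active                        # two concurrently active sources

# Infer source activities by driving the "error units" ||signal - D a|| to
# zero with projected gradient steps (nonnegative activities).
a = np.zeros(n_sources)
for _ in range(500):
    error = signal - D @ a                 # residual: observed minus estimate
    a = np.maximum(a + 0.1 * (D.T @ error), 0.0)

identified = np.argsort(a)[-2:]            # two most active dictionary entries
```

    At convergence the error units fall silent and the activity vector identifies the two sources present, mirroring the model's explaining-away behavior for superposed sounds.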

  12. Calculating Retinal Contrast from Scene Content: A Program

    Directory of Open Access Journals (Sweden)

    John J. McCann

    2018-01-01

    Full Text Available This paper describes a computer program for calculating the contrast image on the human retina from an array of scene luminances. We used achromatic transparency targets and measured the test targets' luminances with meters. We used the CIE standard Glare Spread Function (GSF) to calculate the array of retinal contrast. This paper describes the CIE standard, the calculation, and the analysis techniques comparing the calculated retinal image with observer data. The paper also describes in detail the techniques of accurate measurement of HDR scenes, conversion of measurements to input data arrays, calculation of the retinal image, including open source MATLAB code, pseudocolor visualization of HDR images that exceed the range of standard displays, and comparison of observed sensations with retinal stimuli.
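
    A simplified stand-in for the calculation described above: convolve the scene-luminance array with a spread kernel to obtain the retinal stimulus. The kernel below is a generic power-law falloff, not the actual CIE Glare Spread Function, and the circular FFT convolution omits the padding a careful implementation would use.

```python
import numpy as np

# Toy scene: a bright square on a dark background (values in cd/m^2).
size = 65
lum = np.zeros((size, size))
lum[20:45, 20:45] = 100.0

# Generic glare-like kernel: power-law falloff with distance, unit sum.
yy, xx = np.mgrid[-size // 2 + 1:size // 2 + 1, -size // 2 + 1:size // 2 + 1]
r = np.hypot(xx, yy)
psf = 1.0 / (1.0 + r) ** 3
psf /= psf.sum()

# FFT-based convolution; ifftshift centers the kernel at the origin.
retinal = np.real(np.fft.ifft2(np.fft.fft2(lum) *
                               np.fft.fft2(np.fft.ifftshift(psf))))
```

    Scattered light raises the luminance of the dark surround above zero and lowers the peak inside the square, which is exactly the contrast-reducing effect of intraocular glare the program quantifies.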

  13. Nested-hierarchical scene models and image segmentation

    Science.gov (United States)

    Woodcock, C.; Harward, V. J.

    1992-01-01

    An improved model of scenes for image analysis purposes, a nested-hierarchical approach which explicitly acknowledges multiple scales of objects or categories of objects, is presented. A multiple-pass, region-based segmentation algorithm improves the segmentation of images from scenes better modeled as a nested hierarchy. A multiple-pass approach allows slow and careful growth of regions while interregion distances are below a global threshold. Past the global threshold, a minimum region size parameter forces development of regions in areas of high local variance. Maximum and viable region size parameters limit the development of undesirably large regions. Application of the segmentation algorithm for forest stand delineation in TM imagery yields regions corresponding to identifiable features in the landscape. The use of a local variance, adaptive-window texture channel in conjunction with spectral bands improves the ability to define regions corresponding to sparsely stocked forest stands which have high internal variance.
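
    The multiple-pass region-growing idea can be sketched in one dimension: merge adjacent regions while mean differences stay under a global threshold, then force regions smaller than a minimum size to merge with a neighbour. The signal, thresholds, and merge rule below are illustrative, not the paper's algorithm.

```python
import numpy as np

# Toy 1-D "image" with three flat segments plus noise.
signal = np.array([1.0, 1.2, 0.9, 1.1, 5.0, 5.2, 4.8, 9.0, 9.1, 9.2, 8.9])
regions = [[i] for i in range(len(signal))]

def mean(r):
    return float(np.mean(signal[r]))

def merge_pass(regions, threshold, min_size=1):
    # Merge left-to-right while inter-region distance is below threshold,
    # or while the current region is below the minimum size.
    out, cur = [], regions[0]
    for r in regions[1:]:
        if abs(mean(cur) - mean(r)) < threshold or len(cur) < min_size:
            cur = cur + r
        else:
            out.append(cur)
            cur = r
    out.append(cur)
    return out

# Pass 1: slow, careful growth under a global threshold
regions = merge_pass(regions, threshold=0.5)
# Pass 2: force development of regions in undersized, high-variance areas
regions = merge_pass(regions, threshold=0.5, min_size=2)
segments = [sorted(r) for r in regions]
```

    On this signal the two passes recover the three underlying segments; the minimum-size pass only acts where the threshold pass left fragments.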

  14. Fast Scene Based Nonuniformity Correction with Minimal Temporal Latency

    Science.gov (United States)

    2006-09-01

    period of time. For example, the "Zenith camera" referenced in Elkins et al. can process 16 frames at a speed of 100 million frames per second (Mfps) ... derived to use whole shifts in order to optimize for speed. The error introduced by uncertainty in the shifts is highly dependent on the ... Sons, Inc. 1985. 3. Hayat, Majeed M.; Ratliff, Bradley M.; Tyo, J. Scott; Agi, Kamil. "Generalized Algebraic Scene-based Nonuniformity Correction"

  15. Range sections as rock models for intensity rock scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, S

    2007-11-01

    Full Text Available can be drawn. • A methodology for rock-scene segmentation that com- bines intensity and range image analysis to reduce the effects of texture and color density variations is presented. • Post-processing in the form of outlier rejection... to the environment under imaging: poor lighting; color density and texture variations. Lighting conditions have been controlled through the elimination of natural lighting and proper design of syn- thetic lighting [3]. We present a methodology that avoids...

  16. Acoustic simulation in realistic 3D virtual scenes

    Science.gov (United States)

    Gozard, Patrick; Le Goff, Alain; Naz, Pierre; Cathala, Thierry; Latger, Jean

    2003-09-01

    The simulation workshop CHORALE developed in collaboration with OKTAL SE company for the French MoD is used by government services and industrial companies for weapon system validation and qualification trials in the infrared domain. The main operational reference for CHORALE is the assessment of the infrared guidance system of the Storm Shadow missile French version, called Scalp. The use of CHORALE workshop is now extended to the acoustic domain. The main objective is the simulation of the detection of moving vehicles in realistic 3D virtual scenes. This article briefly describes the acoustic model in CHORALE. The 3D scene is described by a set of polygons. Each polygon is characterized by its acoustic resistivity or its complex impedance. Sound sources are associated with moving vehicles and are characterized by their spectra and directivities. A microphone sensor is defined by its position, its frequency band and its sensitivity. The purpose of the acoustic simulation is to calculate the incoming acoustic pressure on microphone sensors. CHORALE is based on a generic ray tracing kernel. This kernel possesses original capabilities: computation time is nearly independent on the scene complexity, especially the number of polygons, databases are enhanced with precise physical data, special mechanisms of antialiasing have been developed that enable to manage very accurate details. The ray tracer takes into account the wave geometrical divergence and the atmospheric transmission. The sound wave refraction is simulated and rays cast in the 3D scene are curved according to air temperature gradient. Finally, sound diffraction by edges (hill, wall,...) is also taken into account.
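A minimal sketch of the point-to-receiver level computation underlying such a simulation, assuming free-field conditions: spherical (geometrical) divergence plus a linear atmospheric absorption term. The function name and the default absorption coefficient are illustrative assumptions; CHORALE's ray tracer additionally models refraction by temperature gradients and edge diffraction, which this simplification omits.

```python
import math

def received_spl(source_spl_db, distance_m, alpha_db_per_m=0.005, ref_m=1.0):
    """Free-field sound pressure level at a microphone (sketch).

    Combines spherical (geometrical) divergence, -20*log10(r/r0), with a
    frequency-dependent atmospheric absorption coefficient alpha (dB/m).
    Refraction and diffraction, handled by the full ray tracer, are
    ignored here.
    """
    divergence = 20.0 * math.log10(distance_m / ref_m)
    absorption = alpha_db_per_m * distance_m
    return source_spl_db - divergence - absorption
```

With absorption disabled, the model reduces to the familiar 6 dB loss per doubling of distance.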

  17. Developing Scene Understanding Neural Software for Realistic Autonomous Outdoor Missions

    Science.gov (United States)

    2017-09-01

    [Table of deep-learning frameworks: Name, Developer, Language, Computation, Key reference — e.g., Caffe (Berkeley Vision and Learning Center; C++, Python/Matlab; CPU, GPU), Torch (Collobert...environment for machine learning. Proc Advances in Neural Information Processing Systems; EPFL-CONF-192376; 2011), Theano (Al-Rfou R et al.; Python ...)] We present a deep learning neural network model software implementation for improving scene understanding

  18. Demonstrating correction of low levels of astigmatism with realistic scenes.

    Science.gov (United States)

    Milton, Andy; Murphy, Michael; Rose, Ben; Olivares, Giovanna; Little, Borm Kim; Lau, Charis; Sulley, Anna

    2016-02-01

    Modern standard visual acuity tests are primarily designed as diagnostic tools for use during subjective refraction and normally bear little relation to real-world situations. We have developed a methodology to create realistic rendered scenes that demonstrate potential vision improvement in a relevant and engaging way. Low-cylindrical refractive error can be made more noticeable by optimizing the contrast and spatial frequencies, and by testing four different visual perception skills: motion tracking, pattern recognition, visual clutter differentiation and contrast sensitivity. Using a 1.00DC lens during iteration, we created a range of still and video scenes before optimizing to a selection of 3-D rendered street scenes. These were assessed on everyday relevance, emotional and visual engagement and sensitivity to refractive correction for low-cylinder astigmats (0.75-1.00DC, n=74) wearing best spherical equivalent correction and then with astigmatism corrected. The most promising visual elements involved or combined optimized textures, distracting patterns behind text, faces at a distance, and oblique text. 91.9% of subjects (95% CI: 83.2, 97.0) reported an overall visual improvement when viewing the images with astigmatic correction, and 96% found the images helpful to determine which type of contact lens to use. Our method, which combines visual science with design thinking, takes a new approach to creating vision tests. The resultant test scenes can be used to improve patient interaction and help low-cylinder astigmats see relevant, everyday benefits in correcting low levels (0.75 & 1.00DC) of astigmatism. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Efficient sliding spotlight SAR raw signal simulation of extended scenes

    Directory of Open Access Journals (Sweden)

    Huang Pingping

    2011-01-01

    Full Text Available Sliding spotlight mode is a novel synthetic aperture radar (SAR) imaging scheme with an achieved azimuth resolution better than stripmap mode and ground coverage larger than spotlight configuration. However, its raw signal simulation of extended scenes may not be efficiently implemented in the two-dimensional (2D) Fourier-transformed domain. This article presents a novel sliding spotlight raw signal simulation approach from the wide-beam SAR imaging modes. This approach can generate sliding spotlight raw signal not only from raw data evaluated by the simulators, but also from real data in the stripmap/spotlight mode. In order to obtain the desired raw data from conventional stripmap/spotlight mode, the azimuth time-varying filtering, which is implemented by de-rotation and low-pass filtering, is adopted. As raw signal of extended scenes in the stripmap/spotlight mode can efficiently be evaluated in the 2D Fourier domain, the proposed approach provides an efficient sliding spotlight SAR simulator of extended scenes. Simulation results validate this efficient simulator.
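The de-rotation step of such azimuth time-varying filtering can be illustrated in a few lines. The rotation rate `k_rot` and the pure-chirp signal model below are hypothetical stand-ins, not the article's parameters; the point is only that multiplying by a conjugate chirp removes the quadratic azimuth phase, compressing the bandwidth so a subsequent low-pass filter can select the desired support.

```python
import numpy as np

def derotate(signal, t, k_rot):
    """Azimuth de-rotation (sketch): multiply by exp(-j*pi*k_rot*t^2)
    to cancel the quadratic phase introduced by the sliding of the
    antenna footprint, prior to low-pass filtering."""
    return signal * np.exp(-1j * np.pi * k_rot * t ** 2)

# A pure rotation chirp collapses to a constant after de-rotation:
t = np.linspace(-0.5, 0.5, 1024)
k_rot = 200.0
chirp = np.exp(1j * np.pi * k_rot * t ** 2)
flat = derotate(chirp, t, k_rot)
```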

  20. The Hip-Hop club scene: Gender, grinding and sex.

    Science.gov (United States)

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  1. Differential electrophysiological signatures of semantic and syntactic scene processing.

    Science.gov (United States)

    Võ, Melissa L-H; Wolfe, Jeremy M

    2013-09-01

    In sentence processing, semantic and syntactic violations elicit differential brain responses observable in event-related potentials: An N400 signals semantic violations, whereas a P600 marks inconsistent syntactic structure. Does the brain register similar distinctions in scene perception? To address this question, we presented participants with semantic inconsistencies, in which an object was incongruent with a scene's meaning, and syntactic inconsistencies, in which an object violated structural rules. We found a clear dissociation between semantic and syntactic processing: Semantic inconsistencies produced negative deflections in the N300-N400 time window, whereas mild syntactic inconsistencies elicited a late positivity resembling the P600 found for syntactic inconsistencies in sentence processing. Extreme syntactic violations, such as a hovering beer bottle defying gravity, were associated with earlier perceptual processing difficulties reflected in the N300 response, but failed to produce a P600 effect. We therefore conclude that different neural populations are active during semantic and syntactic processing of scenes, and that syntactically impossible object placements are processed in a categorically different manner than are syntactically resolvable object misplacements.

  2. Oxytocin increases amygdala reactivity to threatening scenes in females.

    Science.gov (United States)

    Lischke, Alexander; Gamer, Matthias; Berger, Christoph; Grossmann, Annette; Hauenstein, Karlheinz; Heinrichs, Markus; Herpertz, Sabine C; Domes, Gregor

    2012-09-01

    The neuropeptide oxytocin (OT) is well known for its profound effects on social behavior, which appear to be mediated by an OT-dependent modulation of amygdala activity in the context of social stimuli. In humans, OT decreases amygdala reactivity to threatening faces in males, but enhances amygdala reactivity to similar faces in females, suggesting sex-specific differences in OT-dependent threat-processing. To further explore whether OT generally enhances amygdala-dependent threat-processing in females, we used functional magnetic resonance imaging (fMRI) in a randomized within-subject crossover design to measure amygdala activity in response to threatening and non-threatening scenes in 14 females following intranasal administration of OT or placebo. Participants' eye movements were recorded to investigate whether an OT-dependent modulation of amygdala activity is accompanied by enhanced exploration of salient scene features. Although OT had no effect on participants' gazing behavior, it increased amygdala reactivity to scenes depicting social and non-social threat. In females, OT may, thus, enhance the detection of threatening stimuli in the environment, potentially by interacting with gonadal steroids, such as progesterone and estrogen. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Hierarchical, Three-Dimensional Measurement System for Crime Scene Scanning.

    Science.gov (United States)

    Marcin, Adamczyk; Maciej, Sieniło; Robert, Sitnik; Adam, Woźniak

    2017-07-01

    We present a new generation of three-dimensional (3D) measuring systems, developed for the process of crime scene documentation. This measuring system facilitates the preparation of more insightful, complete, and objective documentation for crime scenes. Our system reflects the actual requirements for hierarchical documentation, and it consists of three independent 3D scanners: a laser scanner for overall measurements, a situational structured light scanner for more minute measurements, and a detailed structured light scanner for the most detailed parts of the scene. Each scanner has its own spatial resolution, of 2.0, 0.3, and 0.05 mm, respectively. The results of interviews we have conducted with technicians indicate that our developed 3D measuring system has significant potential to become a useful tool for forensic technicians. To ensure the maximum compatibility of our measuring system with the standards that regulate the documentation process, we have also performed a metrological validation and designated the maximum permissible length measurement error E_MPE for each structured light scanner. In this study, we present additional results regarding documentation processes conducted during crime scene inspections and a training session. © 2017 American Academy of Forensic Sciences.

  4. Choosing Your Poison: Optimizing Simulator Visual System Selection as a Function of Operational Tasks

    Science.gov (United States)

    Sweet, Barbara T.; Kaiser, Mary K.

    2013-01-01

    Although current-technology simulator visual systems can achieve extremely realistic levels, they do not completely replicate the experience of a pilot sitting in the cockpit, looking at the outside world. Some differences in experience are due to visual artifacts, or perceptual features that would not be present in a naturally viewed scene. Others are due to features that are missing from the simulated scene. In this paper, these differences will be defined and discussed. The significance of these differences will be examined as a function of several particular operational tasks. A framework to facilitate the choice of visual system characteristics based on operational task requirements will be proposed.

  5. Visual search in barn owls: Task difficulty and saccadic behavior.

    Science.gov (United States)

    Orlowski, Julius; Ben-Shahar, Ohad; Wagner, Hermann

    2018-01-01

    How do we find what we are looking for? A target can be in plain view, but it may be detected only after extensive search. During a search we make directed attentional deployments like saccades to segment the scene until we detect the target. Depending on difficulty, the search may be fast with few attentional deployments or slow with many, shorter deployments. Here we study visual search in barn owls by tracking their overt attentional deployments, that is, their head movements, with a camera. We conducted a low-contrast feature search, a high-contrast orientation conjunction search, and a low-contrast orientation conjunction search, each with set sizes varying from 16 to 64 items. The barn owls were able to learn all of these tasks and showed serial search behavior. In a subsequent step, we analyzed how search behavior of owls changes with search complexity. We compared the search mechanisms in these three serial searches with results from pop-out searches our group had reported earlier. Saccade amplitude shortened and fixation duration increased in difficult searches. Also, in conjunction search saccades were guided toward items with shared target features. These data suggest that during visual search, barn owls utilize mechanisms similar to those that humans use.
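Serial versus pop-out search is conventionally quantified by the slope of response time against set size: a near-zero slope indicates parallel (pop-out) search, while a positive slope gives the per-item search cost. A sketch of that standard analysis (the function name and units are assumptions, not the study's code):

```python
import numpy as np

def search_slope(set_sizes, reaction_times):
    """Least-squares slope (ms per item) and intercept (ms) of
    reaction time regressed on set size -- the classic index used to
    distinguish serial from pop-out visual search."""
    slope, intercept = np.polyfit(set_sizes, reaction_times, 1)
    return slope, intercept
```

For example, perfectly linear data with a 25 ms/item cost and a 400 ms baseline yields exactly those parameters back.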

  6. Miniature mass analyzer

    CERN Document Server

    Cuna, C; Lupsa, N; Cuna, S; Tuzson, B

    2003-01-01

    The paper presents the concept of different mass analyzers that were specifically designed as small-dimension instruments able to detect the main environmental pollutants with great sensitivity and accuracy. Mass spectrometers are well-suited instruments for the chemical and isotopic analysis needed in environmental surveillance. Usually, this is done by sampling the soil, air or water, followed by laboratory analysis. To avoid drawbacks caused by sample alteration during the sampling process and transport, 'in situ' analysis is preferred. Theoretically, any type of mass analyzer can be miniaturized, but some are more appropriate than others. Quadrupole mass filter and trap, magnetic sector, time-of-flight and ion cyclotron mass analyzers can all be successfully shrunk; for each of them some performance is sacrificed, so one must know which parameters must be kept unchanged. To satisfy the miniaturization criteria of the analyzer, it is necessary to use asymmetrical geometries, with ion beam obl...

  7. Analog multivariate counting analyzers

    CERN Document Server

    Nikitin, A V; Armstrong, T P

    2003-01-01

    Characterizing rates of occurrence of various features of a signal is of great importance in numerous types of physical measurements. Such signal features can be defined as certain discrete coincidence events, e.g. crossings of a signal with a given threshold, or occurrence of extrema of a certain amplitude. We describe measuring rates of such events by means of analog multivariate counting analyzers. Given a continuous scalar or multicomponent (vector) input signal, an analog counting analyzer outputs a continuous signal with the instantaneous magnitude equal to the rate of occurrence of certain coincidence events. The analog nature of the proposed analyzers allows us to reformulate many problems of the traditional counting measurements, and cast them in a form which is readily addressed by methods of differential calculus rather than by algebraic or logical means of digital signal processing. Analog counting analyzers can be easily implemented in discrete or integrated electronic circuits, do not suffer fro...

  8. Attentional Bias towards Emotional Scenes in Boys with Attention Deficit Hyperactivity Disorder.

    Science.gov (United States)

    Pishyareh, Ebrahim; Tehrani-Doost, Mehdi; Mahmoodi-Gharaie, Javad; Khorrami, Anahita; Joudi, Mitra; Ahmadi, Mehrnoosh

    2012-01-01

    Children with attention-deficit/hyperactivity disorder (ADHD) react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate the visual orientation of children with ADHD towards paired emotional scenes. Thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented with paired emotional and neutral scenes in the four following categories: pleasant-neutral, pleasant-unpleasant, unpleasant-neutral, and neutral-neutral. Meanwhile, their visual orientations towards these pictures were evaluated using an eye-tracking system. The number and duration of first fixations and the duration of first gaze were compared between the two groups using MANOVA. The performance of each group in the different categories was also analyzed using the Friedman test. With regard to the duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, children with ADHD spent less time on pleasant pictures than the normal group while looking at pleasant-neutral and unpleasant-pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant-neutral pairs (P<0.01). Based on the findings of this study, it could be concluded that children with ADHD attend to unpleasant conditions more than normal children do, which leads to their emotional reactivity.

  9. Scene and character: interdisciplinary analysis of musical and sound symbols for higher education

    Directory of Open Access Journals (Sweden)

    Josep Gustems Carnicer

    2017-01-01

    Full Text Available The aim of this paper is an interdisciplinary, educational analysis of how characters from literature are depicted in the world of music (opera, ballet, musical theatre, program music, audiovisual media, etc.) through a wide range of resources and creative processes across disciplines that include or encompass sound. To that end, a multidisciplinary literature and documentary review is conducted, drawing on the most relevant texts and principal authors concerning dynamic and stable personality models; the analysis of vocal features on stage and in audiovisual media; the leitmotiv as a symbol and sonic representation of a character; and the conflicts characters face, how they may overcome them, and how those transitions can be translated into music. Myths brought to the musical stage, character stereotypes, and the sound symbols that characterize this scenic and literary content are also addressed. Notably, there is broad consensus on the use of sound resources to characterize different characters throughout the history of Western music across its various styles and genres. Finally, indications for their use are given and activities for higher education are suggested.

  10. Active visual search in non-stationary scenes: coping with temporal variability and uncertainty

    Science.gov (United States)

    Ušćumlić, Marija; Blankertz, Benjamin

    2016-02-01

    Objective. State-of-the-art experiments for studying neural processes underlying visual cognition often constrain sensory inputs (e.g., static images) and our behavior (e.g., fixed eye-gaze, long eye fixations), isolating or simplifying the interaction of neural processes. Motivated by the non-stationarity of our natural visual environment, we investigated the electroencephalography (EEG) correlates of visual recognition while participants overtly performed visual search in non-stationary scenes. We hypothesized that visual effects (such as those typically used in human-computer interfaces) may increase temporal uncertainty (with reference to fixation onset) of cognition-related EEG activity in an active search task and therefore require novel techniques for single-trial detection. Approach. We addressed fixation-related EEG activity in an active search task with respect to stimulus-appearance styles and dynamics. Alongside popping-up stimuli, our experimental study embraces two composite appearance styles based on fading-in, enlarging, and motion effects. Additionally, we explored whether the knowledge obtained in the pop-up experimental setting can be exploited to boost the EEG-based intention-decoding performance when facing transitional changes of visual content. Main results. The results confirmed our initial hypothesis that the dynamic of visual content can increase temporal uncertainty of the cognition-related EEG activity in active search with respect to fixation onset. This temporal uncertainty challenges the pivotal aim to keep the decoding performance constant irrespective of visual effects. Importantly, the proposed approach for EEG decoding based on knowledge transfer between the different experimental settings gave a promising performance. Significance. Our study demonstrates that the non-stationarity of visual scenes is an important factor in the evolution of cognitive processes, as well as in the dynamic of ocular behavior (i.e., dwell time and
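A fixation-locked epoching step of the kind such fixation-related EEG analyses rely on might look like the sketch below. The function name, window bounds, and sampling rate are illustrative assumptions, not the study's parameters; the temporal uncertainty the authors describe means a single-trial classifier must tolerate jitter of the cognitive response inside each extracted window.

```python
import numpy as np

def fixation_epochs(eeg, fs, fixation_onsets_s, tmin=-0.2, tmax=0.6):
    """Cut fixation-locked epochs from a (channels x samples) EEG
    array (sketch). Epochs whose window would fall outside the
    recording are silently dropped."""
    n0 = int(round(tmin * fs))
    n1 = int(round(tmax * fs))
    epochs = []
    for onset in fixation_onsets_s:
        center = int(round(onset * fs))
        a, b = center + n0, center + n1
        if a >= 0 and b <= eeg.shape[1]:
            epochs.append(eeg[:, a:b])
    return np.stack(epochs)  # trials x channels x samples
```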

  11. Goal-side selection in soccer penalty kicking when viewing natural scenes

    Directory of Open Access Journals (Sweden)

    Matthias eWeigelt

    2012-09-01

    Full Text Available The present study investigates the influence of goalkeeper displacement on goal-side selection in soccer penalty kicking. Facing a penalty situation, participants viewed photo-realistic images of a goalkeeper and a soccer goal. In the action selection task, they were asked to kick to the greater goal side, and in the perception task, they indicated the position of the goalkeeper on the goal line. To this end, the goalkeeper was depicted in a regular goalkeeping posture, standing either in the exact middle of the goal or displaced at different distances to the left or right of the goal’s center. Results showed that the goalkeeper’s position on the goal line systematically affected goal-side selection, even when participants were not mindful of the displacement. These findings provide further support for the notion that the implicit processing of the stimulus layout in natural scenes can affect action selection in complex environments, such as in soccer penalty shooting.

  12. Effects of scene content and layout on the perceived light direction in 3D spaces.

    Science.gov (United States)

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments to investigate how the perception of the light direction is influenced by a scene's objects and their layout using real scenes. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, human perception of light should integrally consider materials, scene content, and layout.
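The link between visible surface shading and the inferred light direction can be illustrated with the standard Lambertian least-squares estimate: patches with known unit normals constrain the light vector, and the more distinct visible faces a shape exposes (a sphere being the extreme case), the better conditioned the system becomes, echoing the second experiment's finding. This is a textbook sketch, not the authors' psychophysical method.

```python
import numpy as np

def estimate_light_direction(normals, intensities):
    """Least-squares light direction under the Lambertian model
    I = n . l (sketch). `normals` is an (m x 3) array of unit surface
    normals facing the light; `intensities` their observed shading."""
    l, *_ = np.linalg.lstsq(normals, intensities, rcond=None)
    return l / np.linalg.norm(l)
```

Four faces tilted symmetrically around an overhead light recover that light direction exactly.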

  13. Preferential recruitment of the basolateral amygdala during memory encoding of negative scenes in posttraumatic stress disorder.

    Science.gov (United States)

    Patel, Ronak; Girard, Todd A; Pukay-Martin, Nicole; Monson, Candice

    2016-04-01

    The vast majority of functional neuroimaging studies in posttraumatic stress disorder (PTSD) have examined the amygdala as a unitary structure. However, an emerging body of studies indicates that separable functions are subserved by discrete amygdala subregions. The basolateral subdivision (BLA), as compared with the centromedial amygdala (CMA), plays a unique role in learning and memory-based processes for threatening events, and alterations to the BLA have been implicated in the pathogenesis of PTSD. We assessed whether PTSD is associated with differential involvement of the BLA versus the CMA during successful encoding of emotionally charged events. Participants with PTSD (n=11) and a trauma-exposed comparison (TEC) group (n=11) viewed a series of photos that varied in valence (negative versus positive) and arousal (high versus low) while undergoing functional magnetic resonance imaging (fMRI). Subsequently, participants completed an old/new recognition memory test. Using analytic methods based on probabilistic cytoarchitectonic mapping, PTSD was associated with greater activation of the BLA, as compared to the CMA, during successful encoding of negative scenes, a finding which was not observed in the TEC group. Moreover, this memory-related activity in the BLA independently predicted PTSD status. Contrary to hypotheses, there was no evidence of altered BLA activity during memory encoding of high arousing relative to low arousing scenes. Task-related brain activation in PTSD does not appear to be consistent across the entire amygdala. Importantly, memory-related processing of negative information in PTSD is associated with preferential recruitment of the BLA. Copyright © 2016. Published by Elsevier Inc.

  14. Temporal and spatial neural dynamics in the perception of basic emotions from complex scenes.

    Science.gov (United States)

    Costa, Tommaso; Cauda, Franco; Crini, Manuella; Tatu, Mona-Karina; Celeghin, Alessia; de Gelder, Beatrice; Tamietto, Marco

    2014-11-01

    The different temporal dynamics of emotions are critical to understand their evolutionary role in the regulation of interactions with the surrounding environment. Here, we investigated the temporal dynamics underlying the perception of four basic emotions from complex scenes varying in valence and arousal (fear, disgust, happiness and sadness) with the millisecond time resolution of Electroencephalography (EEG). Event-related potentials were computed and each emotion showed a specific temporal profile, as revealed by distinct time segments of significant differences from the neutral scenes. Fear perception elicited significant activity at the earliest time segments, followed by disgust, happiness and sadness. Moreover, fear, disgust and happiness were characterized by two time segments of significant activity, whereas sadness showed only one long-latency time segment of activity. Multidimensional scaling was used to assess the correspondence between neural temporal dynamics and the subjective experience elicited by the four emotions in a subsequent behavioral task. We found a high coherence between these two classes of data, indicating that psychological categories defining emotions have a close correspondence at the brain level in terms of neural temporal dynamics. Finally, we localized the brain regions of time-dependent activity for each emotion and time segment with the low-resolution brain electromagnetic tomography. Fear and disgust showed widely distributed activations, predominantly in the right hemisphere. Happiness activated a number of areas mostly in the left hemisphere, whereas sadness showed a limited number of active areas at late latency. The present findings indicate that the neural signature of basic emotions can emerge as the byproduct of dynamic spatiotemporal brain networks as investigated with millisecond-range resolution, rather than in time-independent areas involved uniquely in the processing of one specific emotion. © The Author (2013
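Classical (Torgerson) multidimensional scaling, the family of methods used above to relate neural dynamics to behavioral ratings, can be sketched as follows. This is a generic implementation of the technique, not the authors' pipeline: it double-centres the squared distance matrix and takes the top eigenvectors as low-dimensional coordinates.

```python
import numpy as np

def classical_mds(dist, n_dims=2):
    """Classical (Torgerson) MDS sketch: recover an n_dims-dimensional
    configuration whose pairwise Euclidean distances best match the
    given symmetric distance matrix."""
    n = dist.shape[0]
    j = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    b = -0.5 * j @ (dist ** 2) @ j               # double-centred Gram matrix
    vals, vecs = np.linalg.eigh(b)
    order = np.argsort(vals)[::-1][:n_dims]      # largest eigenvalues first
    return vecs[:, order] * np.sqrt(np.maximum(vals[order], 0.0))
```

When the input distances are exactly Euclidean in `n_dims` dimensions, the recovered configuration reproduces them (up to rotation and reflection).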

  15. Novelty vs. familiarity principles in preference decisions: Task-context of past experience matters

    Directory of Open Access Journals (Sweden)

    Hsin-I eLiao

    2011-03-01

    Full Text Available Our preferences are shaped by past experience in many ways, but a systematic understanding of the factors is yet to be achieved. For example, studies of the mere exposure effect show that experience with an item leads to increased liking (familiarity preference), but the exact opposite tendency is found in other studies utilizing dishabituation (novelty preference). Recently, it has been found that image category affects whether familiarity or novelty preference emerges from repeated stimulus exposure (Park, Shimojo, and Shimojo, PNAS 2010). Faces elicited familiarity preference, but natural scenes elicited novelty preference. In their task, preference judgments were made throughout all exposures, raising the question of whether the task-context during exposure was involved. We adapt their paradigm, testing whether passive exposure or objective judgment task-contexts lead to different results. Results showed that after passive viewing, familiar faces were preferred, but no preference bias in either direction was found with natural scenes or with geometric figures (control). After exposure during the objective judgment task, familiar faces were preferred, novel natural scenes were preferred, and no preference bias was found with geometric figures. The overall results replicate the segregation of preference biases across object categories and suggest that the preferences for familiar faces and novel natural scenes are modulated by task-context memory at different processing levels or degrees of selection involvement. Possible underlying mechanisms of the two types of preferences are discussed.

  16. Analyzing Stereotypes in Media.

    Science.gov (United States)

    Baker, Jackie

    1996-01-01

    A high school film teacher studied how students recognized messages in film, examining how film education could help students identify and analyze racial and gender stereotypes. Comparison of students' attitudes before and after the film course found that the course was successful in raising students' consciousness. (SM)

  17. Centrifugal analyzer development

    International Nuclear Information System (INIS)

    Burtis, C.A.; Bauer, M.L.; Bostick, W.D.

    1976-01-01

    The development of the centrifuge fast analyzer (CFA) is reviewed. The development of a miniature CFA with computer data analysis is reported and applications for automated diagnostic chemical and hematological assays are discussed. A portable CFA system with microprocessor was adapted for field assays of air and water samples for environmental pollutants, including ammonia, nitrates, nitrites, phosphates, sulfates, and silica. 83 references

  18. American options analyzed differently

    NARCIS (Netherlands)

    Nieuwenhuis, J.W.

    2003-01-01

    In this note we analyze in a discrete-time context and with a finite outcome space American options starting with the idea that every tradable should be a martingale under a certain measure. We believe that in this way American options become more understandable to people with a good working

  19. CAMEO-SIM: a physics-based broadband scene simulation tool for assessment of camouflage, concealment, and deception methodologies

    Science.gov (United States)

    Moorhead, Ian R.; Gilmore, Marilyn A.; Houlbrook, Alexander W.; Oxford, David E.; Filbee, David R.; Stroud, Colin A.; Hutchings, G.; Kirk, Albert

    2001-09-01

    Assessment of camouflage, concealment, and deception (CCD) methodologies is not a trivial problem; conventionally the only method has been to carry out field trials, which are both expensive and subject to the vagaries of the weather. In recent years computing power has increased, such that there are now many research programs using synthetic environments for CCD assessments. Such an approach is attractive; the user has complete control over the environmental parameters and many more scenarios can be investigated. The UK Ministry of Defence is currently developing a synthetic scene generation tool for assessing the effectiveness of air vehicle camouflage schemes. The software is sufficiently flexible to allow it to be used in a broader range of applications, including full CCD assessment. The synthetic scene simulation system (CAMEO-SIM) has been developed, as an extensible system, to provide imagery within the 0.4 to 14 micrometer spectral band with as high a physical fidelity as possible. It consists of a scene design tool, an image generator that incorporates both radiosity and ray-tracing processes, and an experimental trials tool. The scene design tool allows the user to develop a 3D representation of the scenario of interest from a fixed viewpoint. Target(s) of interest can be placed anywhere within this 3D representation and may be either static or moving. Different illumination conditions and effects of the atmosphere can be modeled together with directional reflectance effects. The user has complete control over the level of fidelity of the final image. The output from the rendering tool is a sequence of radiance maps, which may be used by sensor models or for experimental trials in which observers carry out target acquisition tasks. The software also maintains an audit trail of all data selected to generate a particular image, both in terms of material properties used and the rendering options chosen. A range of verification tests has shown that the

  20. Rapid discrimination of visual scene content in the human brain

    Science.gov (United States)

    Anokhin, Andrey P.; Golosheykin, Simon; Sirevaag, Erik; Kristjansson, Sean; Rohrbaugh, John W.; Heath, Andrew C.

    2007-01-01

    The rapid evaluation of complex visual environments is critical for an organism's adaptation and survival. Previous studies have shown that emotionally significant visual scenes, both pleasant and unpleasant, elicit a larger late positive wave in the event-related brain potential (ERP) than emotionally neutral pictures. The purpose of the present study was to examine whether neuroelectric responses elicited by complex pictures discriminate between specific, biologically relevant contents of the visual scene and to determine how early in the picture processing this discrimination occurs. Subjects (n=264) viewed 55 color slides differing in both scene content and emotional significance. No categorical judgments or responses were required. Consistent with previous studies, we found that emotionally arousing pictures, regardless of their content, produce a larger late positive wave than neutral pictures. However, when pictures were further categorized by content, anterior ERP components in a time window between 200−600 ms following stimulus onset showed a high selectivity for pictures with erotic content compared to other pictures regardless of their emotional valence (pleasant, neutral, and unpleasant) or emotional arousal. The divergence of ERPs elicited by erotic and non-erotic contents started at 185 ms post-stimulus in the fronto-central midline regions, with a later onset in parietal regions. This rapid, selective, and content-specific processing of erotic materials and its dissociation from other pictures (including emotionally positive pictures) suggests the existence of a specialized neural network for prioritized processing of a distinct category of biologically relevant stimuli with high adaptive and evolutionary significance. PMID:16712815

  1. The capture and recreation of 3D auditory scenes

    Science.gov (United States)

    Li, Zhiyun

    The main goal of this research is to develop the theory and implement practical tools (in both software and hardware) for the capture and recreation of 3D auditory scenes. Our research is expected to have applications in virtual reality, telepresence, film, music, video games, auditory user interfaces, and sound-based surveillance. The first part of our research is concerned with sound capture via a spherical microphone array. The advantage of this array is that it can be steered into any 3D direction digitally with the same beampattern. We develop design methodologies to achieve flexible microphone layouts, optimal beampattern approximation and robustness constraints. We also design novel hemispherical and circular microphone array layouts for more spatially constrained auditory scenes. Using the captured audio, we then propose a unified and simple approach for recreating the scenes by exploiting the reciprocity principle that is satisfied between the two processes. Our approach makes the system easy to build and practical. Using this approach, we can capture the 3D sound field by a spherical microphone array and recreate it using a spherical loudspeaker array, and ensure that the recreated sound field matches the recorded field up to a high order of spherical harmonics. For some regular or semi-regular microphone layouts, we design an efficient parallel implementation of the multi-directional spherical beamformer by using the rotational symmetries of the beampattern and of the spherical microphone array. This can be implemented in either software or hardware and easily adapted for other regular or semi-regular layouts of microphones. In addition, we extend this approach to headphone-based systems. Design examples and simulation results are presented to verify our algorithms. Prototypes are built and tested in real-world auditory scenes.

  2. Optic flow aided navigation and 3D scene reconstruction

    Science.gov (United States)

    Rollason, Malcolm

    2013-10-01

    An important enabler for low cost airborne systems is the ability to exploit low cost inertial instruments. An Inertial Navigation System (INS) can provide a navigation solution, when GPS is denied, by integrating measurements from inertial sensors. However, the gyrometer and accelerometer biases of low cost inertial sensors cause compound errors in the integrated navigation solution. This paper describes experiments to establish whether (and to what extent) the navigation solution can be aided by fusing measurements from an on-board video camera with measurements from the inertial sensors. The primary aim of the work was to establish whether optic flow aided navigation is beneficial even when the 3D structure within the observed scene is unknown. A further aim was to investigate whether an INS can help to infer 3D scene content from video. Experiments with both real and synthetic data have been conducted. Real data was collected using an AR Parrot quadrotor. Empirical results illustrate that optic flow provides a useful aid to navigation even when the 3D structure of the observed scene is not known. With optic flow aiding of the INS, the computed trajectory is consistent with the true camera motion, whereas the unaided INS yields a rapidly increasing position error (the data represents ~40 seconds, after which the unaided INS is ~50 metres in error and has passed through the ground). The results of the Monte Carlo simulation concur with the empirical result. Position errors, which grow as a quadratic function of time when unaided, are substantially checked by the availability of optic flow measurements.
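The quadratic growth of the unaided position error described above can be illustrated by double-integrating a constant accelerometer bias; a minimal sketch, where the bias value is an assumed, illustrative figure, not a number from the paper:

```python
# Double-integrate a constant accelerometer bias to show the unaided-INS
# position error growing roughly as 0.5 * bias * t**2.
def ins_position_error(bias, dt, steps):
    v = p = 0.0
    errs = []
    for _ in range(steps):
        v += bias * dt   # velocity error accumulates linearly in time
        p += v * dt      # position error accumulates quadratically in time
        errs.append(p)
    return errs

bias = 0.05              # m/s^2, an assumed low-cost accelerometer bias
dt = 0.01                # integration step, s
errs = ins_position_error(bias, dt, 4000)   # ~40 s of flight
```

After 40 s the accumulated error is close to 0.5 * 0.05 * 40**2 = 40 m, consistent with the ~50 m drift the abstract reports for its (different, unstated) sensor errors.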

  3. Analyzed Using Statistical Moments

    International Nuclear Information System (INIS)

    Oltulu, O.

    2004-01-01

    Diffraction enhanced imaging (DEI) is a new x-ray imaging method derived from radiography. The method uses a monochromatic x-ray beam and introduces an analyzer crystal between the object and the detector. The narrow angular acceptance of the analyzer crystal generates improved contrast over conventional radiography. While standard radiography can produce an 'absorption image', DEI produces 'apparent absorption' and 'apparent refraction' images of superior quality. Objects with similar absorption properties may not be distinguished with conventional techniques because their absorption coefficients are too close. This problem becomes more pronounced when an object has scattering properties. A simple approach is introduced that utilizes scattered radiation to obtain 'pure absorption' and 'pure refraction' images

  4. Dynamic IR scene projector based upon the digital micromirror device

    Science.gov (United States)

    Beasley, D. Brett; Bender, Matt W.; Crosby, Jay; Messer, Tim; Saylor, Daniel A.

    2001-08-01

    Optical Sciences Corp. has developed a new dynamic infrared scene projector technology called the Micromirror Array Projector System (MAPS). The MAPS is based upon the Texas Instruments Digital Micromirror Device™, which has been modified to project images that are suitable for testing sensors and seekers operating in the UV, visible, and IR wavebands. The projector may be used in several configurations which are optimized for specific applications. This paper provides an overview of the design and performance of the MAPS projection system, as well as example imagery from prototype projector systems.

  5. Scene recognition and colorization for vehicle infrared images

    Science.gov (United States)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology in driving assistance systems, a scene recognition and colorization method is proposed in this paper. Various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and an MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects present. The results show that this strategy emphasizes the information in IR images that is important for human vision and could broaden the application of IR images in vehicle driving.

  6. Stereo Scene Flow for 3D Motion Analysis

    CERN Document Server

    Wedel, Andreas

    2011-01-01

    This book presents methods for estimating optical flow and scene flow motion with high accuracy, focusing on the practical application of these methods in camera-based driver assistance systems. Clearly and logically structured, the book builds from basic themes to more advanced concepts, culminating in the development of a novel, accurate and robust optic flow method. Features: reviews the major advances in motion estimation and motion analysis, and the latest progress of dense optical flow algorithms; investigates the use of residual images for optical flow; examines methods for deriving mot

  7. Scene structure in the saturation component of color images

    Science.gov (United States)

    Thomas, Bruce A.; Strickland, Robin N.

    1996-04-01

    A tenet of a new class of color image enhancement algorithms is the observation that the saturation component of color images often contains what appears to be valid image structure depicting the underlying scene. In this work we present the findings of a study of the structural correspondence between the saturation and luminance components of a large database of color images. Various statistical relationships are identified. The correspondence of edges at different scales in the sense of Marr's theory of vision is also observed. Several new color image enhancement algorithms which exploit these unique characteristics are described.

  8. Photorealistic ray tracing to visualize automobile side mirror reflective scenes.

    Science.gov (United States)

    Lee, Hocheol; Kim, Kyuman; Lee, Gang; Lee, Sungkoo; Kim, Jingu

    2014-10-20

    We describe an interactive visualization procedure for determining the optimal surface of a special automobile side mirror, thereby removing the blind spot, without the need for feedback from the error-prone manufacturing process. If the horizontally progressive curvature distributions are set to the semi-mathematical expression for a free-form surface, the surface point set can then be derived through numerical integration. This is then converted to a NURBS surface while retaining the surface curvature. Then, reflective scenes from the driving environment can be virtually realized using photorealistic ray tracing, in order to evaluate how these reflected images would appear to drivers.

  9. Crimes Scenes as Augmented Reality, off-screen, online and offline

    DEFF Research Database (Denmark)

    Sandvik, Kjetil; Waade, Anne-Marit

    Our field of investigation is site specific realism in crime fiction and spatial production as media specific features. We analyze the (re)production of crime scenes in respectively crime series, computer games and tourist practice, and relate this to the ideas of augmented reality. Using...... a distinction between places as locations situated in the physical world and spaces as imagined or virtual locations as our point of departure, this paper investigates how places in various ways have become augmented by means of mediatization. Augmented reality represents processes of mediatization that broaden...... and enhance spatial experiences. These processes are characterized by the activation of users and the creation of artificial operational environments embedded in various physical or virtual locations. The idea of augmented spatial practice is related to the ideas of site specific aesthetic...

  10. Crimes Scenes as Augmented Reality, off-screen, online and offline

    DEFF Research Database (Denmark)

    Sandvik, Kjetil; Waade, Anne Marit

    2008-01-01

    Our field of investigation is site specific realism in crime fiction and spatial production as media specific features. We analyze the (re)production of crime scenes in respectively crime series, computer games and tourist practice, and relate this to the ideas of augmented reality. Using...... a distinction between places as locations situated in the physical world and spaces as imagined or virtual locations as our point of departure, this paper investigates how places in various ways have become augmented by means of mediatization. Augmented reality represents processes of mediatization that broaden...... and enhance spatial experiences. These processes are characterized by the activation of users and the creation of artificial operational environments embedded in various physical or virtual locations. The idea of augmented spatial practice is related to the ideas of site specific aesthetic...

  11. Standoff alpha radiation detection for hot cell imaging and crime scene investigation

    Science.gov (United States)

    Kerst, Thomas; Sand, Johan; Ihantola, Sakari; Peräjärvi, Kari; Nicholl, Adrian; Hrnecek, Erich; Toivonen, Harri; Toivonen, Juha

    2018-02-01

    This paper presents the remote detection of alpha contamination in a nuclear facility. Alpha-active material in a shielded nuclear radiation containment chamber has been localized by optical means. Furthermore, sources of radiation danger have been identified in a staged crime scene setting. For this purpose, an electron-multiplying charge-coupled device camera was used to capture photons generated by alpha-induced air scintillation (radioluminescence). The detected radioluminescence was superimposed with a regular photograph to reveal the origin of the light and thereby the alpha radioactive material. The experimental results show that standoff detection of alpha contamination is a viable tool in radiation threat detection. Furthermore, the spectrum of the radioluminescence in air is analyzed. Possibilities of camera-based alpha threat detection under various background lighting conditions are discussed.

  12. Fractional channel multichannel analyzer

    Science.gov (United States)

    Brackenbush, L.W.; Anderson, G.A.

    1994-08-23

    A multichannel analyzer incorporating the features of the present invention obtains the effect of fractional channels thus greatly reducing the number of actual channels necessary to record complex line spectra. This is accomplished by using an analog-to-digital converter in the asynchronous mode, i.e., the gate pulse from the pulse height-to-pulse width converter is not synchronized with the signal from a clock oscillator. This saves power and reduces the number of components required on the board to achieve the effect of radically expanding the number of channels without changing the circuit board. 9 figs.
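The "effect of fractional channels" can be illustrated with a toy simulation; this is an illustrative sketch of asynchronous-gating dithering, not the patented circuit itself. If each event of non-integer pulse height lands in one of the two neighbouring integer channels with probability given by its fractional position, the count-weighted centroid recovers the height to a small fraction of a channel:

```python
import random

def record(true_height, n_events, rng):
    """Histogram events whose true height falls between two integer channels.
    Each event is dithered into the lower or upper neighbouring channel with
    probability set by the fractional part of the height."""
    counts = {}
    base, frac = int(true_height), true_height - int(true_height)
    for _ in range(n_events):
        ch = base + (1 if rng.random() < frac else 0)
        counts[ch] = counts.get(ch, 0) + 1
    return counts

def centroid(counts):
    """Count-weighted centroid of the histogram, in channel units."""
    n = sum(counts.values())
    return sum(ch * c for ch, c in counts.items()) / n

rng = random.Random(1)
counts = record(137.3, 20000, rng)   # true height 137.3 channels
est = centroid(counts)               # sub-channel estimate from 2 channels
```

Only two physical channels are occupied, yet the centroid resolves the line position far below one channel width, which is the sense in which fractional channels reduce the number of actual channels needed.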

  13. Touch DNA sampling with SceneSafe Fast™ minitapes.

    Science.gov (United States)

    Stoop, Britta; Defaux, Priscille Merciani; Utz, Silvia; Zieger, Martin

    2017-11-01

    To achieve optimal results in the forensic analysis of trace DNA, choosing the right collection technique is crucial. Three common approaches are currently well-established for DNA retrieval from items of clothing, namely cutting, swabbing and tape-lifting. The latter two are non-destructive and therefore preferable on items of value. Even though the most recently established technique of DNA retrieval by adhesive tapes has been widely used for several years now, little information has been published so far on how well it performs compared to other methods. More importantly, when it comes to choosing the right DNA extraction method for forensic lifting tapes, the information a forensic geneticist can rely on is quite scarce. In our study we compared the two widely used, commercially available and automation-suitable magnetic bead-based extraction methods "iPrep Forensic Kit" and "PrepFiler Express BTA™ Kit" to conventional organic solvent extraction. The results demonstrate that DNA extraction from standardized saliva samples applied to SceneSafe Fast™ minitapes is most efficient with phenol-chloroform. We also provide evidence that SceneSafe Fast™ minitapes perform better than wet cotton swabs in the sampling of touch DNA from cotton fabric. Applying the tape only once in every spot on the tissue is thereby sufficient for a considerably better collection performance of the tapes compared to swabbing. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Evaluating color descriptors for object and scene recognition.

    Science.gov (United States)

    van de Sande, Koen E A; Gevers, Theo; Snoek, Cees G M

    2010-09-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been proposed. Because many different descriptors exist, a structured overview is required of color invariant descriptors in the context of image category recognition. Therefore, this paper studies the invariance properties and the distinctiveness of color descriptors (software to compute the color descriptors from this paper is available from http://www.colordescriptors.com) in a structured way. The analytical invariance properties of color descriptors are explored, using a taxonomy based on invariance properties with respect to photometric transformations, and tested experimentally using a data set with known illumination conditions. In addition, the distinctiveness of color descriptors is assessed experimentally using two benchmarks, one from the image domain and one from the video domain. From the theoretical and experimental results, it can be derived that invariance to light intensity changes and light color changes affects category recognition. The results further reveal that, for light intensity shifts, the usefulness of invariance is category-specific. Overall, when choosing a single descriptor and no prior knowledge about the data set and object and scene categories is available, the OpponentSIFT is recommended. Furthermore, a combined set of color descriptors outperforms intensity-based SIFT and improves category recognition by 8 percent on the PASCAL VOC 2007 and by 7 percent on the Mediamill Challenge.
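The opponent color space underlying descriptors such as the recommended OpponentSIFT uses a standard linear transform of RGB (O1 and O2 carry color information, O3 is intensity); a minimal sketch with arbitrary sample pixels:

```python
import numpy as np

def rgb_to_opponent(rgb):
    """Convert RGB values (last axis) to the opponent color space:
    O1 = (R-G)/sqrt(2), O2 = (R+G-2B)/sqrt(6), O3 = (R+G+B)/sqrt(3)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    o1 = (r - g) / np.sqrt(2)
    o2 = (r + g - 2 * b) / np.sqrt(6)
    o3 = (r + g + b) / np.sqrt(3)
    return np.stack([o1, o2, o3], axis=-1)

# a gray pixel and a pure red pixel
px = np.array([[0.5, 0.5, 0.5], [1.0, 0.0, 0.0]])
opp = rgb_to_opponent(px)
```

For an achromatic (gray) pixel, O1 and O2 vanish and all information moves to the intensity channel O3, which is why intensity shifts affect the opponent channels differently and why the paper's invariance taxonomy treats them separately.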

  15. Scene understanding based on network-symbolic models

    Science.gov (United States)

    Kuvich, Gary

    2005-05-01

    New generations of smart weapons and unmanned vehicles must have reliable perceptual systems that are similar to human vision. Instead of precise computations of 3-dimensional models, a network-symbolic system converts image information into an "understandable" Network-Symbolic format, which is similar to relational knowledge models. Logic of visual scenes can be captured in the Network-Symbolic models and used for the disambiguation of visual information. It is hard to use geometric operations for processing of natural images. Instead, the brain builds a relational network-symbolic structure of the visual scene, using different clues to set up the relational order of surfaces and objects. Feature, symbol, and predicate are equivalent in the biologically inspired Network-Symbolic systems. A linking mechanism binds these features/symbols into coherent structures, and the image is converted from a "raster" into a "vector" representation that can be better interpreted by higher-level knowledge structures. View-based object recognition is a hard problem for traditional algorithms that directly match a primary view of an object to a model. In Network-Symbolic Models, the derived structure, not the primary view, is a subject for recognition. Such recognition is not affected by local changes and appearances of the object as seen from a set of similar views.

  16. Collaborating Filtering Community Image Recommendation System Based on Scene

    Directory of Open Access Journals (Sweden)

    He Tao

    2017-01-01

    Full Text Available With the advancement of the smart city and the development of intelligent mobile terminals and wireless networks, traditional text information services no longer meet the needs of community residents, and community image services have appeared as a new media service. Since "there are pictures of the truth," images have become a way for community residents to understand and keep up with community dynamics, and image information services have become a new kind of information service. However, there are two major problems in image information services. Firstly, the low-level feature values extracted by current image feature extraction techniques are difficult for users to understand, and there is a semantic gap between the image content itself and the user's understanding; secondly, as the image data of community life increases quickly, it is difficult for users to find the image data that interests them. Aiming at these two problems, this paper proposes a unified image semantic scene model to express image content. On this basis, a collaborative filtering recommendation model fusing scene semantics is proposed. In the recommendation model, a user interest model balancing comprehensiveness and accuracy is proposed to improve the recommendation quality. The approach has achieved good results in the pilot cities of Wenzhou and Yan'an, where it is in normal operation.

  17. Estimating perception of scene layout properties from global image features.

    Science.gov (United States)

    Ross, Michael G; Oliva, Aude

    2010-01-08

    The relationship between image features and scene structure is central to the study of human visual perception and computer vision, but many of the specifics of real-world layout perception remain unknown. We do not know which image features are relevant to perceiving layout properties, or whether those features provide the same information for every type of image. Furthermore, we do not know the spatial resolutions required for perceiving different properties. This paper describes an experiment and a computational model that provide new insights on these issues. Humans perceive global spatial layout properties, such as dominant depth, openness, and perspective, from a single image. This work describes an algorithm that reliably predicts human layout judgments. The model's predictions are general, not specific to the observers it was trained on. Analysis reveals that the optimal spatial resolutions for determining layout vary with the content of the space and the property being estimated. Openness is best estimated at high resolution, depth is best estimated at medium resolution, and perspective is best estimated at low resolution. Given the reliability and simplicity of estimating the global layout of real-world environments, this model could help resolve perceptual ambiguities encountered by more detailed scene reconstruction schemas.
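As a crude stand-in for extracting a global feature at a chosen spatial resolution (a sketch, not the authors' actual feature set), block average-pooling reduces an image to a coarse grid whose cell count plays the role of the resolution being varied:

```python
import numpy as np

def pool(image, res):
    """Average-pool a 2D image down to a res x res grid of cell means,
    a simple proxy for a low-resolution global image feature."""
    h, w = image.shape
    row_groups = np.array_split(np.arange(h), res)
    col_groups = np.array_split(np.arange(w), res)
    return np.array([[image[np.ix_(rr, cc)].mean() for cc in col_groups]
                     for rr in row_groups])

img = np.arange(64.0).reshape(8, 8)   # toy 8x8 "image"
coarse = pool(img, 2)                 # 2x2 = very low resolution
```

Re-running the same analysis at several values of `res` is the kind of comparison the abstract reports: some layout properties (openness) need fine grids, others (perspective) survive aggressive pooling.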

  18. S3-2: Colorfulness Perception Adapting to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Yoko Mizokami

    2012-10-01

    Full Text Available Our visual system has the ability to adapt to the color characteristics of the environment and maintain stable color appearance. Many studies of chromatic adaptation and color constancy have suggested that different levels of visual processing are involved in the adaptation mechanism. In the case of colorfulness perception, it has been shown that the perception changes with adaptation to chromatic contrast modulation and to surrounding chromatic variance. However, it is still not clear how the perception changes in natural scenes and what levels of visual mechanisms contribute to it. Here, I will mainly present our recent work on colorfulness adaptation in natural images. In the experiment, we examined whether the colorfulness perception of an image was influenced by adaptation to natural images with different degrees of saturation. Natural and unnatural (shuffled or phase-scrambled) images were used as adapting and test images, and all combinations of adapting and test images were tested (e.g., the combination of natural adapting images and a shuffled test image). The results show that colorfulness perception was influenced by adaptation to the saturation of images. A test image appeared less colorful after adaptation to saturated images, and vice versa. The effect of colorfulness adaptation was the strongest for the combination of natural adapting and natural test images. The fact that the naturalness of the spatial structure in an image affects the strength of the adaptation effect implies that the recognition of natural scenes plays an important role in the adaptation mechanism.

  19. The Influence of Familiarity on Affective Responses to Natural Scenes

    Science.gov (United States)

    Sanabria Z., Jorge C.; Cho, Youngil; Yamanaka, Toshimasa

    This kansei study explored how familiarity with image-word combinations influences affective states. Stimuli were obtained from Japanese print advertisements (ads), and consisted of images (e.g., natural-scene backgrounds) and their corresponding headlines (advertising copy). Initially, a group of subjects evaluated their level of familiarity with images and headlines independently, and stimuli were filtered based on the results. In the main experiment, a different group of subjects rated their pleasure and arousal to, and familiarity with, image-headline combinations. The Self-Assessment Manikin (SAM) scale was used to evaluate pleasure and arousal, and a bipolar scale was used to evaluate familiarity. The results showed a high correlation between familiarity and pleasure, but low correlation between familiarity and arousal. The characteristics of the stimuli, and their effect on the variables of pleasure, arousal and familiarity, were explored through ANOVA. It is suggested that, in the case of natural-scene ads, familiarity with image-headline combinations may increase the pleasure response to the ads, and that certain components in the images (e.g., water) may increase arousal levels.

  20. HDR IMAGING FOR FEATURE DETECTION ON DETAILED ARCHITECTURAL SCENES

    Directory of Open Access Journals (Sweden)

    G. Kontogianni

    2015-02-01

    Full Text Available 3D reconstruction relies on accurate detection, extraction, description and matching of image features. This is even truer for complex architectural scenes, which demand high-quality 3D models without any loss of detail in geometry or color. Illumination conditions influence the radiometric quality of images, as standard sensors cannot depict properly a wide range of intensities in the same scene. Indeed, overexposed or underexposed pixels cause irreplaceable information loss and degrade the digital representation. Images taken under extreme lighting environments may thus be prohibitive for feature detection/extraction and consequently for matching and 3D reconstruction. High Dynamic Range (HDR) images could be helpful for these operators because they broaden the limits of the illumination range that Standard or Low Dynamic Range (SDR/LDR) images can capture and increase in this way the amount of detail contained in the image. Experimental results of this study support this assumption, examining state-of-the-art feature detectors applied to both standard dynamic range and HDR images.

  1. New Proposition for Redating of Mithraic Tauroctony Scene

    Science.gov (United States)

    Bon, Edi; Ćirković, Milan; Milosavljević, Ivana

    Considering the idea that the figures in the central icon of the Mithraic religion, the tauroctony (bull-slaying) scene, represent the equatorial constellations of the era in which the spring equinox lay between Taurus and Aries (Ulansey, 1989), it was hard to explain why some equatorial constellations (Orion and Libra) were not included in the Mithraic icons, since those constellations were equatorial at that time. Simulations of the sky for the times when the spring equinox was in the constellation of Taurus show that only a small range of equinox positions allows these two constellations to be excluded while all the other represented equatorial constellations are included (Taurus, Canis Minor, Hydra, Crater, Corvus, Scorpio). These positions correspond to the beginning of the age of Taurus, but they also make Gemini an equatorial constellation. Two of the main figures in the icons of the Mithraic religion are two identical figures, usually represented on each side of the bull, wearing Phrygian caps and holding torches. Their names, Cautes and Cautopates, and their appearance suggest that they represent the constellation of Gemini. In that case the main icon of the Mithraic religion could represent the event that happened around 4000 BC, when the spring equinox entered the constellation of Taurus. This position of the equator also contains Perseus as an equatorial constellation, and Ulansey argued that the god Mithras was the constellation of Perseus. In that case, all the figures in the main scene would be equatorial constellations.

  2. Intelligence-led crime scene processing. Part I: Forensic intelligence.

    Science.gov (United States)

    Ribaux, Olivier; Baylon, Amélie; Roux, Claude; Delémont, Olivier; Lock, Eric; Zingg, Christian; Margot, Pierre

    2010-02-25

    Forensic science is generally defined as the application of science to address questions related to the law. Too often, this view restricts the contribution of science to a single process which ultimately aims at bringing individuals to court while minimising the risk of a miscarriage of justice. In order to go beyond this paradigm, we propose to refocus attention on traces themselves, as remnants of a criminal activity, and on their information content. We postulate that traces contribute effectively to a wide variety of other informational processes that support decision making in many situations. In particular, they inform the actors of new policing strategies that place the treatment of information and intelligence at the centre of their systems. This contribution of forensic science to these security-oriented models is still not well identified and captured. In order to create the best conditions for the development of forensic intelligence, we suggest a framework that connects forensic science to intelligence-led policing (part I). Crime scene attendance and processing can be envisaged within this view. This approach gives indications about how to structure the knowledge used by crime scene examiners in their effective practice (part II). 2009 Elsevier Ireland Ltd. All rights reserved.

  3. Plutonium solution analyzer

    International Nuclear Information System (INIS)

    Burns, D.A.

    1994-09-01

    A fully automated analyzer has been developed for plutonium solutions. It was assembled from several commercially available modules, is based upon segmented flow analysis, and exhibits precision about an order of magnitude better than commercial units (0.5%-0.05% RSD). The system was designed to accept unmeasured, untreated liquid samples in the concentration range 40-240 g/L and produce a report with sample identification, sample concentrations, and an abundance of statistics. Optional hydraulics can accommodate samples in the concentration range 0.4-4.0 g/L. Operating at a typical rate of 30 to 40 samples per hour, it consumes only 0.074 mL of each sample and standard, and generates waste at the rate of about 1.5 mL per minute. No radioactive material passes through its multichannel peristaltic pump (which remains outside the glovebox, uncontaminated); samples are instead handled by a 6-port, 2-position chromatography-type loop valve. An accompanying computer is programmed in QuickBASIC 4.5 to provide both instrument control and data reduction. The program is user-friendly, and communication between operator and instrument is via computer screen displays and keyboard. Two important issues which have been addressed are waste minimization and operator safety (the analyzer can run in the absence of an operator, once its autosampler has been loaded).

  4. Ring Image Analyzer

    Science.gov (United States)

    Strekalov, Dmitry V.

    2012-01-01

    Ring Image Analyzer software analyzes images to recognize elliptical patterns. It determines the ellipse parameters (axes ratio, centroid coordinate, tilt angle). The program attempts to recognize elliptical fringes (e.g., Newton Rings) on a photograph and determine their centroid position, the short-to-long-axis ratio, and the angle of rotation of the long axis relative to the horizontal direction on the photograph. These capabilities are important in interferometric imaging and control of surfaces. In particular, this program has been developed and applied for determining the rim shape of precision-machined optical whispering gallery mode resonators. The program relies on a unique image recognition algorithm aimed at recognizing elliptical shapes, but can be easily adapted to other geometric shapes. It is robust against non-elliptical details of the image and against noise. Interferometric analysis of precision-machined surfaces remains an important technological instrument in hardware development and quality analysis. This software automates and increases the accuracy of this technique. The software has been developed for the needs of an R&TD-funded project and has become an important asset for the future research proposal to NASA as well as other agencies.

  5. Plutonium solution analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Burns, D.A.

    1994-09-01

    A fully automated analyzer has been developed for plutonium solutions. It was assembled from several commercially available modules, is based upon segmented flow analysis, and exhibits precision about an order of magnitude better than commercial units (0.5%-0.05% RSD). The system was designed to accept unmeasured, untreated liquid samples in the concentration range 40-240 g/L and produce a report with sample identification, sample concentrations, and an abundance of statistics. Optional hydraulics can accommodate samples in the concentration range 0.4-4.0 g/L. Operating at a typical rate of 30 to 40 samples per hour, it consumes only 0.074 mL of each sample and standard, and generates waste at the rate of about 1.5 mL per minute. No radioactive material passes through its multichannel peristaltic pump (which remains outside the glovebox, uncontaminated); samples are instead handled by a 6-port, 2-position chromatography-type loop valve. An accompanying computer is programmed in QuickBASIC 4.5 to provide both instrument control and data reduction. The program is user-friendly, and communication between operator and instrument is via computer screen displays and keyboard. Two important issues which have been addressed are waste minimization and operator safety (the analyzer can run in the absence of an operator, once its autosampler has been loaded).

  6. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Dal Corso, Alessandro; Nielsen, Jannik Boll

    2017-01-01

    Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content...

  7. Analyzing Water's Optical Absorption

    Science.gov (United States)

    2002-01-01

    A cooperative agreement between World Precision Instruments (WPI), Inc., and Stennis Space Center has led to the UltraPath(TM) device, which provides a more efficient method for analyzing the optical absorption of water samples at sea. UltraPath is a unique, high-performance absorbance spectrophotometer with user-selectable light path lengths. It is an ideal tool for any study requiring precise and highly sensitive spectroscopic determination of analytes, either in the laboratory or the field. As a low-cost, rugged, and portable system capable of high-sensitivity measurements in widely divergent waters, UltraPath will help scientists examine the role that coastal ocean environments play in the global carbon cycle. UltraPath(TM) and LWCC(TM) are trademarks of World Precision Instruments, Inc.
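    The sensitivity gain from a longer, user-selectable light path follows directly from the Beer-Lambert law, A = ε·l·c: absorbance grows linearly with path length l. A minimal sketch with hypothetical numbers (not WPI specifications):

```python
# Beer-Lambert sketch: absorbance A = epsilon * l * c, so a longer optical
# path l raises the measured absorbance for the same concentration c.
# All numbers below are hypothetical, not WPI specifications.

def concentration(absorbance, epsilon, path_cm):
    """Invert Beer-Lambert: c = A / (epsilon * l), giving mol/L."""
    return absorbance / (epsilon * path_cm)

eps = 1.0e4                # hypothetical molar absorptivity, L/(mol*cm)
c = 2.0e-9                 # 2 nM analyte
a_short = eps * 1.0 * c    # 1 cm cuvette: 2e-5 AU, near the noise floor
a_long = eps * 50.0 * c    # 50 cm waveguide cell: 1e-3 AU, measurable
```

    A 50x longer path yields a 50x larger absorbance signal for the same dilute sample, which is why long-path cells suit trace analytes in natural waters.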

  8. PDA: Pooled DNA analyzer

    Directory of Open Access Journals (Sweden)

    Lin Chin-Yu

    2006-04-01

    Full Text Available Abstract Background Association mapping using abundant single nucleotide polymorphisms is a powerful tool for identifying disease susceptibility genes for complex traits and exploring possible genetic diversity. Genotyping large numbers of SNPs individually is performed routinely but is cost prohibitive for large-scale genetic studies. DNA pooling is a reliable and cost-saving alternative genotyping method. However, no software has been developed for complete pooled-DNA analyses, including data standardization, allele frequency estimation, and single/multipoint DNA pooling association tests. This motivated the development of the software 'PDA' (Pooled DNA Analyzer), to analyze pooled DNA data. Results We developed the software PDA for the analysis of pooled-DNA data. PDA was originally implemented in the MATLAB® language, but it can also be executed on a Windows system without a MATLAB® installation. PDA provides estimates of the coefficient of preferential amplification and allele frequency. PDA considers an extended single-point association test, which can compare allele frequencies between two DNA pools constructed under different experimental conditions. Moreover, PDA also provides novel chromosome-wide multipoint association tests based on p-value combinations and a sliding-window concept. This new multipoint testing procedure overcomes a computational bottleneck of conventional haplotype-oriented multipoint methods in DNA pooling analyses and can handle data sets having a large pool size and/or large numbers of polymorphic markers. All of the PDA functions are illustrated in four bona fide examples. Conclusion PDA is simple to operate and does not require that users have a strong statistical background. The software is available at http://www.ibms.sinica.edu.tw/%7Ecsjfann/first%20flow/pda.htm.
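    The correction for preferential amplification that PDA estimates can be illustrated with a common estimator from the pooled-DNA literature. This is a sketch of the general idea; PDA's exact formulas may differ:

```python
def corrected_allele_freq(h_a, h_b, k):
    """Estimate allele-A frequency in a DNA pool from signal intensities.

    h_a, h_b : allele-specific peak heights/intensities measured on the pool
    k        : coefficient of preferential amplification, estimated from
               heterozygous individuals as the mean of h_a / h_b

    A common estimator from the pooled-DNA literature, shown for
    illustration; not necessarily PDA's exact formula.
    """
    return h_a / (h_a + k * h_b)

# Uncorrected vs corrected estimate when allele A amplifies 1.3x better:
h_a, h_b, k = 130.0, 100.0, 1.3
naive = h_a / (h_a + h_b)                        # biased upward (~0.565)
corrected = corrected_allele_freq(h_a, h_b, k)   # 0.5, bias removed
```

    Dividing out k recovers the true 50:50 allele ratio that the raw peak heights distort.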

  9. Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool

    Science.gov (United States)

    2016-02-26

    Record consists of a 59 MDW Form 3039 professional presentation approval, dated 26 FEB 2016, for the paper entitled "Crime Scene Investigation: Clinical Application of Chemical Shift Imaging as a Problem Solving Tool"; no abstract is included in the record.

  10. The effect of distraction on change detection in crowded acoustic scenes

    OpenAIRE

    Petsas, Theofilos; Harrison, Jemma; Kashino, Makio; Furukawa, Shigeto; Chait, Maria

    2016-01-01

    In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli are artificial acoustic "scenes" composed of several (up to twelve) concurrent tone-pip streams ("sources"). A gap (1000 ms) is inserted partway through the "scene"; changes, in the form of an appearance of a new source or disappearance of an existing source, occur after the gap in 50% of the trials. Listeners were instructed to moni...

  11. Iranian Audience Poll on Smoking Scenes in Persian Movies in 2011

    OpenAIRE

    Heydari, Gholamreza

    2014-01-01

    Background: Scenes depicting smoking are among the causes of smoking initiation in youth. The present study was the first in Iran to collect primary information regarding the presence of smoking scenes in movies and the propagation of tobacco use. Methods: This cross-sectional study was conducted by polling audiences about smoking scenes in Persian movies shown in theaters in 2011. Data were collected using a questionnaire, and a total of 2000 subjects were selected for questioning. The questioning...

  12. Learning from Academic Tasks.

    Science.gov (United States)

    Marx, Ronald W.; Walsh, John

    1988-01-01

    Offers a descriptive theory of the nature of classroom tasks. Describes the interplay among (1) the conditions under which tasks are set; (2) the cognitive plans students use to accomplish tasks; and (3) the products students create as a result of their task-related efforts. (SKC)

  13. Accumulating and remembering the details of neutral and emotional natural scenes.

    Science.gov (United States)

    Melcher, David

    2010-01-01

    In contrast to our rich sensory experience with complex scenes in everyday life, the capacity of visual working memory is thought to be quite limited. Here, memory for the details of naturalistic scenes was examined as a function of display duration, emotional valence of the scene, and delay before test. Individual differences in working memory and long-term memory for pictorial scenes were examined in experiment 1. The accumulation of memory for emotional scenes and the retention of these details in long-term memory were investigated in experiment 2. Although there were large individual differences in performance, memory for scene details generally exceeded the traditional working memory limit within a few seconds. Information about positive scenes was learned most quickly, while negative scenes showed the worst memory for details. The overall pattern of results was consistent with the idea that both short-term and long-term representations are mixed together in a medium-term 'online' memory for scenes.

  14. Seek and you shall remember: Scene semantics interact with visual search to build better memories

    Science.gov (United States)

    Draschkow, Dejan; Wolfe, Jeremy M.; Võ, Melissa L.-H.

    2014-01-01

    Memorizing critical objects and their locations is an essential part of everyday life. In the present study, incidental encoding of objects in naturalistic scenes during search was compared to explicit memorization of those scenes. To investigate if prior knowledge of scene structure influences these two types of encoding differently, we used meaningless arrays of objects as well as objects in real-world, semantically meaningful images. Surprisingly, when participants were asked to recall scenes, their memory performance was markedly better for searched objects than for objects they had explicitly tried to memorize, even though participants in the search condition were not explicitly asked to memorize objects. This finding held true even when objects were observed for an equal amount of time in both conditions. Critically, the recall benefit for searched over memorized objects in scenes was eliminated when objects were presented on uniform, non-scene backgrounds rather than in a full scene context. Thus, scene semantics not only help us search for objects in naturalistic scenes, but appear to produce a representation that supports our memory for those objects beyond intentional memorization. PMID:25015385

  15. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    Science.gov (United States)

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
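    The two time-domain markers used in the study, RMSSD and SDRR, are simple statistics over the series of R-R intervals. A minimal sketch (the R-R values below are illustrative, not study data):

```python
import math

def rmssd(rr_ms):
    """Root mean square of successive differences of R-R intervals (ms)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def sdrr(rr_ms):
    """Standard deviation of the R-R intervals themselves (ms).

    Population form (divide by n) for simplicity; analysis software may
    use the sample form (n - 1).
    """
    mean = sum(rr_ms) / len(rr_ms)
    return math.sqrt(sum((r - mean) ** 2 for r in rr_ms) / len(rr_ms))

# Illustrative R-R series (ms); higher RMSSD reflects greater beat-to-beat
# (parasympathetically mediated) variability.
rr = [812, 845, 790, 860, 805, 838]
print(round(rmssd(rr), 1), round(sdrr(rr), 1))   # -> 51.2 24.5
```

    In the study's terms, a larger post-stressor RMSSD after viewing nature scenes indicates stronger parasympathetic recovery.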

  16. Reconstruction of 3D scenes from sequences of images

    Science.gov (United States)

    Niu, Bei; Sang, Xinzhu; Chen, Duo; Cai, Yuanfa

    2013-08-01

    Reconstruction of three-dimensional (3D) scenes is an active research topic in the fields of computer vision and 3D display, and modeling 3D objects rapidly and effectively remains a challenge. A 3D model can be extracted from multiple images: the system only requires a sequence of images taken with a camera whose parameters need not be known, which provides a high degree of flexibility. We focus on quickly merging point clouds of the object from depth map sequences. The system combines algorithms from different areas of computer vision, such as camera calibration, stereo correspondence, point cloud splicing, and surface reconstruction, and the 3D reconstruction procedure is decomposed into a number of successive steps. First, image sequences are acquired by a camera moving freely around the object. Pairwise matching is realized with the Scale Invariant Feature Transform (SIFT) algorithm, and an initial matching is made for the first two images of the sequence. For each subsequent image, processed together with the previous one, the points of interest corresponding to those in the previous images are refined or corrected, and the vertical parallax between the images is eliminated. The next step is camera calibration, in which the intrinsic and external parameters of the camera are calculated, yielding the relative position and orientation of the camera. A sequence of depth maps is then acquired using a non-local cost aggregation method for stereo matching, and a point cloud sequence is obtained from the scene depths and merged into a point cloud model using the external parameters of the camera. The point cloud model is then approximated by a triangular wire-frame mesh to reduce geometric complexity and to tailor the model to the requirements of computer graphics visualization systems. Finally, the texture is mapped onto the wire-frame model, which can also be used for 3...
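    The geometric core of such a pipeline — recovering a 3D point from its matched projections in two calibrated views — can be sketched with linear (DLT) triangulation. The camera parameters below are invented for illustration and are not from the paper:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one scene point from two views.

    P1, P2   : 3x4 camera projection matrices (intrinsics @ extrinsics)
    uv1, uv2 : matched pixel coordinates of the same point in each view
    Returns the 3D point in world coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    A = np.array([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)       # null vector of A = homogeneous point
    X = vt[-1]
    return X[:3] / X[3]

# Toy setup: identical intrinsics, second camera shifted 1 unit along x.
K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])
X_true = np.array([0.2, -0.1, 4.0])
h1 = P1 @ np.append(X_true, 1)
h2 = P2 @ np.append(X_true, 1)
uv1, uv2 = h1[:2] / h1[2], h2[:2] / h2[2]
point = triangulate(P1, P2, uv1, uv2)   # recovers approx. [0.2, -0.1, 4.0]
```

    With noiseless matches the DLT solution is exact to machine precision; in a real pipeline the SIFT matches are noisy and a bundle-adjustment refinement typically follows.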

  17. PULSE HEIGHT ANALYZER

    Science.gov (United States)

    Johnstone, C.W.

    1958-01-21

    An anticoincidence device is described for a pair of adjacent channels of a multi-channel pulse height analyzer for preventing the lower channel from generating a count pulse in response to an input pulse when the input pulse has sufficient magnitude to reach the upper level channel. The anticoincidence circuit comprises a window amplifier, upper and lower level discriminators, and a biased-off amplifier. The output of the window amplifier is coupled to the inputs of the discriminators, the output of the upper level discriminator is connected to the resistance end of a series R-C network, the output of the lower level discriminator is coupled to the capacitance end of the R-C network, and the grid of the biased-off amplifier is coupled to the junction of the R-C network. In operation each discriminator produces a negative pulse output when the input pulse traverses its voltage setting. As a result of the connections to the R-C network, a trigger pulse will be sent to the biased-off amplifier when the incoming pulse level is sufficient to trigger only the lower level discriminator.
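    The anticoincidence rule — count a pulse in a channel only when it crosses that channel's lower discriminator but not the next one up — can be mimicked in a few lines of software. This is a behavioural sketch of the channel logic, not a model of the patented circuit:

```python
def channel_counts(pulse_heights, thresholds):
    """Sort pulses into channels bounded by ascending discriminator levels.

    A pulse is counted in a channel only if it crosses that channel's lower
    discriminator but NOT the next one up - the software analogue of the
    anticoincidence behaviour the patent implements in hardware.
    """
    counts = [0] * len(thresholds)
    for h in pulse_heights:
        for i, lower in enumerate(thresholds):
            upper = thresholds[i + 1] if i + 1 < len(thresholds) else float("inf")
            if lower <= h < upper:
                counts[i] += 1
                break
    return counts

# Pulses of 1.2, 2.7 and 3.1 land in successive channels; 0.4 is below
# every discriminator and is not counted at all.
print(channel_counts([1.2, 2.7, 3.1, 0.4], thresholds=[1.0, 2.0, 3.0]))
```

    Without the anticoincidence veto, the 3.1 pulse would be counted in all three channels instead of only the topmost one it reaches.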

  18. Analyzing Spacecraft Telecommunication Systems

    Science.gov (United States)

    Kordon, Mark; Hanks, David; Gladden, Roy; Wood, Eric

    2004-01-01

    Multi-Mission Telecom Analysis Tool (MMTAT) is a C-language computer program for analyzing proposed spacecraft telecommunication systems. MMTAT utilizes parameterized input and computational models that can be run on standard desktop computers to perform fast and accurate analyses of telecommunication links. MMTAT is easy to use and can easily be integrated with other software applications and run as part of almost any computational simulation. It is distributed as either a stand-alone application program with a graphical user interface or a linkable library with a well-defined set of application programming interface (API) calls. As a stand-alone program, MMTAT provides both textual and graphical output. The graphs make it possible to understand, quickly and easily, how telecommunication performance varies with variations in input parameters. A delimited text file that can be read by any spreadsheet program is generated at the end of each run. The API in the linkable-library form of MMTAT enables the user to control simulation software and to change parameters during a simulation run. Results can be retrieved either at the end of a run or by use of a function call at any time step.
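    At the heart of any telecom link analysis of this kind is the link-budget equation: received power is transmitted power plus antenna gains minus path loss. A minimal sketch with free-space path loss only; all numbers are illustrative and are not MMTAT models or defaults:

```python
import math

def fspl_db(distance_m, freq_hz):
    """Free-space path loss in dB: 20*log10(4*pi*d*f/c)."""
    c = 299_792_458.0
    return 20 * math.log10(4 * math.pi * distance_m * freq_hz / c)

def received_power_dbm(eirp_dbm, rx_gain_dbi, distance_m, freq_hz):
    """Basic link-budget line: Prx = EIRP + Grx - FSPL.

    Real link analyses also subtract pointing, polarization, and
    atmospheric losses, which are omitted in this sketch.
    """
    return eirp_dbm + rx_gain_dbi - fspl_db(distance_m, freq_hz)

# Illustrative X-band deep-space numbers (hypothetical, not from MMTAT):
prx = received_power_dbm(eirp_dbm=90.0, rx_gain_dbi=74.0,
                         distance_m=2.0e11, freq_hz=8.4e9)
print(round(fspl_db(2.0e11, 8.4e9), 1), round(prx, 1))
```

    Comparing such a received-power figure against the receiver's sensitivity threshold gives the link margin, the quantity a tool like MMTAT plots as input parameters vary.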

  19. Explicit goal-driven attention, unlike implicitly learned attention, spreads to secondary tasks.

    Science.gov (United States)

    Addleman, Douglas A; Tao, Jinyi; Remington, Roger W; Jiang, Yuhong V

    2018-03-01

    To what degree does spatial attention for one task spread to all stimuli in the attended region, regardless of task relevance? Most models imply that spatial attention acts through a unitary priority map in a task-general manner. We show that implicit learning, unlike endogenous spatial cuing, can bias spatial attention within one task without biasing attention to a spatially overlapping secondary task. Participants completed a visual search task superimposed on a background containing scenes, which they were told to encode for a later memory task. Experiments 1 and 2 used explicit instructions to bias spatial attention to one region for visual search; Experiment 3 used location probability cuing to implicitly bias spatial attention. In location probability cuing, a target appeared in one region more than others despite participants not being told of this. In all experiments, search performance was better in the cued region than in uncued regions. However, scene memory was better in the cued region only following endogenous guidance, not after implicit biasing of attention. These data support a dual-system view of top-down attention that dissociates goal-driven and implicitly learned attention. Goal-driven attention is task general, amplifying processing of a cued region across tasks, whereas implicit statistical learning is task-specific. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  20. The hippocampus plays a role in the recognition of visual scenes presented at behaviorally relevant points in time: evidence from amnestic mild cognitive impairment (aMCI) and healthy controls.

    Science.gov (United States)

    Szamosi, András; Levy-Gigi, Einat; Kelemen, Oguz; Kéri, Szabolcs

    2013-01-01

    When people perform an attentionally demanding target task at fixation, they also encode the surrounding visual environment, which serves as a context of the task. Here, we examined the role of the hippocampus in memory for target and context. Thirty-five patients with amnestic mild cognitive impairment (aMCI) and 35 healthy controls matched for age, gender, and education participated in the study. Participants completed visual letter detection and auditory tone discrimination target tasks, while also viewing a series of briefly presented urban and natural scenes. For the measurement of hippocampal and cerebral cortical volume, we utilized the FreeSurfer protocol using a Siemens Trio 3 T scanner. Before the quantification of brain volumes, hippocampal atrophy was confirmed by visual inspection in each patient. Results revealed intact letter recall and tone discrimination performances in aMCI patients, whereas they showed severe impairments in the recognition of scenes presented together with the targets. Patients with aMCI showed bilaterally reduced hippocampal volumes, but intact cortical volume, as compared with the controls. In controls and in the whole sample, hippocampal volume was positively associated with scene recognition when a target task was present. This relationship was observed in both visual and auditory conditions. Scene recognition and target tasks were not associated with executive functions. These results suggest that the hippocampus plays an essential role in the formation of memory traces of the visual environment when people concurrently perform a target task at behaviorally relevant points in time. Copyright © 2012 Elsevier Ltd. All rights reserved.

  1. Attentional Bias towards Emotional Scenes in Boys with Attention Deficit Hyperactivity Disorder

    Directory of Open Access Journals (Sweden)

    Ebrahim Pishyareh

    2012-06-01

    Full Text Available Objective: Children with attention-deficit / hyperactivity disorder (ADHD react explosively and inappropriately to emotional stimuli. It could be hypothesized that these children have some impairment in attending to emotional cues. Based on this hypothesis, we conducted this study to evaluate visual directions of children with ADHD towards paired emotional scenes.Method: thirty boys between the ages of 6 and 11 years diagnosed with ADHD were compared with 30 age-matched normal boys. All participants were presented paired emotional and neutral scenes in the four following categories: pleasant-neutral; pleasant-unpleasant; unpleasant-neutral; and neutral – neutral. Meanwhile, their visual orientations towards these pictures were evaluated using the eye tracking system. The number and duration of first fixation and duration of first gaze were compared between the two groups using the MANOVA analysis. The performance of each group in different categories was also analyzed using the Friedman test.Results: With regards to duration of first gaze, which is the time taken to fixate on a picture before moving to another picture, ADHD children spent less time on pleasant pictures compared to normal group ,while they were looking at pleasant – neutral and unpleasant – pleasant pairs. The duration of first gaze on unpleasant pictures was higher while children with ADHD were looking at unpleasant – neutral pairs (P<0.01.Conclusion: based on the findings of this study it could be concluded that children with ADHD attend to unpleasant conditions more than normal children which leads to their emotional reactivity.

  2. Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis

    Science.gov (United States)

    Büchler, Michael; Allegro, Silvia; Launer, Stefan; Dillier, Norbert

    2005-12-01

    A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features that are inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as Bayes classifier, neural network, and hidden Markov model. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
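    Of the classifiers compared, the minimum-distance classifier is the simplest: assign a sound the class whose training centroid lies nearest in feature space. A toy sketch — the feature names and values are invented for illustration, not the paper's feature set:

```python
import math

def train_centroids(labelled_features):
    """Mean feature vector per class ('speech', 'noise', ...)."""
    sums, counts = {}, {}
    for label, vec in labelled_features:
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [s / counts[lab] for s in acc] for lab, acc in sums.items()}

def classify(vec, centroids):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(centroids, key=lambda lab: math.dist(vec, centroids[lab]))

# Hypothetical 2-D features, e.g. (modulation depth, harmonicity):
train = [("speech", (0.9, 0.7)), ("speech", (0.8, 0.6)),
         ("noise",  (0.2, 0.1)), ("noise",  (0.3, 0.2))]
cents = train_centroids(train)
print(classify((0.85, 0.65), cents))   # -> speech
```

    The paper's finding — that such simple classifiers compete with HMMs on most classes but struggle with "speech in noise" — makes sense here: that class sits between the speech and noise centroids in feature space.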

  3. "Undoing" (or Symbolic Reversal) at Homicide Crime Scenes.

    Science.gov (United States)

    Russell, Maria; Schlesinger, Louis B; Leon, Maria; Holdren, Samantha

    2018-03-01

    A closed case file review of a nonrandom national sample of 975 homicides disclosed 11 cases (1.13%) of undoing, wherein offenders engaged in crime scene behavior that has been considered an attempt to symbolically reverse the murder. The frequency of the various methods of undoing involved the use of blankets to cover the victim's body (55%), positioning the body (55%), use of a bed or couch (42%), washing the body (36%), using pillows (36%), as well as removing clothing and adding other types of adornments (27%). Ten of the 11 offenders were male, and one was female; all 12 victims were female. Ten of the 12 victims were family members or relationship intimates. These findings are consistent with prior reports which concluded that the motivation for undoing behavior is an attempt to compensate for guilt or remorse for having committed the homicide. © 2017 American Academy of Forensic Sciences.

  4. Dynamic infrared scene projectors based upon the DMD

    Science.gov (United States)

    Beasley, D. Brett; Bender, Matt; Crosby, Jay; Messer, Tim

    2009-02-01

    The Micromirror Array Projector System (MAPS) is an advanced dynamic scene projector system developed by Optical Sciences Corporation (OSC) for Hardware-In-the-Loop (HWIL) simulation and sensor test applications. The MAPS is based upon the Texas Instruments Digital Micromirror Device (DMD) which has been modified to project high resolution, realistic imagery suitable for testing sensors and seekers operating in the UV, visible, NIR, and IR wavebands. Since the introduction of the first MAPS in 2001, OSC has continued to improve the technology and develop systems for new projection and Electro-Optical (E-O) test applications. This paper reviews the basic MAPS design and performance capabilities. We also present example projectors and E-O test sets designed and fabricated by OSC in the last 7 years. Finally, current research efforts and new applications of the MAPS technology are discussed.

  5. ADULT BASIC LIFE SUPPORT ON NEAR DROWNING AT THE SCENE

    Directory of Open Access Journals (Sweden)

    Gd. Harry Kurnia Prawedana

    2013-04-01

    Full Text Available Indonesia is a popular tourist destination which has potential for drowning cases. Therefore, required knowledge of adult basic life support to be able to deal with such cases in the field. Basic life support in an act to maintain airway and assist breathing and circulation without the use of tools other than simple breathing aids. The most important factor that determines the outcome of drowning event is the duration and severity of hypoxia induced. The management of near drowning at the scene include the rescue of victim from the water, rescue breathing, chest compression, cleaning the vomit substances which allowing blockage of the airway, prevent loss of body heat, and transport the victim to nearest emergency department for evaluation and monitoring.

  6. An effective method of locating license plates in complex scenes

    Science.gov (United States)

    Ling, Jianing; Xie, Mei

    2013-03-01

    License plate recognition systems (LPRS) are one of the most important parts of the intelligent transportation system (ITS), and license plate location is the most important step of the LPRS; it directly affects the performance of the subsequent character segmentation and recognition. In this paper, an effective algorithm for license plate location is proposed. In this method, we first obtain the high-frequency coefficients through a 1-D discrete wavelet transform. Then we process the image with a median filter, binarization, and morphology operations. Finally, we label and record the connected regions, and locate the candidate license plates according to the region information. Experiments proved that our method performs well in long-range and complex scenes and is robust.
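    The wavelet step can be illustrated with a one-level Haar transform on a single image row: the high-contrast character strokes of a plate produce large high-frequency (detail) coefficients, while plain background does not. A sketch with made-up pixel values:

```python
def haar_detail(signal):
    """One-level 1-D Haar wavelet detail (high-frequency) coefficients.

    Vertical character strokes show up as large detail values within an
    image row - a sketch of the transform step only, not the full locator.
    """
    return [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]

# A row crossing high-contrast character strokes vs a flat background row:
edge_row = [10, 200, 15, 190, 12, 205, 11, 198]
flat_row = [120, 121, 119, 120, 122, 121, 120, 119]
print(max(abs(c) for c in haar_detail(edge_row)))   # large (plate-like)
print(max(abs(c) for c in haar_detail(flat_row)))   # near zero (background)
```

    Thresholding these detail coefficients (the binarization step) leaves candidate plate regions for the subsequent morphology and connected-region analysis.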

  7. Complete Scene Recovery and Terrain Classification in Textured Terrain Meshes

    Directory of Open Access Journals (Sweden)

    Kyhyun Um

    2012-08-01

    Full Text Available Terrain classification allows a mobile robot to create an annotated map of its local environment from the three-dimensional (3D and two-dimensional (2D datasets collected by its array of sensors, including a GPS receiver, gyroscope, video camera, and range sensor. However, parts of objects that are outside the measurement range of the range sensor will not be detected. To overcome this problem, this paper describes an edge estimation method for complete scene recovery and complete terrain reconstruction. Here, the Gibbs-Markov random field is used to segment the ground from 2D videos and 3D point clouds. Further, a masking method is proposed to classify buildings and trees in a terrain mesh.

  8. Text Line Detection from Rectangle Traffic Panels of Natural Scene

    Science.gov (United States)

    Wang, Shiyuan; Huang, Linlin; Hu, Jian

    2018-01-01

    Traffic sign detection and recognition is very important for intelligent transportation. Among traffic signs, the traffic panel contains rich information. However, due to low resolution and blur in rectangular traffic panels, it is difficult to extract the characters and symbols. In this paper, we propose a coarse-to-fine method to detect Chinese characters on traffic panels in natural scenes. Given a traffic panel, color quantization is first applied to extract candidate regions of Chinese characters. Second, a learning-based multi-stage filter is applied to discard non-character regions. Third, we aggregate the characters into text lines with a distance metric learning method. Experimental results on real traffic images from Baidu Street View demonstrate the effectiveness of the proposed method.
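    The color-quantization step can be sketched with plain k-means over pixel colours, which reduces a panel image to a small palette so character-coloured regions can be isolated. This is an illustration only; the paper does not specify its exact quantization algorithm:

```python
import numpy as np

def kmeans_quantize(pixels, k, iters=20):
    """Quantize N x 3 RGB pixels to k palette colours with plain k-means.

    Deterministic, spread-out initialization keeps the sketch reproducible.
    """
    init = np.linspace(0, len(pixels) - 1, k).astype(int)
    centres = pixels[init].astype(float)
    for _ in range(iters):
        dists = np.linalg.norm(pixels[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)          # nearest palette entry per pixel
        for j in range(k):
            if np.any(labels == j):
                centres[j] = pixels[labels == j].mean(axis=0)
    return centres, labels

# Two obvious colour clusters (reddish, bluish) -> two palette entries:
px = np.array([[250, 10, 10], [240, 20, 15], [10, 10, 240], [20, 5, 250]], float)
centres, labels = kmeans_quantize(px, k=2)
```

    Pixels sharing a palette label then form the candidate character regions handed to the multi-stage filter.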

  9. Scenes of shame, social roles, and the play with masks

    DEFF Research Database (Denmark)

    Welz, Claudia

    2014-01-01

    This article explores various scenes of shame, raising the questions of what shame discloses about the self and how this self-disclosure takes place. Thereby, the common idea that shame discloses the self’s debasement will be challenged. The dramatic dialectics of showing and hiding display a much more ambiguous, dynamic self-image as the result of an interactive evaluation of oneself by oneself and others. Seeing oneself seen contributes to the sense of who one becomes. From being absorbed in what one does, one might suddenly become self-aware, shift viewpoints and feel pressed to put on masks. In putting on a mask, one relates to oneself in distancing oneself from oneself. In being at once a moral agent and a performing actor with an audience and norms in mind, one embodies and transcends the social roles one takes. In addition to the feeling of shame, in which the self finds itself passively...

  10. Behind the scenes of GS: a DSO like no other

    CERN Multimedia

    Antonella Del Rosso

    2014-01-01

    At CERN, Departmental Safety Officers (DSOs) are responsible for making the members of their department aware of safety issues. They’re our first point of call every time a problem arises relating to environmental matters or the safety of people and installations. In GS, this role is even more crucial as the Department’s activities are scattered across the Laboratory and affect everyone.   As we have pointed out in our article series “Behind the scenes of GS”, the GS Department is responsible for the construction, renovation and maintenance of buildings and related technical infrastructures. The latter include heating and toilet facilities; detection and alarm systems; the management of the hotels, stores, stocks, shuttle services and mail; and the development of technical and administrative databases. The activities of the Medical Service and the Fire and Rescue Service also come under the umbrella of GS, as do the many other daily activities that are pa...

  11. Infrared imaging of the crime scene: possibilities and pitfalls.

    Science.gov (United States)

    Edelman, Gerda J; Hoveling, Richelle J M; Roos, Martin; van Leeuwen, Ton G; Aalders, Maurice C G

    2013-09-01

    All objects radiate infrared energy invisible to the human eye, which can be imaged by infrared cameras, visualizing differences in temperature and/or emissivity of objects. Infrared imaging is an emerging technique for forensic investigators. The rapid, nondestructive, and noncontact features of infrared imaging indicate its suitability for many forensic applications, ranging from the estimation of time of death to the detection of blood stains on dark backgrounds. This paper provides an overview of the principles and instrumentation involved in infrared imaging. Difficulties concerning the image interpretation due to different radiation sources and different emissivity values within a scene are addressed. Finally, reported forensic applications are reviewed and supported by practical illustrations. When introduced in forensic casework, infrared imaging can help investigators to detect, to visualize, and to identify useful evidence nondestructively. © 2013 American Academy of Forensic Sciences.
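
    The emissivity pitfall mentioned above can be made concrete with a grey-body model: if the camera is configured with the wrong emissivity, the reported temperature is skewed. This is a simplification for illustration (reflected ambient radiation and atmospheric effects are ignored; it is not a method from the paper).

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)

def apparent_temperature(true_temp_k, true_emissivity, assumed_emissivity):
    """Temperature a camera would report if the target radiates as a grey
    body with true_emissivity but the camera assumes assumed_emissivity."""
    radiance = true_emissivity * SIGMA * true_temp_k ** 4
    return (radiance / (assumed_emissivity * SIGMA)) ** 0.25
```

    For example, a matte surface near 300 K (emissivity about 0.95) read with a bare-metal setting of about 0.3 would appear roughly a third hotter in Kelvin, which is why emissivity differences within one scene complicate interpretation.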

  12. Classification of visual and linguistic tasks using eye-movement features.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
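
    A toy illustration of the classification idea, not the classifiers or features of the study itself: synthetic trials in which tasks differ mainly in initiation time are separated by a nearest-centroid rule on z-scored eye-movement features. All numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented per-trial eye-movement features:
# [initiation time (ms), mean fixation duration (ms), saccade amplitude (deg)]
def simulate(task_mean, n=50):
    return rng.normal(task_mean, [20.0, 15.0, 0.5], size=(n, 3))

# Synthetic training data; tasks differ mainly in initiation time,
# the single most diagnostic feature reported in the study.
train = {
    "search":      simulate([250.0, 180.0, 4.0]),
    "naming":      simulate([400.0, 220.0, 3.0]),
    "description": simulate([550.0, 260.0, 2.5]),
}

# Nearest-centroid classifier on z-scored features.
X = np.vstack(list(train.values()))
mu, sd = X.mean(axis=0), X.std(axis=0)
centroids = {t: ((f - mu) / sd).mean(axis=0) for t, f in train.items()}

def classify(features):
    z = (np.asarray(features, dtype=float) - mu) / sd
    return min(centroids, key=lambda t: np.linalg.norm(z - centroids[t]))
```

    The actual study trained more capable classifiers on a richer feature set, but the same principle applies: tasks become separable once their feature distributions differ systematically.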

  13. Spatial biases in understanding descriptions of static scenes: the role of reading and writing direction.

    Science.gov (United States)

    Román, Antonio; El Fathi, Abderrahman; Santiago, Julio

    2013-05-01

    Prior studies on reasoning tasks have shown lateral spatial biases on mental model construction, which converge with known spatial biases in the mental representation of number, time, and events. The latter have been shown to be related to habitual reading and writing direction. The present study bridges and extends both research strands by looking at the processes of mental model construction in language comprehension and examining how they are influenced by reading and writing direction. Sentences like "the table is between the lamp and the TV" were auditorily presented to groups of mono- and bidirectional readers in languages with left-to-right or right-to-left scripts, and participants were asked to draw the described scene. There was a clear preference for deploying the lateral objects in the direction marked by the script of the input language and some hints of a much smaller effect of the degree of practice with the script. These lateral biases occurred in the context of universal strategies for working memory management.

  14. Sensory and cognitive contributions of color to the recognition of natural scenes.

    Science.gov (United States)

    Gegenfurtner, K R; Rieger, J

    2000-06-29

    Although color plays a prominent part in our subjective experience of the visual world, the evolutionary advantage of color vision is still unclear [1] [2], with most current answers pointing towards specialized uses, for example to detect ripe fruit amongst foliage [3] [4] [5] [6]. We investigated whether color has a more general role in visual recognition by looking at the contribution of color to the encoding and retrieval processes involved in pattern recognition [7] [8] [9]. Recognition accuracy was higher for color images of natural scenes than for luminance-matched black and white images, and color information contributed to both components of the recognition process. Initially, color leads to an image-coding advantage at the very early stages of sensory processing, most probably by easing the image-segmentation task. Later, color leads to an advantage in retrieval, presumably as the result of an enhanced image representation in memory due to the additional attribute. Our results ascribe color vision a general role in the processing of visual form, starting at the very earliest stages of analysis: color helps us to recognize things faster and to remember them better.

  15. Optimized 3D Street Scene Reconstruction from Driving Recorder Images

    Directory of Open Access Journals (Sweden)

    Yongjun Zhang

    2015-07-01

    Full Text Available The paper presents an automatic region-detection-based method to reconstruct street scenes from driving recorder images. The driving recorder in this paper is a dashboard camera that collects images while the motor vehicle is moving. An enormous number of moving vehicles are included in the collected data because the typical recorders are often mounted in the front of moving vehicles and face the forward direction, which can make matching points on vehicles and guardrails unreliable. Believing that utilizing these image data can reduce street scene reconstruction and updating costs because of their low price, wide use, and extensive shooting coverage, we therefore propose a new method, called the Mask automatic detecting method, to improve the structure-from-motion reconstruction results. Note that we define vehicle and guardrail regions as the “mask” in this paper, since the features on them should be masked out to avoid poor matches. After removing the masked feature points with our new method, the camera poses and sparse 3D points are reconstructed from the remaining matches. Our comparison experiments with typical structure-from-motion (SfM) reconstruction pipelines, such as Photosynth and VisualSFM, demonstrated that the Mask decreased the root-mean-square error (RMSE) of the pairwise matching results, which led to more accurate recovery of the relative camera poses. Removing the Mask features also increased the accuracy of the point clouds by nearly 30%–40% and corrected the typical methods' problem of repeatedly reconstructing several buildings when there was only one target building.
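
    The core filtering step, dropping feature matches that land on detected vehicle or guardrail regions before pose estimation, can be sketched as below. The representation (binary masks plus pixel-coordinate match tuples) is an assumption for illustration, not the paper's data format.

```python
import numpy as np

def filter_matches(matches, mask_a, mask_b):
    """Drop feature matches whose endpoints fall inside detected 'mask'
    regions (vehicles, guardrails) in either image.

    matches: iterable of (xa, ya, xb, yb) pixel coordinates.
    mask_a, mask_b: boolean arrays, True where features must be discarded."""
    m = np.asarray(matches, dtype=int)
    in_a = mask_a[m[:, 1], m[:, 0]]  # index as [row, col] = [y, x]
    in_b = mask_b[m[:, 3], m[:, 2]]
    return m[~(in_a | in_b)]
```

    The surviving matches then feed the usual SfM stages (essential-matrix estimation, pose recovery, triangulation) untouched, which is what makes the idea easy to bolt onto existing pipelines.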

  16. Recognition memory for colored and black-and-white scenes in normal and color deficient observers (dichromats).

    Science.gov (United States)

    Brédart, Serge; Cornet, Alyssa; Rakic, Jean-Marie

    2014-01-01

    Color deficient (dichromat) and normal observers' recognition memory for colored and black-and-white natural scenes was evaluated through several parameters: the rate of recognition, discrimination (A'), response bias (B"D), response confidence, and the proportion of conscious recollections (Remember responses) among hits. At the encoding phase, 36 images of natural scenes were each presented for 1 sec. Half of the images were shown in color and half in black-and-white. At the recognition phase, these 36 pictures were intermixed with 36 new images. The participants' task was to indicate whether an image had been presented or not at the encoding phase, to rate their level of confidence in their response, and, in the case of a positive response, to classify the response as a Remember, a Know or a Guess response. Results indicated that accuracy, response discrimination, response bias and confidence ratings were higher for colored than for black-and-white images; this advantage for colored images was similar in both groups of participants. Rates of Remember responses were not higher for colored images than for black-and-white ones in either group. However, interestingly, Remember responses were significantly more often based on color information for colored than for black-and-white images in normal observers only, not in dichromats.

  17. Soft Decision Analyzer

    Science.gov (United States)

    Steele, Glen; Lansdowne, Chatwin; Zucha, Joan; Schlensinger, Adam

    2013-01-01

    The Soft Decision Analyzer (SDA) is an instrument that combines hardware, firmware, and software to perform real-time closed-loop end-to-end statistical analysis of single- or dual-channel serial digital RF communications systems operating in very low signal-to-noise conditions. As an innovation, the unique SDA capabilities allow it to perform analysis of situations where the receiving communication system slips bits due to low signal-to-noise conditions or experiences constellation rotations resulting in channel polarity inversions or channel assignment swaps. SDA's closed-loop detection allows it to instrument a live system and correlate observations with frame, codeword, and packet losses, as well as Quality of Service (QoS) and Quality of Experience (QoE) events. The SDA's abilities are not confined to performing analysis in low signal-to-noise conditions. Its analysis provides in-depth insight into a communication system's receiver performance in a variety of operating conditions. The SDA incorporates two techniques for identifying slips. The first is an examination of the received data stream's content relative to the transmitted data content, and the second is a direct examination of the receiver's recovered clock signals relative to a reference. Both techniques provide benefits in different ways and allow the communication engineer evaluating test results increased confidence in and understanding of receiver performance. Direct examination of data contents is performed by two different techniques, power correlation or a modified Massey correlation, and can be applied to soft decision data widths 1 to 12 bits wide over a correlation depth ranging from 16 to 512 samples. The SDA detects receiver bit slips within a 4-bit window and can handle systems with up to four quadrants (QPSK, SQPSK, and BPSK systems). The SDA continuously monitors correlation results to characterize slips and quadrant changes and is capable of performing analysis even when the
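
    The first slip-identification technique, correlating received data content against the transmitted reference at small offsets, can be illustrated with hard decisions; the real SDA works on 1-to-12-bit soft decisions and also examines recovered clocks, so this is only a conceptual sketch.

```python
import numpy as np

def detect_slip(reference, received, max_slip=4):
    """Estimate the bit-slip offset of `received` relative to `reference`
    by maximizing bit agreement over a +/- max_slip correlation window."""
    best_s, best_score = 0, -1.0
    for s in range(-max_slip, max_slip + 1):
        # Overlapping index range for this candidate offset.
        i0 = max(0, -s)
        i1 = min(len(received), len(reference) - s)
        if i1 <= i0:
            continue
        score = np.mean(received[i0:i1] == reference[i0 + s:i1 + s])
        if score > best_score:
            best_s, best_score = s, score
    return best_s, best_score
```

    A true slip shows up as near-perfect agreement at one nonzero offset, while random data agrees only about half the time at every other offset.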

  18. Paternal Effectiveness in a Selected Cognitive Task.

    Science.gov (United States)

    Acuff, Nancy Hamblen

    The immediate effectiveness of paternal instruction in a selected cognitive task was investigated. The sub-problems were (1) to compare paternal and maternal instruction, and (2) to analyze paternal instructional effectiveness with the son or the daughter. The cognitive task selected was the Goodenough-Harris Draw-A-Man Test. Subjects were 42…

  19. The Multinational Logistics Joint Task Force (MLJTF)

    National Research Council Canada - National Science Library

    Higginbotham, Matthew T

    2007-01-01

    In this monograph, by analyzing the UN, NATO and the US Army's evolving Modular Logistics Doctrine, the author integrates the key areas from each doctrine into a multinational logistics joint task force (MLJTF) organization...

  20. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with a high-resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in such a way, the multi-angle information is not effectively exploited, which is mainly due to the errors and difficulties of the multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly extrapolated by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)). (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. 
The experiments on ZY-3 multi-angle images confirm that the proposed
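
    Of the three levels, ADF-pixel is the simplest to illustrate: assuming the multi-angle views are co-registered, pixel-level angular differences can be formed as stacked absolute differences between view pairs. This is a sketch of the general idea, not the paper's exact formulation.

```python
import numpy as np

def adf_pixel(nadir, forward, backward):
    """Pixel-level angular difference features: absolute differences
    between each pair of co-registered multi-angle views, stacked as bands."""
    v = [np.asarray(a, dtype=float) for a in (nadir, forward, backward)]
    return np.stack([np.abs(v[0] - v[1]),
                     np.abs(v[0] - v[2]),
                     np.abs(v[1] - v[2])], axis=-1)
```

    Tall structures, which shift between viewing angles, yield large per-pixel differences, while flat ground yields small ones; the feature- and label-level ADFs build on the same comparison in feature and primitive-class space.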

  1. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real time automatic scene classifier within content-based video retrieval. In our envisioned approach end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  2. Research and Technology Development for Construction of 3d Video Scenes

    Science.gov (United States)

    Khlebnikova, Tatyana A.

    2016-06-01

    For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The issues regarding source data requirements and their capture and transfer to create 3D scenes have not been defined yet. The accuracy issues for 3D video scenes used for measuring purposes can hardly ever be found in publications. The practicability of developing, researching and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis applications for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes with regard to specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of the accuracy estimation of 3D video scenes are presented.

  3. Object Attention Patches for Text Detection and Recognition in Scene Images using SIFT

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; De Marsico, Maria; Figueiredo, Mário; Fred, Ana

    2015-01-01

    Natural urban scene images contain many problems for character recognition such as luminance noise, varying font styles or cluttered backgrounds. Detecting and recognizing text in a natural scene is a difficult problem. Several techniques have been proposed to overcome these problems. These are,

  4. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.
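
    The final categorization step could be approximated by a tertile split of the mean motivational ratings; the abstract does not state the exact rule, so the split below is a hypothetical stand-in.

```python
import numpy as np

def categorize(mean_ratings):
    """Hypothetical tertile split of per-object mean motivational ratings
    into aversive / neutral / appetitive categories."""
    r = np.asarray(mean_ratings, dtype=float)
    lo, hi = np.quantile(r, [1.0 / 3.0, 2.0 / 3.0])
    return np.where(r < lo, "aversive",
                    np.where(r > hi, "appetitive", "neutral"))
```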

  5. Motivational Objects in Natural Scenes (MONS: A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

    Full Text Available In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object”) being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  6. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  7. Eye Movement Control in Scene Viewing and Reading: Evidence from the Stimulus Onset Delay Paradigm

    Science.gov (United States)

    Luke, Steven G.; Nuthmann, Antje; Henderson, John M.

    2013-01-01

    The present study used the stimulus onset delay paradigm to investigate eye movement control in reading and in scene viewing in a within-participants design. Short onset delays (0, 25, 50, 200, and 350 ms) were chosen to simulate the type of natural processing difficulty encountered in reading and scene viewing. Fixation duration increased…

  8. Places in the Brain: Bridging Layout and Object Geometry in Scene-Selective Cortex.

    Science.gov (United States)

    Dillon, Moira R; Persichetti, Andrew S; Spelke, Elizabeth S; Dilks, Daniel D

    2017-06-13

    Diverse animal species primarily rely on sense (left-right) and egocentric distance (proximal-distal) when navigating the environment. Recent neuroimaging studies with human adults show that this information is represented in 2 scene-selective cortical regions-the occipital place area (OPA) and retrosplenial complex (RSC)-but not in a third scene-selective region-the parahippocampal place area (PPA). What geometric properties, then, does the PPA represent, and what is its role in scene processing? Here we hypothesize that the PPA represents relative length and angle, the geometric properties classically associated with object recognition, but only in the context of large extended surfaces that compose the layout of a scene. Using functional magnetic resonance imaging adaptation, we found that the PPA is indeed sensitive to relative length and angle changes in pictures of scenes, but not pictures of objects that reliably elicited responses to the same geometric changes in object-selective cortical regions. Moreover, we found that the OPA is also sensitive to such changes, while the RSC is tolerant to such changes. Thus, the geometric information typically associated with object recognition is also used during some aspects of scene processing. These findings provide evidence that scene-selective cortex differentially represents the geometric properties guiding navigation versus scene categorization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  9. Panoramic Search: The Interaction of Memory and Vision in Search through a Familiar Scene

    Science.gov (United States)

    Oliva, Aude; Wolfe, Jeremy M.; Arsenio, Helga C.

    2004-01-01

    How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or…

  10. Stage Movement with Scripts and More Work with Scenes. TAP (Theatre Arts Package) 211 and 212.

    Science.gov (United States)

    Engelsman, Alan; Thalden, Irene

    The purpose of these lessons is to provide learning experiences which facilitate junior high and senior high school actors' mastery of stage movements when working with scripts. Suggested exercises include practice in finding motivation for actors' stage movements, acting a scene (from "West Side Story"), and interpreting and acting scenes of…

  11. How do targets, nontargets, and scene context influence real-world object detection?

    NARCIS (Netherlands)

    Katti, H.; Peelen, M.V.; Arun, S.P.

    2017-01-01

    Humans excel at finding objects in complex natural scenes, but the features that guide this behaviour have proved elusive. We used computational modeling to measure the contributions of target, nontarget, and coarse scene features towards object detection in humans. In separate experiments,

  12. Making a scene: exploring the dimensions of place through Dutch popular music, 1960-2010

    NARCIS (Netherlands)

    Brandellero, A.; Pfeffer, K.

    2015-01-01

    This paper applies a multi-layered conceptualisation of place to the analysis of particular music scenes in the Netherlands, 1960-2010. We focus on: the clustering of music-related activities in locations; the delineation of spatially tied music scenes, based on a shared identity, reproduced over

  13. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    Science.gov (United States)

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  14. Developmental Changes in Attention to Faces and Bodies in Static and Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Brenda M Stoesz

    2014-03-01

    Full Text Available Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviours of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process, and are especially prone to look away from faces when viewing complex social scenes – a strategy that could reduce the cognitive and the affective load imposed by having to divide one’s attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviours in typical and atypical development.

  15. The Incongruency Advantage for Environmental Sounds Presented in Natural Auditory Scenes

    Science.gov (United States)

    Gygi, Brian; Shafiro, Valeriy

    2011-01-01

    The effect of context on the identification of common environmental sounds (e.g., dogs barking or cars honking) was tested by embedding them in familiar auditory background scenes (street ambience, restaurants). Initial results with subjects trained on both the scenes and the sounds to be identified showed a significant advantage of about five…

  16. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    Science.gov (United States)

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  17. RESEARCH AND TECHNOLOGY DEVELOPMENT FOR CONSTRUCTION OF 3D VIDEO SCENES

    Directory of Open Access Journals (Sweden)

    T. A. Khlebnikova

    2016-06-01

    Full Text Available For the last two decades, surface information in the form of conventional digital and analogue topographic maps has been supplemented by new digital geospatial products, also known as 3D models of real objects. It is shown that currently there are no defined standards for 3D scene construction technologies that could be used by Russian surveying and cartographic enterprises. The issues regarding source data requirements and their capture and transfer to create 3D scenes have not been defined yet. The accuracy issues for 3D video scenes used for measuring purposes can hardly ever be found in publications. The practicability of developing, researching and implementing a technology for the construction of 3D video scenes is substantiated by the capability of 3D video scenes to expand the field of data analysis applications for environmental monitoring, urban planning, and managerial decision problems. A technology for the construction of 3D video scenes with regard to specified metric requirements is offered. A technique and methodological background are recommended for this technology, used to construct 3D video scenes based on DTMs created from satellite and aerial survey data. The results of the accuracy estimation of 3D video scenes are presented.

  18. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    Science.gov (United States)

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  19. Geovisualization Approaches for Spatio-temporal Crime Scene Analysis - Towards 4D Crime Mapping

    Science.gov (United States)

    Wolff, Markus; Asche, Hartmut

    This paper presents a set of methods and techniques for the analysis and multidimensional visualisation of crime scenes in a German city. As a first step, the approach implies spatio-temporal analysis of crime scenes. Against this background, a GIS-based application is developed that facilitates discovering initial trends in spatio-temporal crime scene distributions, even for a GIS-untrained user. Based on these results, further spatio-temporal analysis is conducted to detect variations of certain hotspots in space and time. In a next step, these findings of crime scene analysis are integrated into a geovirtual environment, where the concept of the space-time cube is adopted to allow for visual analysis of repeat burglary victimisation. Since these procedures require incorporating temporal elements into virtual 3D environments, basic methods for 4D crime scene visualisation are outlined in this paper.

  20. The forensic holodeck: an immersive display for forensic crime scene reconstructions.

    Science.gov (United States)

    Ebert, Lars C; Nguyen, Tuan T; Breitbeck, Robert; Braun, Marcel; Thali, Michael J; Ross, Steffen

    2014-12-01

    In forensic investigations, crime scene reconstructions are created based on a variety of three-dimensional image modalities. Although the data gathered are three-dimensional, their presentation on computer screens and paper is two-dimensional, which incurs a loss of information. By applying immersive virtual reality (VR) techniques, we propose a system that allows a crime scene to be viewed as if the investigator were present at the scene. We used a low-cost VR headset originally developed for computer gaming in our system. The headset offers a large viewing volume and tracks the user's head orientation in real-time, and an optical tracker is used for positional information. In addition, we created a crime scene reconstruction to demonstrate the system. In this article, we present a low-cost system that allows immersive, three-dimensional and interactive visualization of forensic incident scene reconstructions.

  1. Task search in a human computation market

    OpenAIRE

    Chilton, Lydia B.; Miller, Robert C.; Horton, John J.; Azenkot, Shiri

    2010-01-01

    In order to understand how a labor market for human computation functions, it is important to know how workers search for tasks. This paper uses two complementary methods to gain insight into how workers search for tasks on Mechanical Turk. First, we perform a high frequency scrape of 36 pages of search results and analyze it by looking at the rate of disappearance of tasks across key ways Mechanical Turk allows workers to sort tasks. Second, we present the results of a survey in which we pai...

  2. Project Tasks in Robotics

    DEFF Research Database (Denmark)

    Sørensen, Torben; Hansen, Poul Erik

    1998-01-01

    Description of the compulsory project tasks to be carried out as part of DTU course 72238 Robotics.

  3. Transferring Pre-Trained Deep CNNs for Remote Scene Classification with General Features Learned from Linear PCA Network

    Directory of Open Access Journals (Sweden)

    Jie Wang

    2017-03-01

    Deep convolutional neural networks (CNNs) have been widely used to obtain high-level representations in various computer vision tasks. However, in the field of remote sensing, there are not sufficient images to train a useful deep CNN. Instead, we tend to transfer successful pre-trained deep CNNs to remote sensing tasks. In the transferring process, the generalization power of features in pre-trained deep CNNs plays the key role. In this paper, we propose two promising architectures to extract general features from pre-trained deep CNNs for remote scene classification. These two architectures suggest two directions for improvement. First, before the pre-trained deep CNNs, we design a linear PCA network (LPCANet) to synthesize spatial information of remote sensing images in each spectral channel. This design shortens the spatial “distance” between the target and source datasets for pre-trained deep CNNs. Second, we introduce quaternion algebra to the LPCANet, which further shortens the spectral “distance” between remote sensing images and the images used to pre-train deep CNNs. With five well-known pre-trained deep CNNs, experimental results on three independent remote sensing datasets demonstrate that our proposed framework obtains state-of-the-art results without fine-tuning and feature fusing. This paper also provides a baseline for transferring fresh pre-trained deep CNNs to other remote sensing tasks.
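
    As an illustration of the first stage of such a linear PCA network, the sketch below learns convolution filters as the leading principal components of image patches. This is a simplified, single-stage PCANet-style sketch written for this summary; the function and parameter names are hypothetical, and the actual LPCANet in the paper is more elaborate (e.g., it adds a quaternion variant across spectral channels).

    ```python
    import numpy as np

    def learn_pca_filters(images, patch=5, n_filters=4, seed=0):
        """Learn convolution filters as leading PCA components of image patches.

        Simplified single-stage sketch of a PCANet-style linear PCA network.
        """
        rng = np.random.default_rng(seed)
        patches = []
        for img in images:
            h, w = img.shape
            for _ in range(200):  # sample random patches from each image
                y = rng.integers(0, h - patch + 1)
                x = rng.integers(0, w - patch + 1)
                p = img[y:y + patch, x:x + patch].ravel()
                patches.append(p - p.mean())  # remove per-patch mean
        X = np.stack(patches)  # (n_patches, patch*patch)
        # Principal directions of the patch matrix serve as the filter bank
        _, _, vt = np.linalg.svd(X, full_matrices=False)
        return vt[:n_filters].reshape(n_filters, patch, patch)

    imgs = [np.random.default_rng(i).random((32, 32)) for i in range(4)]
    filters = learn_pca_filters(imgs)  # shape (4, 5, 5)
    ```

    Convolving an input channel with these filters yields the synthesized spatial maps that would then be fed to the pre-trained deep CNN.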

  4. Auditory Scene Analysis and sonified visual images. Does consonance negatively impact on object formation when using complex sonified stimuli?

    Directory of Open Access Journals (Sweden)

    David J Brown

    2015-10-01

    A critical task for the brain is the sensory representation and identification of perceptual objects in the world. When the visual sense is impaired, hearing and touch must take primary roles, and in recent times compensatory techniques have been developed that employ the tactile or auditory system as a substitute for the visual system. Visual-to-auditory sonifications provide a complex, feature-based auditory representation that must be decoded and integrated into an object-based representation by the listener. However, we do not yet know what role the auditory system plays in the object integration stage and whether the principles of auditory scene analysis apply. Here we used coarse sonified images in a two-tone discrimination task to test whether auditory feature-based representations of visual objects would be confounded when their features conflicted with the principles of auditory consonance. We found that listeners (N = 36) performed worse in an object recognition task when the auditory feature-based representation was harmonically consonant. We also found that this conflict was not negated with the provision of congruent audio-visual information. The findings suggest that early auditory processes of harmonic grouping dominate the object formation process, and that the complexity of the signal and additional sensory information have limited effect on this.

  5. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high-dimensional feature space (e.g., ∼6,000 dimensions in a leading model) and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and the spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models, which categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures, and statistical operations on these PDs, may underlie the human ability to rapidly categorize scenes.

  6. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    Science.gov (United States)

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which has been discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in a kitchen) and inconsistent in the other (e.g., ketchup in a bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/.

  7. Task assignment and coaching

    NARCIS (Netherlands)

    Dominguez-Martinez, S.

    2009-01-01

    An important task of a manager is to motivate her subordinates. One way in which a manager can give incentives to junior employees is through the assignment of tasks. How a manager allocates tasks in an organization provides information to the junior employees about her ability. Without coaching

  8. Influence of environmental information in natural scenes and the effects of motion adaptation on a fly motion-sensitive neuron during simulated flight

    Directory of Open Access Journals (Sweden)

    Thomas W. Ullrich

    2014-12-01

    Gaining information about the spatial layout of natural scenes is a challenging task that flies need to solve, especially when moving at high velocities. A group of motion-sensitive cells in the lobula plate of flies is supposed to represent information about self-motion as well as the environment. Relevant environmental features might be the nearness of structures, which influences retinal velocity during translational self-motion, and the brightness contrast. We recorded the responses of the H1 cell, an individually identifiable lobula plate tangential cell, during stimulation with image sequences simulating translational motion through natural scenes with a variety of differing depth structures. A correlation was found between the average nearness of environmental structures within large parts of the cell's receptive field and its response across a variety of scenes, but no correlation was found between the brightness contrast of the stimuli and the cell response. As a consequence of motion adaptation resulting from repeated translation through the environment, the time-dependent response modulations induced by the spatial structure of the environment were increased relative to the background activity of the cell. These results support the hypothesis that some lobula plate tangential cells do not only serve as sensors of self-motion, but also as part of a neural system that processes information about the spatial layout of natural scenes.

  9. Parameterized Radiation Transport Model for Neutron Detection in Complex Scenes

    Science.gov (United States)

    Lavelle, C. M.; Bisson, D.; Gilligan, J.; Fisher, B. M.; Mayo, R. M.

    2013-04-01

    There is interest in developing the ability to rapidly compute the energy-dependent neutron flux within a complex geometry for a variety of applications. Coupled with sensor response function information, this capability would allow direct estimation of sensor behavior in a multitude of operational scenarios. In situations where detailed simulation is not warranted or affordable, it is desirable to possess reliable estimates of the neutron field in practical scenarios that do not require intense computation. A tool set of this kind would provide quantitative means to support the development of operational concepts, inform asset allocation decisions, and aid exercise planning. Monte Carlo and/or deterministic methods provide a high degree of precision and fidelity, consistent with the accuracy with which the scene is rendered. However, these methods are often too computationally expensive to support the real-time evolution of a virtual operational scenario. High-fidelity neutron transport simulations are also time consuming from the standpoint of user setup and post-simulation analysis. As an alternative to full Monte Carlo modeling, we pre-compute adjoint solutions using MCNP to generate a coarse spatial and energy grid of the neutron flux over various surfaces, attempting to capture the characteristics of the neutron transport solution. We report on the results of brief verification and validation measurements which test the predictive capability of this approach over soil and asphalt concrete surfaces, and highlight the sensitivity of the simulated and experimental results to the material composition of the environment.
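
    At run time, a tool built on such pre-computed solutions reduces to interpolating a coarse flux grid. The sketch below shows bilinear interpolation over a hypothetical (distance, energy) table; the grid values are illustrative placeholders invented for this example, not output of any MCNP adjoint calculation.

    ```python
    import numpy as np

    # Hypothetical pre-computed flux table: rows indexed by source distance (m),
    # columns by neutron energy (MeV). Values are illustrative only.
    distances = np.array([1.0, 2.0, 5.0, 10.0])
    energies = np.array([0.025e-6, 0.1, 1.0, 14.0])
    flux = np.array([[9.0, 7.0, 5.0, 3.0],
                     [6.0, 4.5, 3.0, 2.0],
                     [3.0, 2.0, 1.2, 0.8],
                     [1.0, 0.7, 0.4, 0.2]])

    def flux_at(d, e):
        """Bilinear interpolation on the coarse (distance, energy) grid."""
        i = int(np.clip(np.searchsorted(distances, d) - 1, 0, len(distances) - 2))
        j = int(np.clip(np.searchsorted(energies, e) - 1, 0, len(energies) - 2))
        td = (d - distances[i]) / (distances[i + 1] - distances[i])
        te = (e - energies[j]) / (energies[j + 1] - energies[j])
        top = flux[i, j] * (1 - te) + flux[i, j + 1] * te
        bot = flux[i + 1, j] * (1 - te) + flux[i + 1, j + 1] * te
        return top * (1 - td) + bot * td
    ```

    Folding a detector response function over `flux_at` evaluated on an energy grid would then give the estimated sensor behavior at a scene location.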

  10. Sleep Promotes Lasting Changes in Selective Memory for Emotional Scenes

    Directory of Open Access Journals (Sweden)

    Jessica ePayne

    2012-11-01

    Although we know that emotional events enjoy a privileged status in our memories, we still have much to learn about how emotional memories are processed, stored, and how they change over time. Here we show a positive association between REM sleep and the selective consolidation of central, negative aspects of complex scenes. Moreover, we show that the placement of sleep is critical for this selective emotional memory benefit. When testing occurred 24hr post-encoding, subjects who slept soon after learning (24hr Sleep First group had superior memory for emotional objects compared to subjects whose sleep was delayed for 16hr post-encoding following a full day of wakefulness (24hr Wake First group. However, this increase in memory for emotional objects corresponded with a decrease in memory for the neutral backgrounds on which these objects were placed. Furthermore, memory for emotional objects in the 24hr Sleep First group was comparable to performance after just a 12hr delay containing a night of sleep, suggesting that sleep soon after learning selectively stabilizes emotional memory. These results suggest that the sleeping brain preserves in long-term memory only what is emotionally salient and perhaps most adaptive to remember.

  11. Sleep promotes lasting changes in selective memory for emotional scenes.

    Science.gov (United States)

    Payne, Jessica D; Chambers, Alexis M; Kensinger, Elizabeth A

    2012-01-01

    Although we know that emotional events enjoy a privileged status in our memories, we still have much to learn about how emotional memories are processed, stored, and how they change over time. Here we show a positive association between REM sleep and the selective consolidation of central, negative aspects of complex scenes. Moreover, we show that the placement of sleep is critical for this selective emotional memory benefit. When testing occurred 24 h post-encoding, subjects who slept soon after learning (24 h Sleep First group) had superior memory for emotional objects compared to subjects whose sleep was delayed for 16 h post-encoding following a full day of wakefulness (24 h Wake First group). However, this increase in memory for emotional objects corresponded with a decrease in memory for the neutral backgrounds on which these objects were placed. Furthermore, memory for emotional objects in the 24 h Sleep First group was comparable to performance after just a 12 h delay containing a night of sleep, suggesting that sleep soon after learning selectively stabilizes emotional memory. These results suggest that the sleeping brain preserves in long-term memory only what is emotionally salient and perhaps most adaptive to remember.

  12. Metric Evaluation Pipeline for 3d Modeling of Urban Scenes

    Science.gov (United States)

    Bosch, M.; Leichtman, A.; Chilcott, D.; Goldberg, H.; Brown, M.

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.
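
    Two of the point-cloud metrics named above, completeness and correctness, can be sketched with a brute-force nearest-neighbour comparison against lidar ground truth. This is an illustrative reading of the standard metric definitions, not code from the released pipeline; the distance tolerance `tol` is an assumed parameter.

    ```python
    import numpy as np

    def completeness_correctness(model_pts, truth_pts, tol=0.5):
        """Completeness: fraction of ground-truth points within tol of the model.
        Correctness: fraction of model points within tol of the ground truth.

        Brute-force O(N*M) nearest-neighbour distances; real pipelines use a
        spatial index (e.g., a k-d tree) for large clouds.
        """
        d_truth_to_model = np.sqrt(
            ((truth_pts[:, None, :] - model_pts[None, :, :]) ** 2).sum(-1)).min(1)
        d_model_to_truth = np.sqrt(
            ((model_pts[:, None, :] - truth_pts[None, :, :]) ** 2).sum(-1)).min(1)
        return ((d_truth_to_model <= tol).mean(),
                (d_model_to_truth <= tol).mean())

    truth = np.array([[0., 0., 0.], [1., 0., 0.], [2., 0., 0.]])
    model = np.array([[0.1, 0., 0.], [1.0, 0.1, 0.], [9., 9., 9.]])
    comp, corr = completeness_correctness(model, truth)  # 2/3 each here
    ```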

  13. Fitting boxes to Manhattan scenes using linear integer programming

    KAUST Repository

    Li, Minglei

    2016-02-19

    We propose an approach for automatic generation of building models by assembling a set of boxes using a Manhattan-world assumption. The method first aligns the point cloud with a per-building local coordinate system, and then fits axis-aligned planes to the point cloud through an iterative regularization process. The refined planes partition the space of the data into a series of compact cubic cells (candidate boxes) spanning the entire 3D space of the input data. We then choose to approximate the target building by the assembly of a subset of these candidate boxes using a binary linear programming formulation. The objective function is designed to maximize the point cloud coverage and the compactness of the final model. Finally, all selected boxes are merged into a lightweight polygonal mesh model, which is suitable for interactive visualization of large scale urban scenes. Experimental results and a comparison with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
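
    The selection step can be illustrated with a toy one-dimensional analogue of the binary linear program: candidate "boxes" become intervals, and subsets are exhaustively scored on point coverage minus a per-box compactness penalty. The interval data and penalty weight below are invented for illustration; the actual method solves the 3D formulation with an ILP solver rather than by enumeration.

    ```python
    from itertools import combinations

    # Toy stand-in for the binary linear program: pick the subset of candidate
    # intervals that maximises covered points minus a compactness penalty.
    points = [0.5, 1.5, 2.5, 3.5, 4.5]          # 1D "point cloud"
    boxes = [(0, 2), (2, 5), (0, 5), (3, 4)]    # candidate "boxes"
    penalty = 1.0                               # cost per selected box

    def score(selection):
        covered = sum(any(lo <= p <= hi for lo, hi in selection)
                      for p in points)
        return covered - penalty * len(selection)

    best = max(
        (subset for r in range(len(boxes) + 1)
         for subset in combinations(boxes, r)),
        key=score)
    # The single interval (0, 5) covers all five points at minimal cost.
    ```

    The exhaustive search makes the objective transparent; with thousands of candidate cells the same objective is handed to a binary linear programming solver.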

  14. Intrinsic Scene Decomposition from RGB-D Images

    KAUST Repository

    Hachama, Mohammed

    2015-12-07

    In this paper, we address the problem of computing an intrinsic decomposition of the colors of a surface into an albedo and a shading term. The surface is reconstructed from single or multiple RGB-D images of a static scene obtained from different views. We thereby extend and improve existing works in the area of intrinsic image decomposition. In a variational framework, we formulate the problem as a minimization of an energy composed of two terms: a data term and a regularity term. The first term is related to the image formation process and expresses the relation between the albedo, the surface normals, and the incident illumination. We use an affine shading model: a combination of a Lambertian model and an ambient lighting term. This model is relevant for Lambertian surfaces. When available, multiple views can be used to handle view-dependent non-Lambertian reflections. The second term contains an efficient combination of ℓ2- and ℓ1-regularizers on the illumination vector field and the albedo, respectively. Unlike most previous approaches, especially Retinex-like techniques, these terms do not depend on the image gradient or texture, thus reducing the mixing of shading/reflectance artifacts and leading to better results. The obtained non-linear optimization problem is efficiently solved using a cyclic block coordinate descent algorithm. Our method outperforms a range of state-of-the-art algorithms on a popular benchmark dataset.
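
    The affine shading model at the heart of the data term can be demonstrated on synthetic data: with normals, light direction and ambient term assumed known (a strong simplification of the paper's joint variational optimization, invented for this sketch), the albedo follows by division.

    ```python
    import numpy as np

    # Affine shading model: intensity = albedo * (n·l + ambient).
    rng = np.random.default_rng(0)
    normals = rng.normal(size=(100, 3))
    normals[:, 2] = np.abs(normals[:, 2])          # keep normals facing the light
    normals /= np.linalg.norm(normals, axis=1, keepdims=True)

    light = np.array([0.0, 0.0, 1.0])              # directional illumination
    ambient = 0.3                                  # ambient lighting term
    true_albedo = rng.uniform(0.2, 0.9, size=100)

    shading = normals @ light + ambient            # affine shading (> 0 here)
    intensity = true_albedo * shading              # image formation
    recovered_albedo = intensity / shading         # intrinsic decomposition
    ```

    In the paper neither the illumination vector field nor the albedo is known; both are unknowns of the energy, which is why regularizers and block coordinate descent are needed.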

  15. Napping and the Selective Consolidation of Negative Aspects of Scenes

    Science.gov (United States)

    Payne, Jessica D.; Kensinger, Elizabeth A.; Wamsley, Erin; Spreng, R. Nathan; Alger, Sara; Gibler, Kyle; Schacter, Daniel L.; Stickgold, Robert

    2018-01-01

    After information is encoded into memory, it undergoes an offline period of consolidation that occurs optimally during sleep. The consolidation process not only solidifies memories, but also selectively preserves aspects of experience that are emotionally salient and relevant for future use. Here, we provide evidence that an afternoon nap is sufficient to trigger preferential memory for emotional information contained in complex scenes. Selective memory for negative emotional information was enhanced after a nap compared to wakefulness in two control conditions designed to carefully address interference and time-of-day confounds. Although prior evidence has connected negative emotional memory formation to rapid eye movement (REM) sleep physiology, we found that non-REM delta activity and the amount of slow wave sleep (SWS) in the nap were robustly related to the selective consolidation of negative information. These findings suggest that the mechanisms underlying memory consolidation benefits associated with napping and nighttime sleep are not always the same. Finally, we provide preliminary evidence that the magnitude of the emotional memory benefit conferred by sleep is equivalent following a nap and a full night of sleep, suggesting that selective emotional remembering can be economically achieved by taking a nap. PMID:25706830

  16. METRIC EVALUATION PIPELINE FOR 3D MODELING OF URBAN SCENES

    Directory of Open Access Journals (Sweden)

    M. Bosch

    2017-05-01

    Publicly available benchmark data and metric evaluation approaches have been instrumental in enabling research to advance state-of-the-art methods for remote sensing applications in urban 3D modeling. Most publicly available benchmark datasets have consisted of high-resolution airborne imagery and lidar suitable for 3D modeling on a relatively modest scale. To enable research in larger scale 3D mapping, we have recently released a public benchmark dataset with multi-view commercial satellite imagery and metrics to compare 3D point clouds with lidar ground truth. We now define a more complete metric evaluation pipeline, developed as publicly available open source software, to assess semantically labeled 3D models of complex urban scenes derived from multi-view commercial satellite imagery. Evaluation metrics in our pipeline include horizontal and vertical accuracy and completeness, volumetric completeness and correctness, perceptual quality, and model simplicity. Sources of ground truth include airborne lidar and overhead imagery, and we demonstrate a semi-automated process for producing accurate ground truth shape files to characterize building footprints. We validate our current metric evaluation pipeline using 3D models produced using open source multi-view stereo methods. Data and software are made publicly available to enable further research and planned benchmarking activities.

  17. RONI-based steganographic method for 3D scene

    Science.gov (United States)

    Li, Xiao-Wei; Wang, Qiong-Hua

    2017-06-01

    Image steganography is a form of data hiding that provides data security in digital images. The aim is to embed and deliver secret data in digital images without arousing suspicion. However, most of the existing optical image hiding methods ignore the visual quality of the stego-image when improving the robustness of the secret image. To address this issue, in this paper we present a Region of Non-Interest (RONI) steganographic algorithm to enhance the visual quality of the stego-image. In the proposed method, the carrier image is segmented into a Region of Interest (ROI) and a RONI. To enhance the visual quality, the 3D image information is embedded into the RONI of the digital images. In order to find appropriate regions for embedding, we use a visual attention model as a means of measuring the ROI of the digital images. The algorithm employs the computational integral imaging (CII) technique to hide the 3D scene in the carrier image. Comparison results show that the proposed technique performs better than some existing state-of-the-art techniques.
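
    The region-constrained embedding idea can be sketched with a simple least-significant-bit scheme: given a binary RONI mask, secret bits are written only into non-interest pixels, leaving the ROI untouched. This is a minimal stand-in for the paper's method, which hides computational-integral-imaging data of a full 3D scene; the mask here is hand-made rather than produced by a visual attention model.

    ```python
    import numpy as np

    def embed_bits(carrier, roni_mask, bits):
        """Hide a bit list in the least-significant bits of RONI pixels."""
        stego = carrier.copy()
        idx = np.flatnonzero(roni_mask)       # flat indices of RONI pixels
        if len(bits) > idx.size:
            raise ValueError("secret too large for the RONI")
        flat = stego.ravel()                  # view into stego
        for k, bit in enumerate(bits):
            flat[idx[k]] = (flat[idx[k]] & 0xFE) | bit
        return stego

    def extract_bits(stego, roni_mask, n):
        idx = np.flatnonzero(roni_mask)
        return [int(stego.ravel()[i] & 1) for i in idx[:n]]

    carrier = np.arange(16, dtype=np.uint8).reshape(4, 4)
    mask = np.zeros((4, 4), dtype=bool)
    mask[2:, :] = True                        # bottom half = non-interest region
    secret = [1, 0, 1, 1, 0, 0, 1, 0]
    stego = embed_bits(carrier, mask, secret)
    ```

    Because only RONI pixels are modified, the perceptually salient ROI of the stego-image is bit-identical to the carrier.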

  18. Depth estimation of complex geometry scenes from light fields

    Science.gov (United States)

    Si, Lipeng; Wang, Qing

    2018-01-01

    The surface camera (SCam) of a light field gathers the angular sample rays passing through a 3D point. The consistency of SCams is evaluated to estimate the depth map of the scene, but this consistency is affected by several limitations such as occlusions or non-Lambertian surfaces. To overcome these limitations, the SCam is partitioned into two segments such that one of them can satisfy the consistency constraint. The segmentation pattern of the SCam is highly related to the texture of the spatial patch, so we enforce mask matching to describe the shape correlation between segments of the SCam and the spatial patch. To further address the ambiguity in textureless regions, a global method with pixel-wise plane labels is presented. Plane label inference at each pixel can recover not only the depth value but also the local geometry structure, which is suitable for light fields with sub-pixel disparities and continuous view variation. Our method is evaluated on public light field datasets and outperforms the state-of-the-art.
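
    The consistency-with-partitioning idea can be sketched as follows: a SCam's cost is low if either all angular samples agree or one of two segments does. For simplicity this sketch splits sorted intensities at every position, whereas the paper matches the partition shape against the spatial patch via mask matching.

    ```python
    import numpy as np

    def scam_cost(samples):
        """Consistency cost of a surface camera's angular intensity samples.

        Low when all samples agree (unoccluded Lambertian point) or when,
        near an occlusion boundary, one of two segments agrees.
        """
        samples = np.sort(np.asarray(samples, dtype=float))
        whole = samples.var()
        # try every split of the sorted samples into two non-empty segments
        split = min(max(samples[:k].var(), samples[k:].var())
                    for k in range(1, len(samples)))
        return min(whole, split)
    ```

    A depth hypothesis that focuses the SCam correctly drives this cost toward zero even when roughly half the samples come from an occluder.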

  19. The adaptation of a 360° camera utilising an alternate light source (ALS) for the detection of biological fluids at crime scenes.

    Science.gov (United States)

    Sheppard, Kayleigh; Cassella, John P; Fieldhouse, Sarah; King, Roberto

    2017-07-01

    One of the most important and commonly encountered types of evidence that can be recovered at crime scenes is biological fluids. Due to the ephemeral nature of biological fluids and the valuable DNA that they can contain, it is fundamental that these are documented extensively and recovered rapidly. Locating and identifying biological fluids can prove a challenging task but can aid in reconstructing a sequence of events. Alternate light sources (ALSs) offer powerful non-invasive methods for locating and enhancing biological fluids utilising different wavelengths of light. Current methods for locating biological fluids using ALSs may be time-consuming, as they often require close-range searching of potentially large crime scenes. Subsequent documentation using digital cameras and alternate light sources can increase the investigation time, and due to the cameras' low dynamic range, photographs can appear under- or over-exposed. This study presents a technique which allows the simultaneous detection and visualisation of semen and saliva utilising a SceneCam 360° camera (Spheron VR AG), which was adapted to integrate a blue Crime Lite XL (Foster+Freeman). This technique was investigated using different volumes of semen and saliva, on porous and non-porous substrates, and the ability to detect these at incremental distances from the substrate. Substrate type and colour had a significant effect on the detection of the biological fluid, with limited fluid detection on darker substrates. The unique real-time High Dynamic Range (HDR) ability of the SceneCam significantly enhanced the detection of biological fluids where background fluorescence masked target fluorescence. These preliminary results are presented as a proof of concept for combining 360° photography using HDR and an ALS for the detection of biological stains, within a scene, in real time, whilst conveying the spatial relationships of staining to other evidence. This technique presents the opportunity to

  20. Hyperspectral target detection analysis of a cluttered scene from a virtual airborne sensor platform using MuSES

    Science.gov (United States)

    Packard, Corey D.; Viola, Timothy S.; Klein, Mark D.

    2017-10-01

    The ability to predict spectral electro-optical (EO) signatures for various targets against realistic, cluttered backgrounds is paramount for rigorous signature evaluation. Knowledge of background and target signatures, including plumes, is essential for a variety of scientific and defense-related applications including contrast analysis, camouflage development, automatic target recognition (ATR) algorithm development and scene material classification. The capability to simulate any desired mission scenario with forecast or historical weather is a tremendous asset for defense agencies, serving as a complement to (or substitute for) target and background signature measurement campaigns. In this paper, a systematic process for the physical temperature and visible-through-infrared radiance prediction of several diverse targets in a cluttered natural environment scene is presented. The ability of a virtual airborne sensor platform to detect and differentiate targets from a cluttered background, from a variety of sensor perspectives and across numerous wavelengths in differing atmospheric conditions, is considered. The process described utilizes the thermal and radiance simulation software MuSES and provides a repeatable, accurate approach for analyzing wavelength-dependent background and target (including plume) signatures in multiple band-integrated wavebands (multispectral) or hyperspectrally. The engineering workflow required to combine 3D geometric descriptions, thermal material properties, natural weather boundary conditions, all modes of heat transfer and spectral surface properties is summarized. This procedure includes geometric scene creation, material and optical property attribution, and transient physical temperature prediction. Radiance renderings, based on ray-tracing and the Sandford-Robertson BRDF model, are coupled with MODTRAN for the inclusion of atmospheric effects. This virtual hyperspectral/multispectral radiance prediction methodology has been

  1. Skidmore Clips of Neutral and Expressive Scenarios (SCENES): Novel dynamic stimuli for social cognition research.

    Science.gov (United States)

    Schofield, Casey A; Weeks, Justin W; Taylor, Lea; Karnedy, Colten

    2015-12-30

    Social cognition research has relied primarily on photographic emotional stimuli. Such stimuli likely have limited ecological validity in terms of representing real world social interactions. The current study presents evidence for the validity of a new stimuli set of dynamic social SCENES (Skidmore Clips of Emotional and Neutral Expressive Scenarios). To develop these stimuli, ten undergraduate theater students were recruited to portray members of an audience. This audience was configured to display (seven) varying configurations of social feedback, ranging from unequivocally approving to unequivocally disapproving (including three different versions of balanced/neutral scenes). Validity data were obtained from 383 adult participants recruited from Amazon's Mechanical Turk. Each participant viewed three randomly assigned scenes and provided a rating of the perceived criticalness of each scene. Results indicate that the SCENES reflect the intended range of emotionality, and pairwise comparisons suggest that the SCENES capture distinct levels of critical feedback. Overall, the SCENES stimuli set represents a publicly available (www.scenesstimuli.com) resource for researchers interested in measuring social cognition in the presence of dynamic and naturalistic social stimuli. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  2. Multichannel analyzer development in CAMAC

    International Nuclear Information System (INIS)

    Nagy, J.Z.; Zarandy, A.

    1988-01-01

    For data acquisition in TOKAMAK experiments, several CAMAC modules have been developed. The modules are the following: a 64 K analyzer memory, a 32 K analyzer memory, and a 6-channel pulse peak analyzer memory which contains the 32 K analyzer memory and eight AD converters

  3. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physical space.

  4. Do acting out verbs with dolls and comparison learning between scenes boost toddlers' verb comprehension?

    Science.gov (United States)

    Schwarz, Amy Louise; VAN Kleeck, Anne; Maguire, Mandy J; Abdi, Hervé

    2017-05-01

    To better understand how toddlers integrate multiple learning strategies to acquire verbs, we compared sensorimotor recruitment and comparison learning because both strategies are thought to boost children's access to scene-level information. For sensorimotor recruitment, we tested having toddlers use dolls as agents and compared this strategy with having toddlers observe another person enact verbs with dolls. For comparison learning, we compared providing pairs of: (a) training scenes in which animate objects with similar body-shapes maintained agent/patient roles with (b) scenes in which objects with dissimilar body-shapes switched agent/patient roles. Only comparison learning boosted verb comprehension.

  5. Rational-operator-based depth-from-defocus approach to scene reconstruction.

    Science.gov (United States)

    Li, Ang; Staunton, Richard; Tjahjadi, Tardi

    2013-09-01

    This paper presents a rational-operator-based approach to depth from defocus (DfD) for the reconstruction of three-dimensional scenes from two-dimensional images, which enables fast DfD computation that is independent of scene textures. Two variants of the approach, one using the Gaussian rational operators (ROs) that are based on the Gaussian point spread function (PSF) and the second based on the generalized Gaussian PSF, are considered. A novel DfD correction method is also presented to further improve the performance of the approach. Experimental results are considered for real scenes and show that both approaches outperform existing RO-based methods.
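
The thin-lens geometry underlying depth from defocus can be illustrated with a toy numerical inversion. The sketch below is not the rational-operator method of the paper; it simply predicts the blur-circle diameter from a hypothetical camera model and recovers depth from a pair of blur measurements taken at two focus settings by brute-force search (all parameter values are illustrative):

```python
def blur_diameter(u, s, f, aperture):
    """Thin-lens blur-circle diameter for an object at distance u when the lens
    (focal length f, aperture diameter `aperture`) is focused at distance s."""
    v_u = f * u / (u - f)    # image distance of the object
    v_s = f * s / (s - f)    # sensor plane position (in focus for distance s)
    return aperture * abs(v_s - v_u) / v_u

def depth_from_defocus(b1, b2, s1, s2, f, aperture,
                       u_min=0.2, u_max=20.0, steps=20000):
    """Recover object distance from two blur measurements taken at focus
    settings s1 and s2, by scanning candidate depths (a toy inversion)."""
    best_u, best_err = u_min, float("inf")
    for i in range(steps):
        u = u_min + (u_max - u_min) * i / (steps - 1)
        err = ((blur_diameter(u, s1, f, aperture) - b1) ** 2
               + (blur_diameter(u, s2, f, aperture) - b2) ** 2)
        if err < best_err:
            best_u, best_err = u, err
    return best_u
```

Using two focus settings makes the inversion unambiguous, since a single blur value is consistent with one depth in front of the focus plane and one behind it.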

  6. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Full Text Available Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between adjacent layers are treated as sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the numbers of regions and holes differ between the scenes.

  7. Pair Negotiation When Developing English Speaking Tasks

    Science.gov (United States)

    Bohórquez Suárez, Ingrid Liliana; Gómez Sará, Mary Mily; Medina Mosquera, Sindy Lorena

    2011-01-01

    This study analyzes what characterizes the negotiations of seventh graders at a public school in Bogotá when working in pairs to develop speaking tasks in EFL classes. The inquiry is a descriptive case study that follows the qualitative paradigm. As a result of analyzing the data, we obtained four consecutive steps that characterize students'…

  8. Automated Coarse Registration of Point Clouds in 3d Urban Scenes Using Voxel Based Plane Constraint

    Science.gov (United States)

    Xu, Y.; Boerner, R.; Yao, W.; Hoegner, L.; Stilla, U.

    2017-09-01

    To obtain full coverage of 3D scans in a large-scale urban area, registration between point clouds acquired via terrestrial laser scanning (TLS) is normally mandatory. However, due to the complex urban environment, the automatic registration of different scans remains a challenging problem. In this work, we propose an automatic, marker-free method for fast, coarse registration between point clouds using the geometric constraints of planar patches under a voxel structure. Our proposed method consists of four major steps: voxelization of the point cloud, approximation of planar patches, matching of corresponding patches, and estimation of the transformation parameters. In the voxelization step, the point cloud of each scan is organized in a 3D voxel structure, by which the entire point cloud is partitioned into small individual patches. In the following step, we represent the points of each voxel with an approximated plane function and select those patches resembling planar surfaces. Afterwards, a RANSAC-based strategy is applied to match corresponding patches. Among all the planar patches of a scan, we randomly select a set of three planar patches and build a coordinate frame from their normal vectors and intersection points. The transformation parameters between scans are calculated from these two coordinate frames. The candidate set whose transformation parameters yield the largest number of coplanar patches is identified as the optimal set for estimating the correct transformation parameters. Experimental results on TLS datasets of different scenes show that our proposed method is both effective and efficient for the coarse registration task. In particular, for fast orientation between scans, it achieves a registration error of around 2 degrees or less on the testing datasets and is much more efficient than the classical baseline methods.
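
The first two steps of the pipeline (voxelization and planar-patch approximation) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the voxel size, point-count minimum, and residual threshold are assumptions:

```python
import numpy as np

def voxelize(points, voxel_size=0.5):
    """Partition a point cloud (N,3) into voxels keyed by integer grid index."""
    keys = np.floor(points / voxel_size).astype(int)
    voxels = {}
    for key, p in zip(map(tuple, keys), points):
        voxels.setdefault(key, []).append(p)
    return {k: np.array(v) for k, v in voxels.items()}

def fit_plane(pts):
    """Least-squares plane via SVD; returns (unit normal, centroid, rms residual)."""
    centroid = pts.mean(axis=0)
    _, s, vt = np.linalg.svd(pts - centroid)
    normal = vt[-1]                      # direction of least variance
    rms = s[-1] / np.sqrt(len(pts))      # residual spread along the normal
    return normal, centroid, rms

def planar_patches(voxels, min_pts=10, max_rms=0.02):
    """Keep only voxels whose points resemble a planar surface."""
    patches = []
    for pts in voxels.values():
        if len(pts) < min_pts:
            continue
        normal, centroid, rms = fit_plane(pts)
        if rms <= max_rms:
            patches.append((normal, centroid))
    return patches
```

The subsequent RANSAC step would then repeatedly draw three such patches per scan, intersect their planes to define a coordinate frame, and score the implied transformation by how many patches it brings into coplanar agreement.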

  9. Effects of Scene Properties and Emotional Valence on Brain Activations: A Fixation-Related fMRI Study

    Directory of Open Access Journals (Sweden)

    Michał Kuniecki

    2017-08-01

    Full Text Available Temporal and spatial characteristics of fixations are affected by image properties, including high-level scene characteristics, such as object-background composition, and low-level physical characteristics, such as image clarity. The influence of these factors is modulated by the emotional content of an image. Here, we aimed to establish whether brain correlates of fixations reflect these modulatory effects. To this end, we simultaneously scanned participants and measured their eye movements, while presenting negative and neutral images in various image clarity conditions, with controlled object-background composition. The fMRI data were analyzed using a novel fixation-based event-related (FIBER) method, which allows the tracking of brain activity linked to individual fixations. The results revealed that fixating an emotional object was linked to greater deactivation in the right lingual gyrus than fixating the background of an emotional image, while no difference between object and background was found for neutral images. We suggest that deactivation in the lingual gyrus might be linked to inhibition of saccade execution. This was supported by fixation duration results, which showed that in the negative condition, fixations falling on the object were longer than those falling on the background. Furthermore, increase in the image clarity was correlated with fixation-related activity within the lateral occipital complex, the structure linked to object recognition. This correlation was significantly stronger for negative images, presumably due to greater deployment of attention towards emotional objects. Our eye-tracking results are in line with these observations, showing that the chance of fixating an object rose faster for negative images over neutral ones as the level of noise decreased. Overall, our study demonstrated that emotional value of an image changes the way that low and high-level scene properties affect the characteristics of

  10. Transport Task Force Leadership, Task 4

    International Nuclear Information System (INIS)

    Callen, J.D.

    1991-07-01

    The Transport Task Force (TTF) was initiated as a broad-based US magnetic fusion community activity during the fall of 1988 to focus attention on and encourage development of an increased understanding of anomalous transport in tokamaks. The overall TTF goal is to make progress on Characterizing, Understanding and Identifying how to Reduce plasma transport in tokamaks -- to CUIR transport

  11. Cardiorespiratory concerns shape brain responses during automatic panic-related scene processing in patients with panic disorder

    Science.gov (United States)

    Feldker, Katharina; Heitmann, Carina Yvonne; Neumeister, Paula; Brinkmann, Leonie; Bruchmann, Maximillan; Zwitserlood, Pienie; Straube, Thomas

    2018-01-01

    Background Increased automatic processing of threat-related stimuli has been proposed as a key element in panic disorder. Little is known about the neural basis of automatic processing, in particular to task-irrelevant, panic-related, ecologically valid stimuli, or about the association between brain activation and symptomatology in patients with panic disorder. Methods The present event-related functional MRI (fMRI) study compared brain responses to task-irrelevant, panic-related and neutral visual stimuli in medication-free patients with panic disorder and healthy controls. Panic-related and neutral scenes were presented while participants performed a spatially non-overlapping bar orientation task. Correlation analyses investigated the association between brain responses and panic-related aspects of symptomatology, measured using the Anxiety Sensitivity Index (ASI). Results We included 26 patients with panic disorder and 26 healthy controls in our analysis. Compared with controls, patients with panic disorder showed elevated activation in the amygdala, brainstem, thalamus, insula, anterior cingulate cortex and midcingulate cortex in response to panic-related versus neutral task-irrelevant stimuli. Furthermore, fear of cardiovascular symptoms (a subcomponent of the ASI) was associated with insula activation, whereas fear of respiratory symptoms was associated with brainstem hyperactivation in patients with panic disorder. Limitations The additional implementation of measures of autonomic activation, such as pupil diameter, heart rate, or electrodermal activity, would have been informative during the fMRI scan as well as during the rating procedure. Conclusion Results reveal a neural network involved in the processing of panic-related distractor stimuli in patients with panic disorder and suggest an automatic weighting of panic-related information depending on the magnitude of cardiovascular and respiratory symptoms. Insula and brainstem activations show function-related associations with specific components of

  13. Influence of scene structure and content on visual search strategies.

    Science.gov (United States)

    Amor, Tatiana A; Luković, Mirko; Herrmann, Hans J; Andrade, José S

    2017-07-01

    When searching for a target within an image, our brain can adopt different strategies, but which one does it choose? This question can be answered by tracking the motion of the eye while it executes the task. Following many individuals performing various search tasks, we distinguish between two competing strategies. Motivated by these findings, we introduce a model that captures the interplay of the search strategies and allows us to create artificial eye-tracking trajectories, which could be compared with the experimental ones. Identifying the model parameters allows us to quantify the strategy employed in terms of ensemble averages, characterizing each experimental cohort. In this way, we can discern with high sensitivity the relation between the visual landscape and the average strategy, disclosing how small variations in the image induce changes in the strategy. © 2017 The Author(s).

  14. Behind the scenes of GS: the impact of IMPACT

    CERN Multimedia

    Katarina Anthony

    2014-01-01

    Carrying out a job at CERN can be a complicated task, with coordinators reaching across departments to manage personnel, ensure safety and minimise the impact of their activities on the rest of the Laboratory.  To help coordinators with this tough task, the GS Department developed IMPACT, the platform that, since 2011, has unified CERN's major experiment, accelerator and injector coordination tools.   When planning interventions both large and small, IMPACT (the Intervention Management Planning and Coordination Tool) is the go-to gizmo on every CERN coordinator's tool belt. "IMPACT is a central repository of activity requests that standardises the way work is declared at CERN," says Benoit Daudin, GS-AIS-PM Section Leader. "If you need to intervene in any of CERN's major facilities, you need to declare this work on IMPACT. The tool will analyse the job and see whose approval is required. This could simply b...

  15. Structured prediction for urban scene semantic segmentation with geographic context

    OpenAIRE

    Volpi, M.; Ferrari, V.

    2015-01-01

    In this work we address the problem of semantic segmentation of urban remote sensing images into land cover maps. We propose to tackle this task by learning the geographic context of classes and use it to favor or discourage certain spatial configuration of label assignments. For this reason, we learn from training data two spatial priors enforcing different key aspects of the geographical space: local co-occurrence and relative location of land cover classes. We propose to embed these geogra...

  16. "Biennale en scene" focused on the different possibilities of the voice / Diana Kiivit

    Index Scriptorium Estoniae

    Kiivit, Diana

    2006-01-01

    On the festival "Biennale en scene", held in Lyon on 7-26 March, and the three operas staged there: G. Aperghis's "Entre chien et loup", C. Ambrosini's "Il canto della pelle", and P. Dusapin's "Faustus, la derniere nuit"

  17. Adaptive Rate Control Algorithm for H.264/AVC Considering Scene Change

    Directory of Open Access Journals (Sweden)

    Xiao Chen

    2013-01-01

    Full Text Available Scene change in H.264 video sequences has significant impact on the video communication quality. This paper presents a novel adaptive rate control algorithm with little additional calculation for H.264/AVC based on the scene change expression. According to the frame complexity quotiety, we define a scene change factor, which is used to allocate bits for each frame adaptively. Experimental results show that it can handle scene changes effectively. Our algorithm, in comparison to the JVT-G012 algorithm, reduces rate error and improves the average peak signal-to-noise ratio with smaller deviation. It can not only control the bit rate accurately, but also achieve better video quality with lower encoder buffer fullness, improving the quality of service.
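
As a rough illustration of the idea (the paper does not spell out its formulas here, so the factor definition and allocation rule below are hypothetical), a scene-change factor derived from frame complexity can scale the per-frame bit budget:

```python
def scene_change_factor(complexity, prev_complexity, eps=1e-6):
    """Hypothetical scene-change factor: ratio of the current frame's
    complexity to the previous frame's. A value well above 1 signals a
    scene change that needs extra bits."""
    return complexity / max(prev_complexity, eps)

def allocate_bits(remaining_bits, remaining_frames, factor):
    """Scale the average per-frame budget by the scene-change factor, so a
    frame that starts a new scene receives more bits than a steady frame."""
    avg = remaining_bits / max(remaining_frames, 1)
    return avg * factor
```

In a real rate controller the factor would also feed back into the quantization-parameter decision and be clamped so one scene change cannot drain the buffer.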

  18. Iranian audience poll on smoking scenes in Persian movies in 2011

    Directory of Open Access Journals (Sweden)

    Gholamreza Heydari

    2014-01-01

    Conclusions: Despite the prohibition of cigarette advertisements in the mass media and movies, we still witness scenes depicting smoking by the good or bad characters of the movies so more observation in this field is needed.

  19. Smartphone and Tablet Applications for Crime Scene Investigation: State of the Art, Typology, and Assessment Criteria.

    Science.gov (United States)

    Baechler, Simon; Gélinas, Anthony; Tremblay, Rémy; Lu, Karely; Crispino, Frank

    2017-07-01

    The use of applications on mobile devices is gradually becoming a new norm in everyday life, and crime scene investigation is unlikely to escape this reality. The article assesses the current state of research and practices by means of literature reviews, semistructured interviews, and a survey conducted among crime scene investigators from Canada and Switzerland. Attempts at finding a particular strategy to guide the development, usage, and evaluation of applications that can assist crime scene investigation prove to be rather challenging. Therefore, the article proposes a typology for these applications, as well as criteria for evaluating their relevance, reliability, and answer to operational requirements. The study of five applications illustrates the evaluation process. Far away from the revolution announced by some stakeholders, it is required to pursue scientific and pragmatic research to set the theoretical foundations that will allow a significant contribution of applications to crime scene investigation. © 2017 American Academy of Forensic Sciences.

  20. Content Validity of scenes of the Declarative Tactical Knowledge Test in Volleyball – DTKT:Vb

    Directory of Open Access Journals (Sweden)

    Gustavo De Conti Teixeira Costa

    2016-02-01

    Full Text Available DOI: http://dx.doi.org/10.5007/1980-0037.2016v18n6p629   Declarative Tactical Knowledge Tests are presented as important evaluation tools for the regulation of the teaching-learning-training process. This study aimed to establish the content validity of scenes of the Declarative Tactical Knowledge Test in Volleyball - DTKT:Vb. Five male coaches of the Brazilian Volleyball team who worked with male athletes participated as judges, being responsible for training formation categories up to 21 years, experts in the sport, with a minimum of ten years of experience. The judges evaluated 212 scenes containing extremity attack (n=55), central attack (n=33), setting (n=68), and block (n=60) situations, and used a 1-5 point Likert scale to assign a score to each scene according to the requisites of image clarity, practical relevance, and item representativity. The Content Validity Coefficient (CVC) was used to determine the CVC for each scene and for the instrument as a whole, with a cutoff point of 0.80. The results demonstrated that the "image clarity" (CVC=0.92), "practical relevance" (CVC=0.96) and "item representativity" (CVC=0.96) criteria showed satisfactory levels. After calculating the CVC, the ecological validity of the scenes was determined, which consists of selecting the scenes where the decisions made by the judges converged with the decisions made by the athletes. Thus, from the 212 scenes initially prepared, 66 were validated. The scenes validated using the CVC enabled the evaluation of Declarative Tactical Knowledge, assisting in the planning of teaching-learning-training processes for male volleyball athletes.
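
A minimal sketch of a CVC computation for one scene, assuming the commonly used definition (mean judge rating normalized by the scale maximum, minus a chance-error correction); the abstract does not give the exact variant used, so this is illustrative only:

```python
def cvc(ratings, v_max=5):
    """Content Validity Coefficient for one item.
    ratings: the scores (1..v_max) assigned by the J judges.
    Returns the CVC corrected for chance agreement among J judges."""
    j = len(ratings)
    cvc_i = (sum(ratings) / j) / v_max   # mean rating normalized by scale maximum
    pe = (1.0 / j) ** j                  # chance-error correction term
    return cvc_i - pe
```

With five judges, a scene rated 5 by everyone scores just under 1.0, while unanimous ratings of 4 fall just below the 0.80 cutoff used in the study.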

  1. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    Science.gov (United States)

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  2. A sport scene images segmentation method based on edge detection algorithm

    Science.gov (United States)

    Chen, Biqing

    2011-12-01

    This paper proposes a simple, fast sports scene image segmentation method. Much prior work has sought ways to reduce the effect of varying shading in smooth areas. A novel preprocessing method is proposed to eliminate these shading variations. An internal filling mechanism is used to relabel the pixels enclosed by regions of interest as interest pixels. Tests on sports scene images have confirmed the effectiveness of the method.
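
The internal filling mechanism can be sketched as a border-connected flood fill on a binary mask: any background pixel that cannot be reached from the image border is enclosed by the region of interest, so it is relabeled as an interest pixel. This is an illustrative reconstruction, not the paper's code:

```python
import numpy as np
from collections import deque

def fill_internal(mask):
    """Relabel background pixels enclosed by the region of interest.
    mask: 2D bool array, True = interest pixel. Flood-fills the background
    from the image border; background pixels the fill never reaches are
    enclosed and therefore become interest pixels."""
    h, w = mask.shape
    reachable = np.zeros_like(mask, dtype=bool)
    # Seed the fill with every background pixel on the border.
    queue = deque((r, c) for r in range(h) for c in range(w)
                  if (r in (0, h - 1) or c in (0, w - 1)) and not mask[r, c])
    for r, c in queue:
        reachable[r, c] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if 0 <= rr < h and 0 <= cc < w and not mask[rr, cc] and not reachable[rr, cc]:
                reachable[rr, cc] = True
                queue.append((rr, cc))
    return mask | ~reachable   # interest pixels plus enclosed background
```

For example, a ring-shaped player silhouette with a small interior gap comes back as a solid region, while the open background around it is untouched.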

  3. Real-time, Adaptive Plane Sweeping for Free Viewpoint Navigation in Soccer Scenes

    OpenAIRE

    Goorts, Patrik

    2014-01-01

    In this dissertation, we present a system to generate a novel viewpoint using a virtual camera, specifically for soccer scenes. We demonstrate the applicability for following players, freezing the scene, generating 3D images, et cetera. The method is demonstrated and investigated for two camera arrangements, i.e., a curved and a linear setup, where the distance between the cameras can be up to 10 meters. The virtual camera should be located at a position between the real camera positions. ...

  4. Virtual Relighting of a Virtualized Scene by Estimating Surface Reflectance Properties

    OpenAIRE

    福富, 弘敦; 町田, 貴史; 横矢, 直和

    2011-01-01

    In mixed reality that merges real and virtual worlds, it is required to interactively manipulate the illumination conditions in a virtualized space. In general, specular reflections in a scene make it difficult to interactively manipulate the illumination conditions. Our goal is to provide an opportunity to simulate the original scene, including diffuse and specular reflections, with novel viewpoints and illumination conditions. Thus, we propose a new method for estimating diffuse and specular...

  5. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets, such as the Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse resolution CNNs and fine resolution CNNs, which are complementary to each other. Second, we design two knowledge guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. Then the super categories or soft labels are employed to guide CNN training on the Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method takes part in two major scene recognition challenges, and achieves the second place at the Places2 challenge in ILSVRC 2015, and the first place at the LSUN challenge in CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks, and obtain the new state-of-the-art results on the MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
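
The confusion-matrix-based merging of ambiguous classes into super categories can be sketched as follows. This is an illustrative union-find grouping; the threshold value and the exact merging rule used in the paper are assumptions:

```python
import numpy as np

def merge_ambiguous(confusion, threshold=0.3):
    """Group classes whose mutual confusion on validation data exceeds a
    threshold into super categories.
    confusion: (n, n) count matrix, rows = true class, cols = predicted.
    Returns a list mapping each class index to a super-category id."""
    n = confusion.shape[0]
    rates = confusion / confusion.sum(axis=1, keepdims=True)  # row-normalized
    parent = list(range(n))                                   # union-find forest

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for i in range(n):
        for j in range(i + 1, n):
            # Merge when the two classes are frequently mistaken for each other.
            if rates[i, j] + rates[j, i] > threshold:
                parent[find(i)] = find(j)

    roots = sorted({find(i) for i in range(n)})
    return [roots.index(find(i)) for i in range(n)]
```

Training then targets the super-category labels, deferring the fine distinction between merged classes to a later stage or a soft-label scheme.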

  6. Task leaders reports

    International Nuclear Information System (INIS)

    Loriaux, E.F.; Jehee, J.N.T.

    1995-01-01

    Report on CRP-OSS Task 4.1.1. ''Survey of existing documentation relevant to this programme's goals'' and report on CRP-OSS Task 4.1.2. ''Survey of existing Operator Support Systems and the experience with them'' are presented. 2 tabs

  7. India's Unfinished Telecom Tasks

    Indian Academy of Sciences (India)

    India's Unfinished Telecom Tasks · India's Telecom Story is now well known · Indian Operators become an enviable force · At the same time · India Amongst the Leaders · Unfinished Tasks as Operators · LightGSM ON: Innovation for Rural Area from Midas · Broadband Access Options for India · Broadband driven by DSL: ...

  8. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    Science.gov (United States)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
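
The core idea, letting stored per-pixel depth decide visibility instead of hand-painted masks, can be sketched as a z-buffer composite over (color, depth) layers. This is a minimal illustration, not the ZeDI plug-in itself:

```python
import numpy as np

def zdepth_composite(layers):
    """Composite (color, depth) layers per pixel: the layer with the smallest
    z value wins, so occlusion follows the stored depth data rather than
    manually authored masks.
    layers: list of (rgb HxWx3 float, depth HxW float) pairs; use np.inf in
    the depth map where a layer has no content."""
    colors = np.stack([c for c, _ in layers])   # (L, H, W, 3)
    depths = np.stack([d for _, d in layers])   # (L, H, W)
    nearest = depths.argmin(axis=0)             # winning layer index per pixel
    h, w = nearest.shape
    rows = np.arange(h)[:, None]
    cols = np.arange(w)[None, :]
    return colors[nearest, rows, cols]          # (H, W, 3)
```

A production tool would also handle partial coverage and transparency (as OpenEXR deep images do), but the nearest-sample rule above is the essence of depth-driven layering.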

  9. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

Full Text Available One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the processed aerial image obtained through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over all state-of-the-art references.
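The fusion-plus-ELM stage can be sketched numerically. The sketch below stands in for the paper's pipeline with random vectors in place of CNN features: the two feature types are fused by concatenation, and an extreme learning machine (random hidden layer, closed-form least-squares output weights) performs the classification. All array sizes and names are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for deep features from the two streams (RGB and saliency);
# in the paper these come from pretrained CNNs.
n_train, n_test, d = 200, 50, 64
X_rgb = rng.normal(size=(n_train + n_test, d))
X_sal = rng.normal(size=(n_train + n_test, d))
y = rng.integers(0, 3, size=n_train + n_test)        # 3 toy scene classes

# Fusion strategy: simple concatenation of the two feature types.
X = np.concatenate([X_rgb, X_sal], axis=1)

# Extreme learning machine: random hidden layer, least-squares output weights.
n_hidden = 256
W = rng.normal(size=(X.shape[1], n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W + b)                               # hidden activations
T = np.eye(3)[y]                                     # one-hot targets
beta, *_ = np.linalg.lstsq(H[:n_train], T[:n_train], rcond=None)

pred = H[n_train:] @ beta
acc = np.mean(pred.argmax(axis=1) == y[n_train:])
print(f"toy test accuracy: {acc:.2f}")  # random features, so near chance
```

The appeal of the ELM here is that training is a single linear solve rather than iterative backpropagation, which keeps the classification stage cheap once the deep features are extracted.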

  10. The development of brain systems associated with successful memory retrieval of scenes.

    Science.gov (United States)

    Ofen, Noa; Chai, Xiaoqian J; Schuil, Karen D I; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-07-18

Neuroanatomical and psychological evidence suggests prolonged maturation of declarative memory systems in the human brain from childhood into young adulthood. Here, we examine functional brain development during successful memory retrieval of scenes in children, adolescents, and young adults ages 8-21 via functional magnetic resonance imaging. Recognition memory improved with age, specifically for accurate identification of studied scenes (hits). Successful retrieval (correct old-new decisions for studied vs unstudied scenes) was associated with activations in frontal, parietal, and medial temporal lobe (MTL) regions. Activations associated with successful retrieval increased with age in left parietal cortex (BA7), bilateral prefrontal, and bilateral caudate regions. In contrast, activations associated with successful retrieval did not change with age in the MTL. Psychophysiological interaction analysis revealed that there were, however, age-related changes in differential connectivity for successful retrieval between MTL and prefrontal regions. These results suggest that neocortical regions related to attentional or strategic control show the greatest developmental changes for memory retrieval of scenes. Furthermore, these results suggest that functional interactions between MTL and prefrontal regions during memory retrieval also develop into young adulthood. The developmental increase of memory-related activations in frontal and parietal regions for retrieval of scenes and the absence of such an increase in MTL regions parallels what has been observed for memory encoding of scenes.

  11. Cybersickness in the presence of scene rotational movements along different axes.

    Science.gov (United States)

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  12. System analysis task group

    International Nuclear Information System (INIS)

    Anon.

    1981-01-01

    At this meeting, the main tasks of the study group were to discuss their task report with other task groups and to formulate the five-year research program, including next year's plans. A summary of the discussion with other task groups is presented. The general objective of the five-year program is to gather all elements necessary for a decision on the technical feasibility of the subseabed option. In addition, site selection criteria consistent with both radiological assessment and engineering capability will be produced. The task group report discussed radiological assessments, normal or base-case assessments, operational failures, low-probability postdisposal events, engineering studies, radiological criteria, legal aspects, social aspects, institutional aspects, generic comparison with other disposal options, and research priorities. The text of the report is presented along with supporting documents

  13. Task Description Language

    Science.gov (United States)

    Simmons, Reid; Apfelbaum, David

    2005-01-01

    Task Description Language (TDL) is an extension of the C++ programming language that enables programmers to quickly and easily write complex, concurrent computer programs for controlling real-time autonomous systems, including robots and spacecraft. TDL is based on earlier work (circa 1984 through 1989) on the Task Control Architecture (TCA). TDL provides syntactic support for hierarchical task-level control functions, including task decomposition, synchronization, execution monitoring, and exception handling. A Java-language-based compiler transforms TDL programs into pure C++ code that includes calls to a platform-independent task-control-management (TCM) library. TDL has been used to control and coordinate multiple heterogeneous robots in projects sponsored by NASA and the Defense Advanced Research Projects Agency (DARPA). It has also been used in Brazil to control an autonomous airship and in Canada to control a robotic manipulator.
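The task-level control constructs TDL adds to C++ can be illustrated, very loosely, with a small Python analogue. This is not TDL syntax and does not use the TCM library; it only mirrors two of the ideas the abstract names, hierarchical task decomposition and per-task exception handling, with hypothetical class and task names.

```python
class Task:
    """A node in a task tree: either a leaf behaviour or a goal that
    decomposes into subtasks. An illustrative analogue of task-level
    control, not TDL itself."""
    def __init__(self, name, action=None, children=None):
        self.name = name
        self.action = action            # leaf behaviour (callable) or None
        self.children = children or []  # sequential decomposition

    def run(self, log):
        try:
            if self.action:
                self.action()
            for child in self.children:  # execute subtasks in order
                child.run(log)
            log.append(f"done {self.name}")
        except RuntimeError as exc:      # per-task exception handling
            log.append(f"failed {self.name}: {exc}")

def lift():
    raise RuntimeError("overload")       # simulated hardware fault

log = []
mission = Task("pick-up", children=[Task("grasp", action=lambda: None),
                                    Task("lift", action=lift)])
mission.run(log)
print(log)  # ['done grasp', 'failed lift: overload', 'done pick-up']
```

Note that in this toy version a child's failure is absorbed at the child level, so the parent still completes; a real execution monitor would propagate or resignal the failure according to the task's exception-handling policy.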

  14. Energy Efficient Task Light

    DEFF Research Database (Denmark)

    Logadottir, Asta; Ardkapan, Siamak Rahimi; Johnsen, Kjeld

    2014-01-01

The objective of this work is to develop a task light for office lighting that fulfils the minimum requirements of the European standard EN 12464-1: Light and lighting – Lighting of work places, Part 1: Indoor workplaces, and the Danish standard DS 700: Lys og belysning i arbejdsrum, or more specifically the requirements that apply to the work area and the immediate surrounding area. By providing a task light that fulfils the requirements for task lighting and the immediate surrounding area, the general lighting only needs to provide the illuminance levels required for background lighting. … made lenses, capable of providing the desired light distribution. The user test shows that when working with general lighting of 100 lx in the room, the developed task light with its wide light distribution provides flexibility in choosing a reading task area on the desk and provides more visibility.

  15. The influence of action video game playing on eye movement behaviour during visual search in abstract, in-game and natural scenes.

    Science.gov (United States)

    Azizi, Elham; Abel, Larry A; Stainer, Matthew J

    2017-02-01

    Action game playing has been associated with several improvements in visual attention tasks. However, it is not clear how such changes might influence the way we overtly select information from our visual world (i.e. eye movements). We examined whether action-video-game training changed eye movement behaviour in a series of visual search tasks including conjunctive search (relatively abstracted from natural behaviour), game-related search, and more naturalistic scene search. Forty nongamers were trained in either an action first-person shooter game or a card game (control) for 10 hours. As a further control, we recorded eye movements of 20 experienced action gamers on the same tasks. The results did not show any change in duration of fixations or saccade amplitude either from before to after the training or between all nongamers (pretraining) and experienced action gamers. However, we observed a change in search strategy, reflected by a reduction in the vertical distribution of fixations for the game-related search task in the action-game-trained group. This might suggest learning the likely distribution of targets. In other words, game training only skilled participants to search game images for targets important to the game, with no indication of transfer to the more natural scene search. Taken together, these results suggest no modification in overt allocation of attention. Either the skills that can be trained with action gaming are not powerful enough to influence information selection through eye movements, or action-game-learned skills are not used when deciding where to move the eyes.

  16. From image statistics to scene gist: evoked neural activity reveals transition from low-level natural image structure to scene category

    NARCIS (Netherlands)

    Groen, I.I.A.; Ghebreab, S.; Prins, H.; Lamme, V.A.F.; Scholte, H.S.

    2013-01-01

    The visual system processes natural scenes in a split second. Part of this process is the extraction of "gist," a global first impression. It is unclear, however, how the human visual system computes this information. Here, we show that, when human observers categorize global information in

  17. The polymorphism of crime scene investigation: An exploratory analysis of the influence of crime and forensic intelligence on decisions made by crime scene examiners.

    Science.gov (United States)

    Resnikoff, Tatiana; Ribaux, Olivier; Baylon, Amélie; Jendly, Manon; Rossy, Quentin

    2015-12-01

A growing body of scientific literature recurrently indicates that crime and forensic intelligence influence how crime scene investigators make decisions in their practices. This study further scrutinises this intelligence-led crime scene examination view. It analyses results obtained from two questionnaires. Data have been collected from nine chiefs of Intelligence Units (IUs) and 73 Crime Scene Examiners (CSEs) working in forensic science units (FSUs) in the French-speaking part of Switzerland (six cantonal police agencies). Four salient elements emerged: (1) the actual existence of communication channels between IUs and FSUs across the police agencies under consideration; (2) most CSEs take into account the crime intelligence disseminated; (3) a differentiated, but significant, use by CSEs of this kind of intelligence in their daily practice; (4) a probable deep influence of this kind of intelligence on the most concerned CSEs, especially in the selection of the type of material/trace to detect, collect, analyse and exploit. These results contribute to deciphering the subtle dialectic articulating crime intelligence and crime scene investigation, and to expressing further the polymorphic role of CSEs, beyond their most recognised input to the justice system. Indeed, they appear to be central, but implicit, stakeholders in an intelligence-led style of policing. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  18. Simulating the directional, spectral and textural properties of a large-scale scene at high resolution using a MODIS BRDF product

    Science.gov (United States)

    Rengarajan, Rajagopalan; Goodenough, Adam A.; Schott, John R.

    2016-10-01

view angles show the expected variations in the reflectance due to the BRDF effects of the Harvard forest. The effectiveness of this technique in simulating real sensor data is evaluated by comparing the simulated data with Landsat 8 Operational Land Imager (OLI) data over the Harvard forest. Regions of interest were selected from the simulated and the real data for different targets and their Top-of-Atmosphere (TOA) radiances were compared. After adjusting for a scaling correction due to the difference in atmospheric conditions between the simulated and the real data, the TOA radiance is found to agree within 5% in the NIR band and 10% in the visible bands for forest targets under similar illumination conditions. The technique presented in this paper can be extended to other biomes (e.g. desert regions and agricultural regions) by using the appropriate geographic regions. Since the entire scene is constructed in a simulated environment, parameters such as BRDF or its effects can be analyzed for general or target-specific algorithm improvements. Also, the modeling and simulation techniques can be used as a baseline for the development and comparison of new sensor designs and to investigate the operational and environmental factors that affect sensor constellations such as the Sentinel and Landsat missions.

  19. Independent task Fourier filters

    Science.gov (United States)

    Caulfield, H. John

    2001-11-01

Since the early 1960s, a major part of optical computing systems has been Fourier pattern recognition, which takes advantage of high speed filter changes to enable powerful nonlinear discrimination in `real time.' Because each filter has a task quite independent of the tasks of the other filters, the filters can be applied and evaluated in parallel or, in a simple approach I describe, in sequence very rapidly. Thus I use the name ITFF (independent task Fourier filter). These filters can also break very complex discrimination tasks into easily handled parts, so the wonderful space invariance properties of Fourier filtering need not be sacrificed to achieve high discrimination and good generalizability even for ultracomplex discrimination problems. The training procedure proceeds sequentially, as the task for a given filter is defined a posteriori by declaring it to be the discrimination of particular members of set A from all members of set B with sufficient margin. That is, we set the threshold to achieve the desired margin and note the A members discriminated by that threshold. Discriminating those A members from all members of B becomes the task of that filter. Those A members are then removed from the set A, so no other filter will be asked to perform that already accomplished task.
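The core operation each filter in the bank performs, correlation of the input scene with a template realized optically in the Fourier plane, can be sketched digitally. The sketch and the name `fourier_correlate` are illustrative, not the author's optical implementation; the threshold step loosely mirrors the a-posteriori margin setting described above.

```python
import numpy as np

def fourier_correlate(scene, template):
    """Cross-correlate via the frequency domain, the operation a Fourier
    filter implements optically: IFFT(FFT(scene) * conj(FFT(template)))."""
    S = np.fft.fft2(scene)
    T = np.fft.fft2(template, s=scene.shape)   # zero-pad template
    return np.real(np.fft.ifft2(S * np.conj(T)))

rng = np.random.default_rng(1)
scene = rng.normal(0, 0.1, size=(64, 64))      # noisy background
target = np.ones((8, 8))
scene[20:28, 30:38] += target                  # embed target at (20, 30)

corr = fourier_correlate(scene, target)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(corr), corr.shape))
print(peak)                                    # (20, 30): peak at the target
# An ITFF bank would apply many such filters, each with its own
# a-posteriori threshold deciding which set-A members it claims.
threshold = 0.9 * corr.max()
detections = np.argwhere(corr > threshold)
```

Because the correlation is computed for every shift at once, the space-invariance property the author highlights comes for free: one filter pass locates the target wherever it sits in the scene.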

  20. Task-baseret kommunikativ sprogundervisning

    DEFF Research Database (Denmark)

    Pedersen, Michael Svendsen

    2015-01-01

Definition of task-based language teaching and criteria for tasks; research in Second Language Acquisition using tasks; the design of task-based communicative teaching; and limitations of, and perspectives for, the further development of task-based language teaching.

  1. PM 3655 PHILIPS Logic analyzer

    CERN Multimedia

    A logic analyzer is an electronic instrument that captures and displays multiple signals from a digital system or digital circuit. A logic analyzer may convert the captured data into timing diagrams, protocol decodes, state machine traces, assembly language, or may correlate assembly with source-level software. Logic Analyzers have advanced triggering capabilities, and are useful when a user needs to see the timing relationships between many signals in a digital system.

  2. Digital Multi Channel Analyzer Enhancement

    International Nuclear Information System (INIS)

    Gonen, E.; Marcus, E.; Wengrowicz, U.; Beck, A.; Nir, J.; Sheinfeld, M.; Broide, A.; Tirosh, D.

    2002-01-01

A cement analyzing system based on radiation spectroscopy had been developed [1], using a novel digital approach for a real-time, high-throughput and low-cost Multi Channel Analyzer. The performance of the developed system had a severe problem: the resulting spectrum suffered from a lack of smoothness; it was very noisy and full of spikes and surges, and it was therefore impossible to use this spectrum for analyzing the cement substance. This paper describes the work carried out to improve the system performance

  3. Board Task Performance

    DEFF Research Database (Denmark)

    Minichilli, Alessandro; Zattoni, Alessandro; Nielsen, Sabina

    2012-01-01

We identify three board processes as micro-level determinants of board effectiveness. Specifically, we focus on effort norms, cognitive conflicts and the use of knowledge and skills as determinants of board control and advisory task performance. Further, we consider how two different institutional settings influence board tasks, and how the context moderates the relationship between processes and tasks. Our hypotheses are tested on a survey-based dataset of 535 medium-sized and large industrial firms in Italy and Norway, which are considered to substantially differ along legal and cultural dimensions.

  4. Remote Laser Diffraction Particle Size Distribution Analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Batcheller, Thomas Aquinas; Huestis, Gary Michael; Bolton, Steven Michael

    2001-03-01

In support of a radioactive slurry sampling and physical characterization task, an “off-the-shelf” laser diffraction (classical light scattering) particle size analyzer was utilized for remote particle size distribution (PSD) analysis. Spent nuclear fuel was previously reprocessed at the Idaho Nuclear Technology and Engineering Center (INTEC—formerly recognized as the Idaho Chemical Processing Plant) which is on DOE’s INEEL site. The acidic, radioactive aqueous raffinate streams from these processes were transferred to 300,000 gallon stainless steel storage vessels located in the INTEC Tank Farm area. Due to the transfer piping configuration in these vessels, complete removal of the liquid cannot be achieved. Consequently, a “heel” slurry remains at the bottom of an “emptied” vessel. Particle size distribution characterization of the settled solids in this remaining heel slurry, as well as suspended solids in the tank liquid, is the goal of this remote PSD analyzer task. A Horiba Instruments Inc. Model LA-300 PSD analyzer, which has a 0.1 to 600 micron measurement range, was modified for remote application in a “hot cell” (gamma radiation) environment. This technology provides rapid and simple PSD analysis, especially down in the fine and microscopic particle size regime. Particle size analysis of these radioactive slurries down in this smaller range was not previously achievable—making this technology far superior to the traditional methods used. Successful acquisition of this data, in conjunction with other characterization analyses, provides important information that can be used in the myriad of potential radioactive waste management alternatives.

  5. Utilizing Electroencephalography Measurements for Comparison of Task-Specific Neural Efficiencies: Spatial Intelligence Tasks.

    Science.gov (United States)

    Call, Benjamin J; Goodridge, Wade; Villanueva, Idalis; Wan, Nicholas; Jordan, Kerry

    2016-08-09

    Spatial intelligence is often linked to success in engineering education and engineering professions. The use of electroencephalography enables comparative calculation of individuals' neural efficiency as they perform successive tasks requiring spatial ability to derive solutions. Neural efficiency here is defined as having less beta activation, and therefore expending fewer neural resources, to perform a task in comparison to other groups or other tasks. For inter-task comparisons of tasks with similar durations, these measurements may enable a comparison of task type difficulty. For intra-participant and inter-participant comparisons, these measurements provide potential insight into the participant's level of spatial ability and different engineering problem solving tasks. Performance on the selected tasks can be analyzed and correlated with beta activities. This work presents a detailed research protocol studying the neural efficiency of students engaged in the solving of typical spatial ability and Statics problems. Students completed problems specific to the Mental Cutting Test (MCT), Purdue Spatial Visualization test of Rotations (PSVT:R), and Statics. While engaged in solving these problems, participants' brain waves were measured with EEG allowing data to be collected regarding alpha and beta brain wave activation and use. The work looks to correlate functional performance on pure spatial tasks with spatially intensive engineering tasks to identify the pathways to successful performance in engineering and the resulting improvements in engineering education that may follow.
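The beta-activation measure underlying this notion of neural efficiency reduces, in its simplest digital form, to band-limited spectral power. The sketch below computes relative beta power for a synthetic signal via a plain FFT periodogram; real EEG pipelines typically use Welch averaging and artifact rejection, and every parameter and name here is an illustrative assumption, not this protocol's configuration.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean spectral power of `signal` within [lo, hi) Hz via an FFT
    periodogram, a simplified stand-in for the Welch estimates common
    in EEG analysis."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= lo) & (freqs < hi)
    return psd[mask].mean()

fs = 256                                  # Hz, a common EEG sampling rate
t = np.arange(0, 4, 1 / fs)               # 4 s epoch
rng = np.random.default_rng(2)
# Synthetic channel: alpha (10 Hz) plus weaker beta (20 Hz) plus noise.
eeg = (np.sin(2 * np.pi * 10 * t)
       + 0.4 * np.sin(2 * np.pi * 20 * t)
       + 0.1 * rng.normal(size=t.size))

alpha = band_power(eeg, fs, 8, 13)
beta = band_power(eeg, fs, 13, 30)
print(beta / alpha)   # < 1 here: less beta activation relative to alpha
```

Comparing such beta-power estimates across tasks of similar duration, or across participants on the same task, is the kind of contrast the protocol above uses to rank task difficulty and individual efficiency.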

  6. Review of On-Scene Management of Mass-Casualty Attacks

    Directory of Open Access Journals (Sweden)

    Annelie Holgersson

    2016-02-01

Full Text Available Background: The scene of a mass-casualty attack (MCA) entails a crime scene, a hazardous space, and a great number of people needing medical assistance. Public transportation has been the target of such attacks and involves a high probability of generating mass casualties. The review aimed to investigate challenges for on-scene responses to MCAs and suggestions made to counter these challenges, with special attention given to attacks on public transportation and associated terminals. Methods: Articles were found through PubMed and Scopus, “relevant articles” as defined by the databases, and a manual search of references. Inclusion criteria were that the article referred to attack(s) and/or a public transportation-related incident and issues concerning formal on-scene response. An appraisal of the articles’ scientific quality was conducted based on an evidence hierarchy model developed for the study. Results: One hundred and five articles were reviewed. Challenges for command and coordination on scene included establishing leadership, inter-agency collaboration, multiple incident sites, and logistics. Safety issues entailed knowledge and use of personal protective equipment, risk awareness and expectations, cordons, dynamic risk assessment, defensive versus offensive approaches, and joining forces. Communication concerns were equipment shortfalls, dialoguing, and providing information. Assessment problems were scene layout and interpreting environmental indicators as well as understanding setting-driven needs for specialist skills and resources. Triage and treatment difficulties included differing triage systems, directing casualties, uncommon injuries, field hospitals, level of care, and providing psychological and pediatric care. Transportation hardships included scene access, distance to hospitals, and distribution of casualties. Conclusion: Commonly encountered challenges during unintentional incidents were added to during MCAs, implying

  7. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
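The LBP encoding that TEX-Nets feed to the CNN stream can be sketched for the basic 3x3 case: each interior pixel receives an 8-bit code with one bit per neighbour at or above the centre value. This shows only the encoding step, not the specific mapped-coding variant or the network architecture used in the paper; the function name is hypothetical.

```python
import numpy as np

def lbp_8neighbor(img):
    """Basic 3x3 Local Binary Pattern codes for a grayscale image.
    Each interior pixel gets an 8-bit code: one bit per neighbour whose
    value is >= the centre value."""
    c = img[1:-1, 1:-1]                               # centre pixels
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]      # clockwise ring
    code = np.zeros_like(c, dtype=np.uint8)
    for bit, (dy, dx) in enumerate(offsets):
        neigh = img[1 + dy: img.shape[0] - 1 + dy,
                    1 + dx: img.shape[1] - 1 + dx]    # shifted neighbours
        code |= ((neigh >= c).astype(np.uint8) << bit)
    return code

img = np.array([[5, 5, 5],
                [5, 4, 5],
                [5, 5, 5]], dtype=float)
print(lbp_8neighbor(img))   # [[255]]: every neighbour >= the centre
```

A map of such codes is itself an image, which is what makes the approach compatible with a standard CNN input pipeline alongside the RGB stream.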

  8. Sonification of in-vehicle interface reduces gaze movements under dual-task condition.

    Science.gov (United States)

    Tardieu, Julien; Misdariis, Nicolas; Langlois, Sabine; Gaillard, Pascal; Lemercier, Céline

    2015-09-01

    In-car infotainment systems (ICIS) often degrade driving performances since they divert the driver's gaze from the driving scene. Sonification of hierarchical menus (such as those found in most ICIS) is examined in this paper as one possible solution to reduce gaze movements towards the visual display. In a dual-task experiment in the laboratory, 46 participants were requested to prioritize a primary task (a continuous target detection task) and to simultaneously navigate in a realistic mock-up of an ICIS, either sonified or not. Results indicated that sonification significantly increased the time spent looking at the primary task, and significantly decreased the number and the duration of gaze saccades towards the ICIS. In other words, the sonified ICIS could be used nearly exclusively by ear. On the other hand, the reaction times in the primary task were increased in both silent and sonified conditions. This study suggests that sonification of secondary tasks while driving could improve the driver's visual attention of the driving scene. Copyright © 2015 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  9. Task-Driven Computing

    National Research Council Canada - National Science Library

    Wang, Zhenyu

    2000-01-01

    .... They will want to use the resources to perform computing tasks. Today's computing infrastructure does not support this model of computing very well because computers interact with users in terms of low level abstractions...

  10. Organizing Core Tasks

    DEFF Research Database (Denmark)

    Boll, Karen

Civil servants conduct the work which makes welfare states function on an everyday basis: policemen police, school teachers teach, and tax inspectors inspect. The focus in this paper is on the core tasks of tax inspectors. The paper argues that their core task of securing the collection of revenue has remained much the same within the last 10 years. However, how the core task has been organized has changed considerably under the influence of various “organizing devices”. The paper focusses on how organizing devices such as risk assessment, output-focus, effect orientation, and treatment projects influence the organization of core tasks within the tax administration. The paper shows that the organizational transformations based on the use of these devices have had consequences both for the overall collection of revenue and for the employees’ feeling of “making a difference”. All in all…

  11. Multichannel analyzer type CMA-3

    International Nuclear Information System (INIS)

    Czermak, A.; Jablonski, J.; Ostrowicz, A.

    1978-01-01

    Multichannel analyzer CMA-3 is designed for two-parametric analysis with operator controlled logical windows. It is implemented in CAMAC standard. A single crate contains all required modules and is controlled by the PDP-11/10 minicomputer. Configuration of CMA-3 is shown. CMA-3 is the next version of the multichannel analyzer described in report No 958/E-8. (author)

  12. Comparison of fiber length analyzers

    Science.gov (United States)

    Don Guay; Nancy Ross Sutherland; Walter Rantanen; Nicole Malandri; Aimee Stephens; Kathleen Mattingly; Matt Schneider

    2005-01-01

In recent years, several new fiber length analyzers have been developed and brought to market. The new instruments provide faster measurements and the capability of both laboratory and on-line analysis. Do the various fiber analyzers provide the same length, coarseness, width, and fines measurements for a given fiber sample? This paper provides a comparison of...

  13. Analysis of body fluids for forensic purposes: from laboratory testing to non-destructive rapid confirmatory identification at a crime scene.

    Science.gov (United States)

    Virkler, Kelly; Lednev, Igor K

    2009-07-01

    Body fluid traces recovered at crime scenes are among the most important types of evidence to forensic investigators. They contain valuable DNA evidence which can identify a suspect or victim as well as exonerate an innocent individual. The first step of identifying a particular body fluid is highly important since the nature of the fluid is itself very informative to the investigation, and the destructive nature of a screening test must be considered when only a small amount of material is available. The ability to characterize an unknown stain at the scene of the crime without having to wait for results from a laboratory is another very critical step in the development of forensic body fluid analysis. Driven by the importance for forensic applications, body fluid identification methods have been extensively developed in recent years. The systematic analysis of these new developments is vital for forensic investigators to be continuously educated on possible superior techniques. Significant advances in laser technology and the development of novel light detectors have dramatically improved spectroscopic methods for molecular characterization over the last decade. The application of this novel biospectroscopy for forensic purposes opens new and exciting opportunities for the development of on-field, non-destructive, confirmatory methods for body fluid identification at a crime scene. In addition, the biospectroscopy methods are universally applicable to all body fluids unlike the majority of current techniques which are valid for individual fluids only. This article analyzes the current methods being used to identify body fluid stains including blood, semen, saliva, vaginal fluid, urine, and sweat, and also focuses on new techniques that have been developed in the last 5-6 years. In addition, the potential of new biospectroscopic techniques based on Raman and fluorescence spectroscopy is evaluated for rapid, confirmatory, non-destructive identification of a body

  14. Emotion has no impact on attention in a change detection flicker task

    Directory of Open Access Journals (Sweden)

    Robert Colin Alan Bendall

    2015-10-01

    Full Text Available Past research provides conflicting findings regarding the influence of emotion on visual attention. Early studies suggested a broadening of attentional resources in relation to positive mood. However, more recent evidence indicates that positive emotions may not have a beneficial impact on attention, and that the relationship between emotion and attention may be moderated by factors such as task demand or stimulus valence. The current study explored the effect of emotion on attention using the change detection flicker paradigm. Participants were induced into positive, neutral, and negative mood states and then completed a change detection task. A series of neutral scenes were presented and participants had to identify the location of a disappearing item in each scene. The change was made to the centre or the periphery of each scene, and it was predicted that peripheral changes would be detected more quickly in the positive mood condition and more slowly in the negative mood condition, compared to the neutral condition. In contrast to previous findings, emotion had no influence on attention, and whilst central changes were detected faster than peripheral changes, change blindness was not affected by mood. The findings suggest that the relationship between emotion and visual attention is influenced by the characteristics of a task, and any beneficial impact of positive emotion may be related to processing style rather than a broadening of attentional resources.

  15. Good for Her: empowerment scenes in feminist pornography

    Directory of Open Access Journals (Sweden)

    Fernanda Capibaribe Leite

    2012-07-01

    Full Text Available This article discusses the notion of women’s empowerment through the audiovisual products covered by the Feminist Porn Award. The intention is to analyze in what sense an initiative that stimulates a pornography production dislocated from the phallocentric male gaze towards the affirmation of female sexuality and pleasure promotes breaks in the logics of pornography production and consumption, and triggers processes of autonomy for women in a broader perspective. To sustain this discussion, a triad is related, composed of: (a) narratives and the processes of subjectivity linked to them; (b) the construction of discourses focused on women as social minorities; and (c) analyses approaching filmic modes of address and their associated events.

  16. A potential spatial working memory training task to improve both episodic memory and fluid intelligence.

    Science.gov (United States)

    Rudebeck, Sarah R; Bor, Daniel; Ormond, Angharad; O'Reilly, Jill X; Lee, Andy C H

    2012-01-01

    One current challenge in cognitive training is to create a training regime that benefits multiple cognitive domains, including episodic memory, without relying on a large battery of tasks, which can be time-consuming and difficult to learn. By giving careful consideration to the neural correlates underlying episodic and working memory, we devised a computerized working memory training task in which neurologically healthy participants were required to monitor and detect repetitions in two streams of spatial information (spatial location and scene identity) presented simultaneously (i.e. a dual n-back paradigm). Participants' episodic memory abilities were assessed before and after training using two object and scene recognition memory tasks incorporating memory confidence judgments. Furthermore, to determine the generalizability of the effects of training, we also assessed fluid intelligence using a matrix reasoning task. By examining the difference between pre- and post-training performance (i.e. gain scores), we found that the trainers, compared to non-trainers, exhibited a significant improvement in fluid intelligence after 20 days. Interestingly, pre-training fluid intelligence performance, but not training task improvement, was a significant predictor of post-training fluid intelligence improvement, with lower pre-training fluid intelligence associated with greater post-training gain. Crucially, trainers who improved the most on the training task also showed an improvement in recognition memory as captured by d-prime scores and estimates of recollection and familiarity memory. Training task improvement was a significant predictor of gains in recognition and familiarity memory performance, with greater training improvement leading to more marked gains. In contrast, lower pre-training recollection memory scores, and not training task improvement, led to greater recollection memory performance after training. Our findings demonstrate that practice on a single
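The dual n-back repetition-detection logic at the heart of the training task described above can be sketched in a few lines. The following toy Python is purely illustrative: the stream values, function name, and parameters are invented, not taken from the study's materials.

```python
# Minimal sketch of a dual n-back trial check: the participant monitors two
# simultaneous streams (spatial location and scene identity) and must detect
# when the current item in either stream matches the item n trials back.
# All names and values here are illustrative.

def nback_targets(locations, scenes, n=2):
    """Return, per trial, whether each stream repeats its item from n back."""
    targets = []
    for i in range(len(locations)):
        loc_match = i >= n and locations[i] == locations[i - n]
        scene_match = i >= n and scenes[i] == scenes[i - n]
        targets.append((loc_match, scene_match))
    return targets

# Example: location and scene-identity streams over six trials (n = 2).
locs = ["A", "B", "A", "C", "A", "C"]
scns = ["bed", "dresser", "sofa", "dresser", "sofa", "lamp"]
print(nback_targets(locs, scns))
```

Trial 5 (index 4) is a dual target here: both the location and the scene identity repeat from two trials back, which is the condition participants had to detect in both streams at once.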

  17. A potential spatial working memory training task to improve both episodic memory and fluid intelligence.

    Directory of Open Access Journals (Sweden)

    Sarah R Rudebeck

    Full Text Available One current challenge in cognitive training is to create a training regime that benefits multiple cognitive domains, including episodic memory, without relying on a large battery of tasks, which can be time-consuming and difficult to learn. By giving careful consideration to the neural correlates underlying episodic and working memory, we devised a computerized working memory training task in which neurologically healthy participants were required to monitor and detect repetitions in two streams of spatial information (spatial location and scene identity) presented simultaneously (i.e. a dual n-back paradigm). Participants' episodic memory abilities were assessed before and after training using two object and scene recognition memory tasks incorporating memory confidence judgments. Furthermore, to determine the generalizability of the effects of training, we also assessed fluid intelligence using a matrix reasoning task. By examining the difference between pre- and post-training performance (i.e. gain scores), we found that the trainers, compared to non-trainers, exhibited a significant improvement in fluid intelligence after 20 days. Interestingly, pre-training fluid intelligence performance, but not training task improvement, was a significant predictor of post-training fluid intelligence improvement, with lower pre-training fluid intelligence associated with greater post-training gain. Crucially, trainers who improved the most on the training task also showed an improvement in recognition memory as captured by d-prime scores and estimates of recollection and familiarity memory. Training task improvement was a significant predictor of gains in recognition and familiarity memory performance, with greater training improvement leading to more marked gains. In contrast, lower pre-training recollection memory scores, and not training task improvement, led to greater recollection memory performance after training. Our findings demonstrate that practice

  18. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    Directory of Open Access Journals (Sweden)

    Mengyun Liu

    2017-12-01

    Full Text Available After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web
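The fingerprint-based particle weight update described above can be sketched as a toy example: each particle's weight is scaled by how well the fingerprint predicted at its position matches the measured WiFi signal strength and magnetic field magnitude. The fingerprint map, noise scale, and data structures below are invented for illustration, not taken from the paper.

```python
import math

# Toy particle-filter weight update for fingerprint localization: each
# particle carries a position (here a map cell) and a weight; the weight is
# multiplied by a Gaussian likelihood of the measured (WiFi RSS, |B|)
# fingerprint given the fingerprint predicted at that cell, then normalized.

def update_weights(particles, measured, fingerprint_map, sigma=4.0):
    """Reweight particles by Gaussian likelihood of the measured fingerprint."""
    for p in particles:
        predicted = fingerprint_map[p["cell"]]
        err = sum((m - q) ** 2 for m, q in zip(measured, predicted))
        p["w"] *= math.exp(-err / (2 * sigma ** 2))
    total = sum(p["w"] for p in particles) or 1e-12  # avoid division by zero
    for p in particles:
        p["w"] /= total
    return particles

# Invented fingerprint map: cell -> (WiFi RSS in dBm, magnetic magnitude in uT).
fingerprint_map = {0: (-50.0, 46.0), 1: (-70.0, 52.0)}
particles = [{"cell": 0, "w": 0.5}, {"cell": 1, "w": 0.5}]
update_weights(particles, measured=(-52.0, 47.0), fingerprint_map=fingerprint_map)
# The particle in cell 0 now dominates, since its fingerprint is much closer
# to the measurement.
```

In the full system this update would run each step after a motion model moves the particles, with resampling once the weights degenerate.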

  19. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in
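A basic building block behind the structural error measurements discussed above is the reprojection error: project a triangulated 3D point through a camera's projection matrix and compare against the observed 2D feature location. The sketch below is a generic illustration of that computation, not the dissertation's actual code; the camera parameters and point are invented.

```python
import math

# Reprojection error for multi-view reconstruction diagnostics: project a 3D
# scene point through a 3x4 projection matrix P and measure the pixel
# distance to the observed feature location. All numeric values are invented.

def project(P, X):
    """Project 3D point X through 3x4 matrix P (rows as tuples); return (u, v)."""
    Xh = (X[0], X[1], X[2], 1.0)                       # homogeneous 3D point
    u, v, w = (sum(P[r][c] * Xh[c] for c in range(4)) for r in range(3))
    return (u / w, v / w)                              # perspective divide

def reprojection_error(P, X, x_obs):
    u, v = project(P, X)
    return math.hypot(u - x_obs[0], v - x_obs[1])      # pixel distance

# Pinhole camera at the origin: focal length 800 px, principal point (320, 240).
P = [(800.0, 0.0, 320.0, 0.0),
     (0.0, 800.0, 240.0, 0.0),
     (0.0, 0.0, 1.0, 0.0)]
X = (0.1, -0.05, 2.0)                 # invented 3D scene point
x_true = project(P, X)                # ideal, noise-free observation
noisy = (x_true[0] + 3.0, x_true[1] - 4.0)
print(reprojection_error(P, X, noisy))  # 5.0 px for a (3, -4) px offset
```

Large residuals of this kind are what must then be factorized into feature-matching error versus camera-parameter error, since both inflate the same measurement.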

  20. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach.

    Science.gov (United States)

    Liu, Mengyun; Chen, Ruizhi; Li, Deren; Chen, Yujin; Guo, Guangyi; Cao, Zhipeng; Pan, Yuanjin

    2017-12-08

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this phenomenon are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, which is inspired by the visual cognition ability of the human brain and the progress in the computer vision field regarding high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone including cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to "see" which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning that has been proven to be highly effective in the computer vision community. For particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system, offline training and online localization. In the offline stage, an indoor scene model is trained by Caffe (one of the most popular open source frameworks for deep learning) and a fingerprint database is constructed by user trajectories in different scenes. To reduce the volume requirement of training data for deep learning, a fine-tuned method is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web server is developed for

  1. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including the parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey studies and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  2. The perception of naturalness correlates with low-level visual features of environmental scenes.

    Directory of Open Access Journals (Sweden)

    Marc G Berman

    Full Text Available Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
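The prediction step described above, mapping low-level visual features to a naturalness judgment, can be sketched with a simple linear classifier. Everything below is illustrative: the feature values, weights, and direction of effects are invented, and the study used its own feature extraction and machine-learning algorithm.

```python
import math

# Illustrative sketch: summarize each scene by low-level features (density of
# contrast changes, density of straight lines, mean color saturation, hue
# diversity) and score them with a learned linear model. Weights and feature
# values are invented stand-ins for the study's trained classifier.

def predict_natural(features, weights, bias):
    """Logistic score: probability the scene is perceived as natural."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Feature order: (edge_density, straight_line_density, saturation, hue_diversity).
# Hypothetical direction of effects: more straight lines -> less natural;
# higher saturation and hue diversity -> more natural.
weights = (-0.5, -3.0, 2.0, 2.5)
bias = 0.2

forest = (0.4, 0.05, 0.7, 0.8)   # invented feature vector for a forest photo
street = (0.6, 0.9, 0.3, 0.3)    # invented feature vector for a street photo
print(predict_natural(forest, weights, bias) > 0.5)  # True
print(predict_natural(street, weights, bias) > 0.5)  # False
```

In the actual study the weights would be fitted to the rated image set, which is how the reported 81% prediction accuracy was obtained.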

  3. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    Science.gov (United States)

    Falomir, Zoe; Kluth, Thomas

    2017-06-24

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these qualitative descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey test was carried out with human participants who openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed, and taking this into account, the QSn3D computational approach was developed which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.

  4. Real-time mid-wavelength infrared scene rendering with a feasible BRDF model

    Science.gov (United States)

    Wu, Xin; Zhang, Jianqi; Chen, Yang; Huang, Xi

    2015-01-01

    Practically modeling and rendering the surface-leaving radiance of large-scale scenes in mid-wavelength infrared (MWIR) is an important feature of Battlefield Environment Simulation (BES). Since radiation transfer in realistic scenes is complex, it is difficult to develop real-time simulations directly from first principles. Nevertheless, it is crucial to minimize distortions in the rendering of virtual scenes. This paper proposes a feasible bidirectional reflectance distribution function (BRDF) model to deal with a large-scale scene in the MWIR band. Our BRDF model is spectrally dependent, evolved from previous BRDFs, and meets both Helmholtz reciprocity and energy conservation. We employ our BRDF model to calculate the direct solar and sky contributions. Both of them are added to the surface thermal emission in order to give the surface-leaving radiance. Atmospheric path radiance and transmission are pre-calculated to speed up the rendering of large-scale scenes. Quantitative and qualitative comparisons with MWIR field data are made to assess the rendering results of our proposed method.
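The radiance budget described above (surface-leaving radiance as thermal emission plus BRDF-reflected direct solar and diffuse sky contributions) can be sketched as follows. A Lambertian BRDF (reflectance / pi) is used here as a stand-in for the paper's model, and all numeric inputs are invented for illustration.

```python
import math

# Sketch of the MWIR surface-leaving radiance budget: emission + reflected
# direct solar + reflected sky. A Lambertian BRDF stands in for the paper's
# model; it trivially satisfies Helmholtz reciprocity, and taking
# reflectance = 1 - emissivity enforces energy conservation for an opaque
# surface. All inputs are invented.

def planck_radiance(wavelength_um, temp_k):
    """Blackbody spectral radiance, W / (m^2 sr um)."""
    lam = wavelength_um * 1e-6
    c1 = 1.191042e-16   # first radiation constant 2hc^2, W m^2 / sr
    c2 = 1.4387769e-2   # second radiation constant hc/k, m K
    return (c1 / lam ** 5) / (math.exp(c2 / (lam * temp_k)) - 1.0) * 1e-6

def surface_leaving_radiance(emissivity, temp_k, wavelength_um,
                             solar_irradiance, cos_sun, sky_radiance):
    rho = 1.0 - emissivity               # opaque surface: reflectance + emissivity = 1
    brdf = rho / math.pi                 # Lambertian stand-in
    emitted = emissivity * planck_radiance(wavelength_um, temp_k)
    solar = brdf * solar_irradiance * cos_sun   # reflected direct solar term
    sky = rho * sky_radiance                    # reflected diffuse sky term
    return emitted + solar + sky

# A 300 K surface at 4 um: thermal emission dominates in the MWIR band.
L = surface_leaving_radiance(emissivity=0.9, temp_k=300.0, wavelength_um=4.0,
                             solar_irradiance=10.0, cos_sun=0.8, sky_radiance=0.2)
```

In the paper's pipeline, atmospheric path radiance and transmission would then be applied to this surface-leaving value, using the pre-calculated tables mentioned above.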

  5. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    Full Text Available In recent years, the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the other methods according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  6. Nuclear fuel microsphere gamma analyzer

    International Nuclear Information System (INIS)

    Valentine, K.H.; Long, E.L. Jr.; Willey, M.G.

    1977-01-01

    A gamma analyzer system is provided for the analysis of nuclear fuel microspheres and other radioactive particles. The system consists of an analysis turntable with means for loading, in sequence, a plurality of stations within the turntable; a gamma ray detector for determining the spectrum of a sample in one section; means for analyzing the spectrum; and a receiver turntable to collect the analyzed material in stations according to the spectrum analysis. Accordingly, particles may be sorted according to their quality; e.g., fuel particles with fractured coatings may be separated from those that are not fractured, or according to other properties. 4 claims, 3 figures
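The sort-by-spectrum routing that the analyzer performs can be sketched abstractly: reduce each particle's measured gamma spectrum to a score and route the particle to a receiver station accordingly. The scoring rule, thresholds, and data below are invented for illustration; the patent record does not specify them.

```python
# Toy sketch of spectrum-based particle sorting: a particle's gamma spectrum
# (counts per energy channel in some diagnostic window) is reduced to a
# score, and the particle is routed to a receiver-turntable station based on
# score thresholds. All values and the scoring rule are invented.

def route_particle(spectrum, thresholds):
    """Return the receiver-station index for a particle's spectrum."""
    # Stand-in metric: total counts in the diagnostic window, e.g. activity
    # that escapes through a fractured coating.
    score = sum(spectrum)
    for station, limit in enumerate(thresholds):
        if score <= limit:
            return station
    return len(thresholds)  # overflow station for out-of-range particles

intact = [3, 5, 2, 1]          # low counts: coating presumed intact
fractured = [40, 60, 55, 30]   # high counts: coating presumed fractured
thresholds = [20, 100]         # station 0: <= 20, station 1: <= 100, else 2
print(route_particle(intact, thresholds), route_particle(fractured, thresholds))
```

The real system would derive its score from full spectrum analysis rather than a raw count sum, but the station-routing structure is the same.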

  7. Death Scene Investigation and Autopsy Practices in Sudden Unexpected Infant Deaths

    Science.gov (United States)

    Erck Lambert, Alexa B.; Parks, Sharyn E.; Camperlengo, Lena; Cottengim, Carri; Anderson, Rebecca L.; Covington, Theresa M.; Shapiro-Mendoza, Carrie K.

    2016-01-01

    Objective To describe and compare sudden unexpected infant death (SUID) investigations among states participating in the SUID Case Registry from 2010 through 2012. Study design We analyzed observational data from 770 SUID cases identified and entered into the National Child Death Review Case Reporting System. We examined data on autopsy and death scene investigation (DSI) components, including key information about the infant sleep environment. We calculated the percentage of components that were complete, incomplete, and missing/unknown. Results Most cases (98%) had a DSI. The DSI components most frequently reported as done were the narrative description of the circumstances (90%; range, 85%–99%), and witness interviews (88%, range, 85%–98%). Critical information about 10 infant sleep environment components was available for 85% of cases for all states combined. All 770 cases had an autopsy performed. The autopsy components most frequently reported as done were histology, microbiology, and other pathology (98%; range, 94%–100%) and toxicology (97%; range, 94%–100%). Conclusions This study serves as a baseline to understand the scope of infant death investigations in selected states. Standardized and comprehensive DSI and autopsy practices across jurisdictions and states may increase knowledge about SUID etiology and also lead to an improved understanding of the cause-specific SUID risk and protective factors. Additionally, these results demonstrate practices in the field showing what is feasible in these select states. We encourage pediatricians, forensic pathologists, and other medicolegal experts to use these findings to inform system changes and improvements in DSI and autopsy practices and SUID prevention efforts. PMID:27113380

  8. Estimating trace deposition time with circadian biomarkers: a prospective and versatile tool for crime scene reconstruction

    Science.gov (United States)

    Ackermann, Katrin; Ballantyne, Kaye N.

    2010-01-01

    Linking biological samples found at a crime scene with the actual crime event represents the most important aspect of forensic investigation, together with the identification of the sample donor. While DNA profiling is well established for donor identification, no reliable methods exist for timing forensic samples. Here, we provide for the first time a biochemical approach for determining deposition time of human traces. Using commercial enzyme-linked immunosorbent assays we showed that the characteristic 24-h profiles of two circadian hormones, melatonin (concentration peak at late night) and cortisol (peak in the morning) can be reproduced from small samples of whole blood and saliva. We further demonstrated by analyzing small stains dried and stored up to 4 weeks the in vitro stability of melatonin, whereas for cortisol a statistically significant decay with storage time was observed, although the hormone was still reliably detectable in 4-week-old samples. Finally, we showed that the total protein concentration, also assessed using a commercial assay, can be used for normalization of hormone signals in blood, but less so in saliva. Our data thus demonstrate that estimating normalized concentrations of melatonin and cortisol represents a prospective approach for determining deposition time of biological trace samples, at least from blood, with promising expectations for forensic applications. In the broader context, our study opens up a new field of circadian biomarkers for deposition timing of forensic traces; future studies using other circadian biomarkers may reveal if the time range offered by the two hormones studied here can be specified more exactly. Electronic supplementary material The online version of this article (doi:10.1007/s00414-010-0457-1) contains supplementary material, which is available to authorized users. PMID:20419380
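The normalization step described above amounts to expressing each hormone measurement per unit of total protein, so that stains of different sizes or dilutions can be compared on a common scale. The sketch below is a toy illustration with invented assay values.

```python
# Toy sketch of protein normalization of circadian hormone signals: divide
# the measured hormone concentration by the sample's total protein
# concentration. All numeric values are invented for illustration.

def normalized_level(hormone_pg_per_ml, total_protein_mg_per_ml):
    """Hormone level expressed per mg of total protein."""
    if total_protein_mg_per_ml <= 0:
        raise ValueError("total protein must be positive")
    return hormone_pg_per_ml / total_protein_mg_per_ml

# Two blood stains of different size: raw melatonin readings differ fourfold,
# but the protein-normalized values are directly comparable.
small_stain = normalized_level(hormone_pg_per_ml=30.0, total_protein_mg_per_ml=1.5)
large_stain = normalized_level(hormone_pg_per_ml=120.0, total_protein_mg_per_ml=6.0)
print(small_stain, large_stain)  # 20.0 20.0
```

Deposition-time estimation would then compare such normalized melatonin and cortisol values against their known 24-h reference profiles.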

  9. The formation of music-scenes in Manchester and their relation to urban space and the image of the city

    DEFF Research Database (Denmark)

    Nielsen, Tom

    2013-01-01

    The paper I would like to present derives from a study of the relation between the atmospheric qualities of a city and the formation of music scenes. I have studied Manchester, which is a known example of a music city, with its heyday from the late 1970s with post-punk into the 1990s with Madchester and brit-pop. The post-punk scene, with Joy Division as its primary exponent, was very much embedded in the specific atmosphere and physical structure of certain parts of Manchester, from which it took inspiration. Later on, other scenes developed on the basis of the infrastructure (record companies, clubs, rehearsal spaces, etc.) that was put in place by the post-punk scene. This culminated in the Madchester scene which, quite contrary to post-punk, had a direct influence on the atmosphere

  10. Market study: Whole blood analyzer

    Science.gov (United States)

    1977-01-01

    A market survey was conducted to develop findings relative to the commercialization potential and key market factors of the whole blood analyzer which is being developed in conjunction with NASA's Space Shuttle Medical System.

  11. CSTT Update: Fuel Quality Analyzer

    Energy Technology Data Exchange (ETDEWEB)

    Brosha, Eric L. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Lujan, Roger W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Mukundan, Rangachary [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Rockward, Tommy [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Romero, Christopher J. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Williams, Stefan [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Wilson, Mahlon S. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-06

    These are slides from a presentation. The following topics are covered: project background (scope and approach), developing the prototype (timeline), update on intellectual property, analyzer comparisons (improving humidification, stabilizing the baseline, applying clean-up strategy, impact of ionomer content and improving clean-up), proposed operating mode, considerations for testing in real-world conditions (Gen 1 analyzer electronics development, testing partner identified, field trial planning), summary, and future work.

  12. Reward Selectively Modulates the Lingering Neural Representation of Recently Attended Objects in Natural Scenes.

    Science.gov (United States)

    Hickey, Clayton; Peelen, Marius V

    2017-08-02

    Theories of reinforcement learning and approach behavior suggest that reward can increase the perceptual salience of environmental stimuli, ensuring that potential predictors of outcome are noticed in the future. However, outcome commonly follows visual processing of the environment, occurring even when potential reward cues have long disappeared. How can reward feedback retroactively cause now-absent stimuli to become attention-drawing in the future? One possibility is that reward and attention interact to prime lingering visual representations of attended stimuli that sustain through the interval separating stimulus and outcome. Here, we test this idea using multivariate pattern analysis of fMRI data collected from male and female humans. While in the scanner, participants searched for examples of target categories in briefly presented pictures of cityscapes and landscapes. Correct task performance was followed by reward feedback that could randomly have either high or low magnitude. Analysis showed that high-magnitude reward feedback boosted the lingering representation of target categories while reducing the representation of nontarget categories. The magnitude of this effect in each participant predicted the behavioral impact of reward on search performance in subsequent trials. Other analyses show that sensitivity to reward (as expressed in a personality questionnaire and in reactivity to reward feedback in the dopaminergic midbrain) predicted reward-elicited variance in lingering target and nontarget representations. Credit for rewarding outcome thus appears to be assigned to the target representation, causing the visual system to become sensitized for similar objects in the future. SIGNIFICANCE STATEMENT How do reward-predictive visual stimuli become salient and attention-drawing? In the real world, reward cues precede outcome and reward is commonly received long after potential predictors have disappeared. How can the representation of environmental stimuli

  13. Sound Scene Database in Real Acoustical Environments, Proc. First International Workshop on East-Asian Language Resource and Evaluation

    OpenAIRE

    Satoshi Nakamura; Kazuo Hiyane; Futoshi Asano; Takashi Endo

    1998-01-01

    This paper describes a sound scene database for studies such as sound source localization, sound retrieval, sound recognition and speech recognition in real acoustical environments. Many speech databases have been released for speech recognition. However, only a few databases for non-speech sound in the real sound scene exist. It is clear that common databases for acoustical signal processing and sound recognition are necessary. Two approaches are taken to build the sound scene database in ou...

  14. [Documentation of course and results of crime scene reconstruction and virtual crime scene reconstruction possibility by means of 3D laser scanning technology].

    Science.gov (United States)

    Maksymowicz, Krzysztof; Zołna, Małgorzata M; Kościuk, Jacek; Dawidowicz, Bartosz

    2010-01-01

    The objective of the study was to present both the possibilities of documenting the course and results of crime scene reconstruction using 3D laser scanning technology and the legal basis for application of this technology in evidence collection. The authors present the advantages of the aforementioned method, such as precision, objectivity, resistance of the measurement parameters to manipulation (compared to other methods), high imaging resolution, touchless data recording, nondestructive testing, etc. Moreover, through the analysis of the current legal regulations concerning image recording in criminal proceedings, the authors show that 3D laser scanning technology is fully applicable in practice to documenting the course and results of crime scene reconstruction.

  15. Scenes of fathering: The automobile as a place of occupation.

    Science.gov (United States)

    Bonsall, Aaron

    2015-01-01

    While occupations are increasingly analyzed within contexts other than the home, the ordinary places that facilitate occupations have been overlooked. The aim of this article is to explore the automobile as a place of occupation using data from an ethnographic study of fathers of children with disabilities. Qualitative data obtained through observations and interviews with the fathers and their families were analyzed using a narrative approach. Properties that influence interactions include opportunities to communicate, the vehicle itself, and electronics. Driving children in the automobile fulfills fathering responsibilities and is a time for connecting. For the fathers in this study, the automobile represents a place for negotiating complex demands of fathering. This study demonstrates not only the importance of the automobile, but also the influence of the immediate space on the construction of occupations.

  16. "Homoaffectivity" in the adoption scene: An anthropological debate

    Directory of Open Access Journals (Sweden)

    Ricardo Andrade Coitinho Filho

    2015-06-01

    This paper analyzes the legal treatment given to adoption in the courts of Rio de Janeiro when the adoption requests are made by homosexual couples. The main purpose is to understand, through the analysis of eight legal adoption files filed by homosexuals between 2000 and 2013, how conceptions of family, parenthood and sexuality are produced by the legal practitioners responsible for conducting adoption applications.

  17. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Science.gov (United States)

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
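    The two-dimensional direct linear transformation at the heart of this kind of photogrammetry maps planar world coordinates to image pixels through a 3x3 homography estimated from calibration points, with accuracy judged by reprojection error. A minimal sketch of the standard DLT estimate, assuming exact synthetic correspondences (the ground-truth homography and point layout are invented for illustration, not the paper's data):

    ```python
    import numpy as np

    def estimate_homography(world_pts, img_pts):
        """2-D DLT: solve for the 3x3 homography H mapping planar world
        coordinates to image coordinates (scale fixed by the SVD's
        unit-norm null vector)."""
        A = []
        for (X, Y), (u, v) in zip(world_pts, img_pts):
            A.append([-X, -Y, -1, 0, 0, 0, u * X, u * Y, u])
            A.append([0, 0, 0, -X, -Y, -1, v * X, v * Y, v])
        _, _, Vt = np.linalg.svd(np.asarray(A, float))
        return Vt[-1].reshape(3, 3)

    def reproject(H, world_pts):
        p = np.c_[world_pts, np.ones(len(world_pts))] @ H.T
        return p[:, :2] / p[:, 2:]

    # Hypothetical calibration points on the road plane (metres) and their
    # pixel positions generated from a known ground-truth homography.
    H_true = np.array([[120.0, 5.0, 300.0],
                       [3.0, 110.0, 200.0],
                       [0.01, 0.02, 1.0]])
    world = np.array([[0, 0], [2, 0], [2, 1.5], [0, 1.5], [1, 3]], float)
    img = reproject(H_true, world)

    H = estimate_homography(world, img)
    err = np.linalg.norm(reproject(H, world) - img, axis=1).max()
    print(f"max reprojection error: {err:.2e} px")
    ```

    The paper's improvement, as described, is to minimize this reprojection error jointly over the calibration points of all the small objects after transforming them into one composite coordinate system, rather than fitting each object independently.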

  18. Desirable and undesirable future thoughts call for different scene construction processes.

    Science.gov (United States)

    de Vito, S; Neroni, M A; Gamboz, N; Della Sala, S; Brandimonte, M A

    2015-01-01

    Despite the growing interest in the ability to foresee (episodic future thinking), it is still unclear how healthy people construct possible future scenarios. We suggest that different future thoughts require different processes of scene construction. Thirty-five participants were asked to imagine desirable and less desirable future events. Imagining desirable events increased the ease of scene construction, the frequency of life scripts, the number of internal details, and the clarity of sensory and spatio-temporal information. The initial description of general personal knowledge lasted longer in undesirable than in desirable anticipations. Finally, participants were more prone to explicitly indicate autobiographical memory as the main source of their simulations of undesirable episodes, whereas they equally related the simulations of desirable events to autobiographical events or semantic knowledge. These findings show that desirable and undesirable scenarios call for different mechanisms of scene construction. The present study emphasizes that future thinking cannot be considered as a monolithic entity.

  19. Molecular identification of blow flies recovered from human cadavers during crime scene investigations in Malaysia.

    Science.gov (United States)

    Kavitha, Rajagopal; Nazni, Wasi Ahmad; Tan, Tian Chye; Lee, Han Lim; Isa, Mohd Noor Mat; Azirun, Mohd Sofian

    2012-12-01

    Forensic entomology applies knowledge about insects associated with a decedent to crime scene investigation. It is possible to calculate a minimum postmortem interval (PMI) by determining the age and species of the oldest blow fly larvae feeding on the decedent. This study was conducted in Malaysia to identify maggot specimens collected during crime scene investigations. The usefulness of the molecular and morphological approach to species identification was evaluated in 10 morphologically identified blow fly larvae sampled from 10 different crime scenes in Malaysia. The molecular identification method involved sequencing a total length of 2.2 kilobase pairs encompassing the 'barcode' fragments of the mitochondrial cytochrome oxidase I (COI), cytochrome oxidase II (COII) and tRNA-leucine genes. Phylogenetic analyses confirmed the presence of Chrysomya megacephala, Chrysomya rufifacies and Chrysomya nigripes. In addition, one unidentified blow fly species was found based on phylogenetic tree analysis.

  20. Diffraction analysis for DMD-based scene projectors in the long-wave infrared.

    Science.gov (United States)

    Han, Qing; Zhang, Jianzhong; Wang, Jian; Sun, Qiang

    2016-10-01

    Diffraction effects play a significant role in the digital micromirror device (DMD)-based scene projectors in the long-wave infrared (IR) band (8-12 μm). The contrast provided by these projector systems can become noticeably worse because of the diffraction characteristics of the DMD. We apply a diffraction grating model of the DMD based on the scalar diffraction theory and the Fourier transform to address this issue. In addition, a simulation calculation is conducted with MATLAB. Finally, the simulation result is verified with an experiment. The simulation and experimental results indicate that, when the incident azimuth angle is 0° and the zenith angle is between 42° and 46°, the scene projectors will have a good imaging contrast in the long-wave IR. The diffraction grating model proposed in this study provides a method to improve the contrast of DMD-based scene projectors in the long-wave IR.
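    The scalar model treats the DMD as a blazed reflection grating: the mirror pitch fixes the diffraction-order angles through the grating equation, while the mirror tilt sets the specular (blaze) direction that weights the energy in each order. A rough numerical sketch under that assumption (the pitch, tilt, wavelength, and incidence angle are illustrative values, not the paper's exact parameters, and the sinc² envelope is a common scalar-theory approximation):

    ```python
    import numpy as np

    # Hypothetical parameters: 13.68 um micromirror pitch, 12 deg tilt,
    # 10 um long-wave IR wavelength, 44 deg zenith incidence.
    pitch = 13.68e-6        # grating period d (m)
    tilt = np.deg2rad(12)   # micromirror tilt angle
    lam = 10e-6             # wavelength (m)
    theta_i = np.deg2rad(44)

    # Grating equation: sin(theta_m) = sin(theta_i) - m * lam / d.
    orders = np.arange(-2, 3)
    s = np.sin(theta_i) - orders * lam / pitch
    valid = np.abs(s) <= 1          # only propagating orders survive
    theta_m = np.rad2deg(np.arcsin(s[valid]))

    # Blazed-grating envelope (scalar approximation): each order is
    # weighted by sinc^2 centred on the tilted mirror's specular direction.
    specular = np.rad2deg(theta_i - 2 * tilt)
    envelope = np.sinc(pitch / lam *
                       (np.sin(np.deg2rad(theta_m)) -
                        np.sin(np.deg2rad(specular)))) ** 2

    for m, ang, eff in zip(orders[valid], theta_m, envelope):
        print(f"order {m:+d}: {ang:6.1f} deg, relative envelope {eff:.2f}")
    ```

    Because the pitch is comparable to the wavelength in the long-wave IR, only a few orders propagate and the envelope spreads energy across them, which is the mechanism behind the contrast loss the paper analyzes.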