WorldWideScience

Sample records for ir scene generation

  1. PC Scene Generation

    Science.gov (United States)

    Buford, James A., Jr.; Cosby, David; Bunfield, Dennis H.; Mayhall, Anthony J.; Trimble, Darian E.

    2007-04-01

    AMRDEC has successfully tested hardware and software for Real-Time Scene Generation for IR and SAL Sensors on COTS PC-based hardware and video cards. AMRDEC personnel worked with nVidia and Concurrent Computer Corporation to develop a Scene Generation system capable of frame rates of at least 120 Hz while frame-locked to an external source (such as a missile seeker) with no dropped frames. Latency measurements and image validation were performed using COTS and in-house developed hardware and software. Software for the Scene Generation system was developed using OpenSceneGraph.

  2. IR characteristic simulation of city scenes based on radiosity model

    Science.gov (United States)

    Xiong, Xixian; Zhou, Fugen; Bai, Xiangzhi; Yu, Xiyu

    2013-09-01

    Reliable modeling of the thermal infrared (IR) signatures of real-world city scenes is required for signature management of civil and military platforms. Traditional modeling methods generally assume that scene objects are individual entities during the physical processes occurring in the infrared range. In reality, however, the physical scene involves convective and conductive interactions between objects as well as radiative interactions between them. A method based on a radiosity model has been developed to describe these complex effects and enable an accurate simulation of the radiance distribution of city scenes. Firstly, the physical processes affecting the IR characteristics of city scenes were described. Secondly, heat balance equations were formed by combining the atmospheric conditions, shadow maps and the geometry of the scene. Finally, a finite difference method was used to calculate the kinetic temperature of the object surfaces. A radiosity model was introduced to describe the scattering of radiation between surface elements in the scene. By synthesizing the radiance distribution of the objects in the infrared range, the IR characteristics of the scene are obtained. Real infrared images and model predictions are shown and compared. The results demonstrate that this method can realistically simulate the IR characteristics of city scenes. It effectively displays infrared shadow effects and the radiative interactions between objects in city scenes.
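
    As a rough illustration of the heat-balance-plus-radiosity idea described above, the sketch below solves the classical radiosity system (I - ρF)B = E for a handful of diffuse-grey patches. It is not the authors' implementation; the temperatures, emissivities and form factors are made-up values.

```python
# Minimal radiosity sketch for IR exchange between N diffuse surface patches.
# B = E + rho * F @ B  ->  solve (I - rho*F) B = E.
# Emissivities, temperatures and form factors below are illustrative only.
import numpy as np

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant [W m^-2 K^-4]

def self_emission(T, emissivity):
    """Blackbody-weighted self-emission of each patch [W/m^2]."""
    return emissivity * SIGMA * T**4

def solve_radiosity(T, emissivity, F):
    """Solve the radiosity system for N patches.

    T          : (N,) kinetic temperatures [K]
    emissivity : (N,) surface emissivities
    F          : (N, N) form factors, F[i, j] = fraction of radiation
                 leaving patch i that reaches patch j (rows sum to <= 1)
    """
    E = self_emission(T, emissivity)
    rho = 1.0 - emissivity            # opaque, diffuse-grey reflectance
    A = np.eye(len(T)) - rho[:, None] * F
    return np.linalg.solve(A, E)      # radiosity leaving each patch [W/m^2]

# Three patches: warm wall, cooler road, shaded wall (illustrative numbers)
T = np.array([305.0, 295.0, 290.0])
eps = np.array([0.90, 0.95, 0.90])
F = np.array([[0.0, 0.3, 0.2],
              [0.3, 0.0, 0.3],
              [0.2, 0.3, 0.0]])
print(solve_radiosity(T, eps, F))
```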

  3. Imaging infrared: Scene simulation, modeling, and real image tracking; Proceedings of the Meeting, Orlando, FL, Mar. 30, 31, 1989

    Science.gov (United States)

    Triplett, Milton J.; Wolverton, James R.; Hubert, August J.

    1989-09-01

    Various papers on scene simulation, modeling, and real image tracking using IR imaging are presented. Individual topics addressed include: tactical IR scene generator, dynamic FLIR simulation in flight training research, high-speed dynamic scene simulation in UV to IR spectra, development of an IR sensor calibration facility, IR celestial background scene description, transmission measurement of optical components at cryogenic temperatures, diffraction model for a point-source generator, silhouette-based tracking for tactical IR systems, use of knowledge in electrooptical trackers, detection and classification of target formations in IR image sequences, SMPRAD: simplified three-dimensional cloud radiance model, IR target generator, recent advances in testing of thermal imagers, generic IR system models with dynamic image generation, modeling realistic target acquisition using IR sensors in multiple-observer scenarios, and novel concept of scene generation and comprehensive dynamic sensor test.

  4. Characterization, propagation, and simulation of infrared scenes; Proceedings of the Meeting, Orlando, FL, Apr. 16-20, 1990

    Science.gov (United States)

    Watkins, Wendell R.; Zegel, Ferdinand H.; Triplett, Milton J.

    1990-09-01

    Various papers on the characterization, propagation, and simulation of IR scenes are presented. Individual topics addressed include: total radiant exitance measurements, absolute measurement of diffuse and specular reflectance using an FTIR spectrometer with an integrating sphere, fundamental limits in temperature estimation, incorporating the BRDF into an IR scene-generation system, characterizing IR dynamic response for foliage backgrounds, modeling sea surface effects in FLIR performance codes, automated imaging IR seeker performance evaluation system, generation of signature data bases with fast codes, background measurements using the NPS-IRST system. Also discussed are: naval ocean IR background analysis, camouflage simulation and effectiveness assessment for the individual soldier, discussion of IR scene generators, multiwavelength Scophony IR scene projector, LBIR target generator and calibrator for preflight seeker tests, dual-mode hardware-in-the-loop simulation facility, development of the IR blackbody source of gravity-type heat pipe and study of its characteristic.

  5. Integration of an open interface PC scene generator using COTS DVI converter hardware

    Science.gov (United States)

    Nordland, Todd; Lyles, Patrick; Schultz, Bret

    2006-05-01

    Commercial-Off-The-Shelf (COTS) personal computer (PC) hardware is increasingly capable of computing high dynamic range (HDR) scenes for military sensor testing at high frame rates. New electro-optical and infrared (EO/IR) scene projectors feature electrical interfaces that can accept the DVI output of these PC systems. However, military Hardware-in-the-loop (HWIL) facilities such as those at the US Army Aviation and Missile Research Development and Engineering Center (AMRDEC) utilize a sizeable inventory of existing projection systems that were designed to use the Silicon Graphics Incorporated (SGI) digital video port (DVP, also known as DVP2 or DD02) interface. To mate the new DVI-based scene generation systems to these legacy projection systems, CG2 Inc., a Quantum3D Company (CG2), has developed a DVI-to-DVP converter called Delta DVP. This device takes progressive scan DVI input, converts it to digital parallel data, and combines and routes color components to derive a 16-bit wide luminance channel replicated on a DVP output interface. The HWIL Functional Area of AMRDEC has developed a suite of modular software to perform deterministic real-time, wave band-specific rendering of sensor scenes, leveraging the features of commodity graphics hardware and open source software. Together, these technologies enable sensor simulation and test facilities to integrate scene generation and projection components with diverse pedigrees.
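
    The abstract states that the Delta DVP converter combines DVI colour components into a 16-bit luminance channel but does not specify the bit routing. The sketch below shows one plausible packing (high byte in red, low byte in green); the byte assignment is an assumption, not CG2's documented scheme.

```python
# Sketch of deriving a 16-bit luminance word from 8-bit DVI colour components.
# The actual Delta DVP routing is not given in the abstract; here we *assume*
# the scene generator places the high byte in the red channel and the low byte
# in the green channel of the DVI frame.
import numpy as np

def pack_luminance16(lum16):
    """Split a 16-bit luminance image into (R, G, B) 8-bit DVI planes."""
    lum16 = lum16.astype(np.uint16)
    r = (lum16 >> 8).astype(np.uint8)    # assumed high byte
    g = (lum16 & 0xFF).astype(np.uint8)  # assumed low byte
    b = np.zeros_like(r)                 # unused
    return r, g, b

def unpack_luminance16(r, g, b):
    """Recombine DVI colour planes into the 16-bit luminance channel."""
    return (r.astype(np.uint16) << 8) | g.astype(np.uint16)

img = np.random.randint(0, 2**16, size=(4, 4), dtype=np.uint16)
assert np.array_equal(unpack_luminance16(*pack_luminance16(img)), img)
```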

  6. Dynamic Frames Based Generation of 3D Scenes and Applications

    Directory of Open Access Journals (Sweden)

    Danijel Radošević

    2015-05-01

    Full Text Available Modern graphics/programming tools like Unity enable the creation of 3D scenes as well as 3D scene-based applications, including a full physics model, motion, sounds, lighting effects, etc. This paper deals with the use of a dynamic-frames-based generator for the automatic generation of a 3D scene and the related source code. The suggested model makes it possible to specify the features of the 3D scene in the form of a textual specification, as well as to export such features from a 3D tool. This approach enables a higher level of code-generation flexibility and the reuse of the main code and scene artifacts in the form of textual templates. An example of a generated application is presented and discussed.
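
    As a toy illustration of template-based generation of scene code from a textual specification, the sketch below expands a two-line scene description into pseudo-Unity construction calls. The specification syntax and the generated call names are invented for illustration; the paper's dynamic-frames generator is not reproduced.

```python
# Toy illustration of template-based generation of scene code from a textual
# specification. The specification format and the generated pseudo-Unity calls
# are invented for illustration only.
from string import Template

spec = """
cube  name=Crate1  pos=0,0,0   scale=1,1,1
light name=Sun     pos=0,10,0  scale=1,1,1
"""

OBJECT_TEMPLATE = Template(
    'var $name = CreatePrimitive("$kind", '
    'new Vector3($pos), new Vector3($scale));'
)

def generate_scene_code(spec_text):
    lines = []
    for raw in spec_text.strip().splitlines():
        kind, *fields = raw.split()
        attrs = dict(f.split("=", 1) for f in fields)
        lines.append(OBJECT_TEMPLATE.substitute(kind=kind, **attrs))
    return "\n".join(lines)

print(generate_scene_code(spec))
```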

  7. Graphics processing unit (GPU) real-time infrared scene generation

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.

    2007-04-01

    VIRSuite, the GPU-based suite of software tools developed at DSTO for real-time infrared scene generation, is described. The tools include the painting of scene objects with radiometrically-associated colours, translucent object generation, polar plot validation and versatile scene generation. Special features include radiometric scaling within the GPU and the presence of zoom anti-aliasing at the core of VIRSuite. Extension of the zoom anti-aliasing construct to cover target embedding and the treatment of translucent objects is described.

  8. SAR Raw Data Generation for Complex Airport Scenes

    Directory of Open Access Journals (Sweden)

    Jia Li

    2014-10-01

    Full Text Available The generation of SAR raw data for complex airport scenes is studied in this paper. A formulation of the SAR raw signal model of airport scenes is given. By generating the echoes from the background, aircraft and buildings separately, the SAR raw data for a unified SAR imaging geometry are obtained from their vector addition. The multipath scattering and shadowing among the background, the different ground covers, standing airplanes and buildings are analyzed. Based on these scattering characteristics, coupling scattering models and SAR raw data models for the different targets are given. A procedure is given to generate the SAR raw data of airport scenes. The SAR images formed from the simulated raw data demonstrate the validity of the proposed method.
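
    A bare-bones version of the raw-data formation described above is sketched below: the raw signal is accumulated as the coherent sum of delayed, chirped, Doppler-modulated echoes from a few point scatterers standing in for background, aircraft and building returns. All radar parameters are illustrative, and the coupling and multipath models of the paper are omitted.

```python
# Bare-bones SAR raw-data sketch: coherent sum of point-scatterer echoes.
# All radar parameters are illustrative, not taken from the paper.
import numpy as np

c      = 3e8
fc     = 9.6e9                 # carrier frequency [Hz]
Kr     = 1e12                  # chirp rate [Hz/s]
Tp     = 5e-6                  # pulse length [s]
PRF    = 1000.0                # pulse repetition frequency [Hz]
v      = 150.0                 # platform velocity [m/s]
R0     = 10e3                  # closest-approach range of scene centre [m]
lam    = c / fc

fast_t = np.arange(-Tp, Tp, 1 / (2 * Kr * Tp))    # fast time [s]
slow_t = np.arange(-0.5, 0.5, 1 / PRF)            # slow time [s]

# (along-track position, closest range, reflectivity) of a few scatterers
targets = [(0.0, R0, 1.0), (20.0, R0 + 30.0, 0.6), (-15.0, R0 + 10.0, 0.8)]

raw = np.zeros((slow_t.size, fast_t.size), dtype=complex)
for x0, r0, sigma in targets:
    R = np.sqrt(r0**2 + (v * slow_t - x0)**2)     # instantaneous slant range
    tau = 2 * R / c                               # two-way delay per pulse
    t = fast_t[None, :] - tau[:, None]
    envelope = (np.abs(t) < Tp / 2)               # rectangular pulse window
    raw += sigma * envelope * np.exp(1j * np.pi * Kr * t**2) \
                 * np.exp(-1j * 4 * np.pi * R[:, None] / lam)

print(raw.shape)   # (azimuth pulses, range samples)
```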

  9. CSIR optronic scene simulator finds real application in self-protection mechanisms of the South African Air Force

    CSIR Research Space (South Africa)

    Willers, CJ

    2010-09-01

    Full Text Available The Optronic Scene Simulator (OSSIM) is a second-generation scene simulator that creates synthetic images of arbitrary complex scenes in the visual and infrared (IR) bands, covering the 0.2 to 20 μm spectral region. These images are radiometrically...

  10. Thermal-to-visible transducer (TVT) for thermal-IR imaging

    Science.gov (United States)

    Flusberg, Allen; Swartz, Stephen; Huff, Michael; Gross, Steven

    2008-04-01

    We have been developing a novel thermal-to-visible transducer (TVT), an uncooled thermal-IR imager that is based on a Fabry-Perot Interferometer (FPI). The FPI-based IR imager can convert a thermal-IR image to a video electronic image. IR radiation that is emitted by an object in the scene is imaged onto an IR-absorbing material that is located within an FPI. Temperature variations generated by the spatial variations in the IR image intensity cause variations in optical thickness, modulating the reflectivity seen by a probe laser beam. The reflected probe is imaged onto a visible array, producing a visible image of the IR scene. This technology can provide low-cost IR cameras with excellent sensitivity, low power consumption, and the potential for self-registered fusion of thermal-IR and visible images. We will describe characteristics of requisite pixelated arrays that we have fabricated.
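
    A minimal sketch of the sensing principle, assuming an idealised lossless cavity: the Airy reflectance seen by the probe shifts as absorbed IR warms the pixel and changes the optical thickness. The cavity parameters and thermo-optic coefficient below are illustrative, not the device's actual design values.

```python
# Idealised Fabry-Perot reflectance (lossless Airy formula) and the change in
# probe reflectivity caused by a small thermally induced change in optical
# thickness. Cavity parameters and thermo-optic coefficient are illustrative.
import numpy as np

def fp_reflectance(wavelength, n, L, R_mirror):
    """Airy reflectance of a lossless two-mirror cavity at normal incidence."""
    delta = 4.0 * np.pi * n * L / wavelength       # round-trip phase
    F = 4.0 * R_mirror / (1.0 - R_mirror)**2       # coefficient of finesse
    return 1.0 - 1.0 / (1.0 + F * np.sin(delta / 2.0)**2)

lam_probe = 850e-9        # visible/NIR probe wavelength [m]
n0, L0    = 1.5, 2e-6     # cavity index and thickness [m]
dn_dT     = 1e-4          # assumed thermo-optic coefficient [1/K]
R_mirror  = 0.9

for dT in (0.0, 0.1, 0.5):   # pixel temperature rise from absorbed IR [K]
    n = n0 + dn_dT * dT
    print(f"dT = {dT:4.1f} K  ->  probe reflectance = "
          f"{fp_reflectance(lam_probe, n, L0, R_mirror):.4f}")
```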

  11. Numerical method for IR background and clutter simulation

    Science.gov (United States)

    Quaranta, Carlo; Daniele, Gina; Balzarotti, Giorgio

    1997-06-01

    The paper describes a fast and accurate algorithm for IR background noise and clutter generation for use in scene simulations. The process is based on the hypothesis that the background can be modeled as a statistical process in which the signal amplitude follows a Gaussian distribution and zones of the same scene obey a correlation function of exponential form. The algorithm provides an accurate mathematical approximation of the model as well as excellent fidelity to reality, as shown by comparison with images from IR sensors. The proposed method has advantages over methods based on filtering white noise in the time or frequency domain, as it requires a limited amount of computation, and it is more accurate than quasi-random processes. The background generation starts from a reticule of a few points and, by means of growing rules, the process is extended to the whole scene at the required dimensions and resolution. The statistical properties of the model are properly maintained in the simulation process. The paper gives specific attention to the mathematical aspects of the algorithm and provides a number of simulations and comparisons with real scenes.
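
    The growing-reticule algorithm itself is not reproduced here; the sketch below instead shows the standard spectral-synthesis alternative the paper compares against, generating a Gaussian field whose autocorrelation approximates the exponential form on a periodic grid. Grid size, correlation length and radiometric scaling are illustrative.

```python
# Sketch of generating Gaussian clutter with an (approximately) exponential
# autocorrelation by spectral synthesis, i.e. filtering white noise in the
# frequency domain. This is the classical alternative, not the paper's
# growing-reticule algorithm; all numbers are illustrative.
import numpy as np

def exponential_clutter(n, corr_len, mean=0.0, std=1.0, rng=None):
    rng = np.random.default_rng(rng)
    # Desired autocorrelation exp(-r / corr_len) sampled on a periodic grid
    x = np.minimum(np.arange(n), n - np.arange(n))
    r = np.hypot(x[:, None], x[None, :])
    acf = np.exp(-r / corr_len)
    # Power spectral density = FFT of the autocorrelation (clip tiny negatives)
    psd = np.maximum(np.fft.fft2(acf).real, 0.0)
    noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
    field = np.fft.ifft2(np.sqrt(psd) * np.fft.fft2(noise) / n).real
    field /= field.std()                 # normalise, then apply mean and std
    return mean + std * field

clutter = exponential_clutter(256, corr_len=12.0, mean=300.0, std=2.0, rng=0)
print(clutter.mean(), clutter.std())
```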

  12. Semi-automatic scene generation using the Digital Anatomist Foundational Model.

    Science.gov (United States)

    Wong, B A; Rosse, C; Brinkley, J F

    1999-01-01

    A recent survey shows that a major impediment to more widespread use of computers in anatomy education is the inability to directly manipulate 3-D models, and to relate these to corresponding textual information. In the University of Washington Digital Anatomist Project we have developed a prototype Web-based scene generation program that combines the symbolic Foundational Model of Anatomy with 3-D models. A Web user can browse the Foundational Model (FM), then click to request that a 3-D scene be created of an object and its parts or branches. The scene is rendered by a graphics server, and a snapshot is sent to the Web client. The user can then manipulate the scene, adding new structures, deleting structures, rotating the scene, zooming, and saving the scene as a VRML file. Applications such as this, when fully realized with fast rendering and more anatomical content, have the potential to significantly change the way computers are used in anatomy education.

  13. Real-time scene and signature generation for ladar and imaging sensors

    Science.gov (United States)

    Swierkowski, Leszek; Christie, Chad L.; Antanovskii, Leonid; Gouthas, Efthimios

    2014-05-01

    This paper describes development of two key functionalities within the VIRSuite scene simulation program, broadening its scene generation capabilities and increasing accuracy of thermal signatures. Firstly, a new LADAR scene generation module has been designed. It is capable of simulating range imagery for Geiger mode LADAR, in addition to the already existing functionality for linear mode systems. Furthermore, a new 3D heat diffusion solver has been developed within the VIRSuite signature prediction module. It is capable of calculating the temperature distribution in complex three-dimensional objects for enhanced dynamic prediction of thermal signatures. With these enhancements, VIRSuite is now a robust tool for conducting dynamic simulation for missiles with multi-mode seekers.
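
    A toy sketch of the Geiger-mode behaviour mentioned above: per pixel, a single-pulse detection is a random event with probability 1 - exp(-expected photoelectrons), and detected returns carry timing jitter while missed ones appear as dropouts. Signal levels, jitter and the synthetic range map are assumptions for illustration.

```python
# Toy Geiger-mode LADAR sketch: each pixel either fires (probability
# 1 - exp(-expected photoelectrons)) and records a jittered range, or drops
# out. Signal levels, jitter and the synthetic range map are illustrative.
import numpy as np

rng = np.random.default_rng(1)

def geiger_range_image(true_range, n_pe_at_ref, ref_range, jitter_m=0.05):
    """Simulate one Geiger-mode frame from a true range map [m]."""
    # Expected primary photoelectrons per pixel, with 1/R^2 signal fall-off
    n_pe = n_pe_at_ref * (ref_range / true_range) ** 2
    p_fire = 1.0 - np.exp(-n_pe)                  # single-pulse detection prob.
    fired = rng.random(true_range.shape) < p_fire
    measured = true_range + jitter_m * rng.standard_normal(true_range.shape)
    measured[~fired] = np.nan                     # dropouts (no avalanche)
    return measured

# Synthetic scene: flat background at 1200 m with a 3 m-deep "target" box
truth = np.full((64, 64), 1200.0)
truth[24:40, 24:40] -= 3.0
frame = geiger_range_image(truth, n_pe_at_ref=2.0, ref_range=1200.0)
print("dropout fraction:", np.mean(np.isnan(frame)))
```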

  14. Exploiting current-generation graphics hardware for synthetic-scene generation

    Science.gov (United States)

    Tanner, Michael A.; Keen, Wayne A.

    2010-04-01

    Increasing seeker frame rate and pixel count, as well as the demand for higher levels of scene fidelity, have driven scene generation software for hardware-in-the-loop (HWIL) and software-in-the-loop (SWIL) testing to higher levels of parallelization. Because modern PC graphics cards provide multiple computational cores (240 shader cores for current NVIDIA GeForce and Quadro cards), implementation of phenomenology codes on graphics processing units (GPUs) offers significant potential for simultaneous enhancement of simulation frame rate and fidelity. Taking advantage of this potential requires algorithm implementations structured to minimize data transfers between the central processing unit (CPU) and the GPU. In this paper, preliminary methodologies developed at the Kinetic Hardware In-The-Loop Simulator (KHILS) will be presented. The paper includes language tradeoffs between conventional shader programming, the Compute Unified Device Architecture (CUDA) and the Open Computing Language (OpenCL), covering performance trades and possible pathways for future tool development.

  15. Impact of sensor-scene interaction on the design of an IR security surveillance system

    International Nuclear Information System (INIS)

    Claassen, J.P.; Phipps, G.S.

    1982-01-01

    Recent encouraging developments in infrared staring arrays with CCD readouts and in real-time image processors working on and off the focal plane have suggested that technologies suitable for infrared security surveillance may be available in a two-to-five year time frame. In anticipation of these emerging technologies, an investigation has been undertaken to establish the design potential of a passive IR perimeter security system incorporating both detection and verification capabilities. To establish the design potential, it is necessary to characterize the interactions between the scene and the sensor. To this end, theoretical and experimental findings were employed to document (1) the emission properties of scenes that include an intruder, (2) the propagation and emission characteristics of the intervening atmosphere, and (3) the reception properties of the imaging sensor. The impact of these findings is summarized in the light of the application constraints. Optimal wavelengths, intruder and background emission characteristics, weather limitations, and basic sensor design considerations are treated. Although many system design features have been identified to date, continued efforts are required to complete a detailed system design, including the identification of processing requirements. A program to accomplish these objectives is presented.

  16. Three-dimensional scene encryption and display based on computer-generated holograms.

    Science.gov (United States)

    Kong, Dezhao; Cao, Liangcai; Jin, Guofan; Javidi, Bahram

    2016-10-10

    An optical encryption and display method for a three-dimensional (3D) scene is proposed based on computer-generated holograms (CGHs) using a single phase-only spatial light modulator. The 3D scene is encoded as one complex Fourier CGH. The Fourier CGH is then decomposed into two phase-only CGHs with random distributions by the vector stochastic decomposition algorithm. The two CGHs are interleaved as one final phase-only CGH for optical encryption and reconstruction. The proposed method can support high-level nonlinear optical 3D scene security and complex amplitude modulation of the optical field. The exclusive phase key offers strong resistance to decryption attacks. Experimental results demonstrate the validity of the novel method.
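
    The vector stochastic decomposition itself is not reproduced here, but the identity it builds on is simple: any complex sample of modulus at most 2 equals the sum of two unit-magnitude phasors. The deterministic sketch below applies that identity to split a normalised complex CGH into two phase-only CGHs.

```python
# Sketch of splitting a complex Fourier hologram into two phase-only holograms.
# Any complex sample c with |c| <= 2 can be written as exp(i*t1) + exp(i*t2)
# with t1,2 = arg(c) +/- arccos(|c|/2). The paper's stochastic variant (random
# choice among equivalent decompositions) is omitted here.
import numpy as np

def split_into_two_phase_cghs(cgh):
    c = cgh / (np.abs(cgh).max() / 2.0)        # normalise so |c| <= 2
    phi = np.angle(c)
    dphi = np.arccos(np.clip(np.abs(c) / 2.0, -1.0, 1.0))
    return phi + dphi, phi - dphi              # two phase-only CGHs

rng = np.random.default_rng(0)
complex_cgh = rng.standard_normal((128, 128)) + 1j * rng.standard_normal((128, 128))
theta1, theta2 = split_into_two_phase_cghs(complex_cgh)

# Verify the recombination reproduces the (normalised) complex hologram
recon = np.exp(1j * theta1) + np.exp(1j * theta2)
target = complex_cgh / (np.abs(complex_cgh).max() / 2.0)
print(np.allclose(recon, target))
```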

  17. Third-generation intelligent IR focal plane arrays

    Science.gov (United States)

    Caulfield, H. John; Jack, Michael D.; Pettijohn, Kevin L.; Schlesselmann, John D.; Norworth, Joe

    1998-03-01

    SBRC is at the forefront of industry in developing IR focal plane arrays, including multi-spectral technology and '3rd generation' functions that mimic the human eye. 3rd generation devices conduct advanced processing on or near the FPA that serves to reduce bandwidth while performing needed functions such as automatic target recognition, uniformity correction and dynamic range enhancement. These devices represent a solution for processing the exorbitantly high bandwidth coming off large-area FPAs without sacrificing system sensitivity. SBRC's two-color approach leverages the company's HgCdTe technology to provide simultaneous multiband coverage, from short- through long-wave IR, with near-theoretical performance. IR systems that are sensitive to different spectral bands achieve enhanced capabilities for target identification and advanced discrimination. This paper provides a summary of the issues, the technology and the benefits of SBRC's third-generation smart and two-color FPAs.

  18. Review of infrared scene projector technology-1993

    Science.gov (United States)

    Driggers, Ronald G.; Barnard, Kenneth J.; Burroughs, E. E.; Deep, Raymond G.; Williams, Owen M.

    1994-07-01

    The importance of testing IR imagers and missile seekers with realistic IR scenes warrants a review of the current technologies used in dynamic infrared scene projection. These technologies include resistive arrays, deformable mirror arrays, mirror membrane devices, liquid crystal light valves, laser writers, laser diode arrays, and CRTs. Other methods include frustrated total internal reflection, thermoelectric devices, galvanic cells, Bly cells, and vanadium dioxide. A description of each technology is presented along with a discussion of their relative benefits and disadvantages. The current state of each methodology is also summarized. Finally, the methods are compared and contrasted in terms of their performance parameters.

  19. Octave-Spanning Mid-IR Supercontinuum Generation with Ultrafast Cascaded Nonlinearities

    DEFF Research Database (Denmark)

    Zhou, Binbin; Guo, Hairun; Liu, Xing

    2014-01-01

    An octave-spanning mid-IR supercontinuum is observed experimentally using ultrafast cascaded nonlinearities in an LiInS2 quadratic nonlinear crystal pumped with 70 fs energetic mid-IR pulses and cut for strongly phase-mismatched second-harmonic generation.

  20. Infrared radiation scene generation of stars and planets in celestial background

    Science.gov (United States)

    Guo, Feng; Hong, Yaohui; Xu, Xiaojian

    2014-10-01

    An infrared (IR) radiation generation model of stars and planets in a celestial background is proposed in this paper. Cohen's spectral template [1] is modified for high spectral resolution and accuracy. Based on the improved spectral template for stars and the blackbody assumption for planets, an IR radiation model is developed which is able to generate the celestial IR background for stars and planets appearing in the sensor's field of view (FOV) for a specified observing date and time, location, viewpoint and spectral band over 1.2-35 μm. In the current model, the initial locations of stars are calculated based on the midcourse space experiment (MSX) IR astronomical catalogue (MSX-IRAC) [2], while the initial locations of planets are calculated using the theory of secular variations of the planetary orbits (VSOP). Simulation results show that the new IR radiation model has higher resolution and accuracy than common models.
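
    A minimal sketch of the blackbody assumption used for planets: integrate the Planck spectral radiance over the sensor band to obtain in-band radiance. The temperatures and the 8-12 μm band below are illustrative, not values from the paper.

```python
# Sketch of the blackbody assumption for planets: integrate Planck spectral
# radiance over a sensor band to get in-band radiance. Temperatures and the
# band are illustrative values only.
import numpy as np

H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, T):
    """Spectral radiance [W m^-2 sr^-1 m^-1] of a blackbody at T [K]."""
    a = 2.0 * H * C**2 / wavelength_m**5
    return a / np.expm1(H * C / (wavelength_m * KB * T))

def band_radiance(lam_min_um, lam_max_um, T, n=2000):
    """Trapezoidal integration of the Planck curve over the band [W m^-2 sr^-1]."""
    lam = np.linspace(lam_min_um, lam_max_um, n) * 1e-6
    vals = planck_radiance(lam, T)
    return float(np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(lam)))

# In-band radiance of two "planets" in an 8-12 um band (illustrative values)
for name, T in [("Mars-like", 210.0), ("Venus-like cloud tops", 230.0)]:
    print(name, band_radiance(8.0, 12.0, T))
```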

  1. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    NARCIS (Netherlands)

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written

  2. Generation of Variations on Theme Music Based on Impressions of Story Scenes Considering Human's Feeling of Music and Stories

    Directory of Open Access Journals (Sweden)

    Kenkichi Ishizuka

    2008-01-01

    Full Text Available This paper describes a system which generates variations on theme music fitted to story scenes represented by texts and/or pictures. The inputs to the present system are the original theme music and numerical information on the given story scenes. The system varies the melodies, tempos, tones, tonalities, and accompaniments of the given theme music based on impressions of the story scenes. Genetic algorithms (GAs) using modular neural network (MNN) models as fitness functions are applied to the music generation in order to reflect the user's feelings about music and stories. The system adjusts the MNN models for each user online. This paper also describes evaluation experiments that confirm whether the generated variations on theme music reflect the impressions of the story scenes appropriately or not.

  3. Carrier-free 194Ir from an 194Os/194Ir generator - a new candidate for radioimmunotherapy

    International Nuclear Information System (INIS)

    Mirzadeh, S.; Rice, D.E.; Knapp, F.F. Jr.

    1992-01-01

    Iridium-194 (t1/2 = 19.15 h) decays by beta-particle emission (Emax = 2.236 MeV) and is a potential candidate for radioimmunotherapy. An important characteristic is the availability of 194Ir from the decay of reactor-produced 194Os (t1/2 = 6 y). We report the fabrication of the first 194Os/194Ir generator system using activated carbon. In addition, a novel gas thermochromatographic method was developed for the one-step conversion of metallic Os to OsO4 and the subsequent separation and purification of OsO4. In this manner, the reactor-irradiated enriched 192Os target was converted to 194OsO4, which was then converted to K2OsCl6 for generator loading. The yield and elution profile of carrier-free 194Ir, and the 194Os breakthrough, were determined for a prototype generator which was evaluated over a 10-month period. (author)
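
    Using the half-lives quoted in the abstract (194Os: 6 y; 194Ir: 19.15 h), the standard parent-daughter relation sketched below shows how quickly the 194Ir activity regrows on the column after an elution. The code is a generic Bateman-equation illustration, not the authors' generator model.

```python
# Parent-daughter activity sketch for the 194Os/194Ir generator, using the
# half-lives quoted in the abstract (194Os: 6 y, 194Ir: 19.15 h). Because the
# parent is far longer-lived than the daughter, the 194Ir activity regrows to
# near equilibrium within a few daughter half-lives after each elution.
import numpy as np

LN2 = np.log(2.0)
lam_parent = LN2 / (6.0 * 365.25 * 24.0)   # 194Os decay constant [1/h]
lam_daughter = LN2 / 19.15                 # 194Ir decay constant [1/h]

def daughter_activity(t_h, parent_activity0=1.0):
    """194Ir activity (same units as parent activity) t_h hours after elution,
    assuming the column is stripped of daughter at t = 0 (Bateman equation)."""
    return (parent_activity0 * lam_daughter / (lam_daughter - lam_parent)
            * (np.exp(-lam_parent * t_h) - np.exp(-lam_daughter * t_h)))

for t in (6, 12, 19.15, 24, 48):
    print(f"{t:6.2f} h after elution: {daughter_activity(t):.3f} "
          "of parent activity")
```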

  4. Validation of the thermal code of RadTherm-IR, IR-Workbench, and F-TOM

    Science.gov (United States)

    Schwenger, Frédéric; Grossmann, Peter; Malaplate, Alain

    2009-05-01

    System assessment by image simulation requires synthetic scenarios that can be viewed by the device to be simulated. In addition to physical modeling of the camera, a reliable modeling of scene elements is necessary. Software products for modeling of target data in the IR should be capable of (i) predicting surface temperatures of scene elements over a long period of time and (ii) computing sensor views of the scenario. For such applications, FGAN-FOM acquired the software products RadTherm-IR (ThermoAnalytics Inc., Calumet, USA) and IR-Workbench (OKTAL-SE, Toulouse, France). Inspection of the accuracy of simulation results by validation is necessary before using these products for applications. In the first step of validation, the performance of both "thermal solvers" was determined through comparison of the computed diurnal surface temperatures of a simple object with the corresponding values from measurements. CUBI is a rather simple geometric object with well-known material parameters, which makes it suitable for testing and validating object models in the IR. It was used in this study as a test body. Comparisons of calculated and measured surface temperature values will be presented, together with the results from the FGAN-FOM thermal object code F-TOM. In the second validation step, radiances of the simulated sensor views computed by RadTherm-IR and IR-Workbench will be compared with radiances retrieved from the recorded sensor images taken by the sensor that was simulated. Strengths and weaknesses of the models RadTherm-IR, IR-Workbench and F-TOM will be discussed.

  5. CarSim: Automatic 3D Scene Generation of a Car Accident Description

    OpenAIRE

    Egges, A.; Nijholt, A.; Nugues, P.

    2001-01-01

    The problem of generating a 3D simulation of a car accident from a written description can be divided into two subtasks: the linguistic analysis and the virtual scene generation. As a means of communication between these two system parts, we designed a template formalism to represent a written accident report. The CarSim system processes formal descriptions of accidents and creates corresponding 3D simulations. A planning component models the trajectories and temporal values of every vehicle ...

  6. Generation of binary holograms for deep scenes captured with a camera and a depth sensor

    Science.gov (United States)

    Leportier, Thibault; Park, Min-Chul

    2017-01-01

    This work presents binary hologram generation from images of a real object acquired from a Kinect sensor. Since hologram calculation from a point-cloud or polygon model presents a heavy computational burden, we adopted a depth-layer approach to generate the holograms. This method enables us to obtain holographic data of large scenes quickly. Our investigations focus on the performance of different methods, iterative and noniterative, to convert complex holograms into binary format. Comparisons were performed to examine the reconstruction of the binary holograms at different depths. We also propose to modify the direct binary search algorithm to take into account several reference image planes. Then, deep scenes featuring multiple planes of interest can be reconstructed with better efficiency.

  7. Dynamic thermal signature prediction for real-time scene generation

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios (Themie); Williams, Owen M.; Swierkowski, Leszek

    2013-05-01

    At DSTO, a real-time scene generation framework, VIRSuite, has been developed in recent years, within which trials data are predominantly used for modelling the radiometric properties of the simulated objects. Since in many cases the data are insufficient, a physics-based simulator capable of predicting the infrared signatures of objects and their backgrounds has been developed as a new VIRSuite module. It includes transient heat conduction within the materials, and boundary conditions that take into account the heat fluxes due to solar radiation, wind convection and radiative transfer. In this paper, an overview is presented, covering both the steady-state and transient performance.
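
    A minimal sketch of the kind of energy balance described above: one-dimensional transient conduction solved by an explicit finite-difference scheme, with a front-face flux composed of absorbed solar load, wind convection and radiative exchange with the sky. Material properties and forcing are illustrative, not VIRSuite values.

```python
# 1-D explicit finite-difference sketch of transient heat conduction in a slab
# with a boundary flux built from absorbed solar load, wind convection and
# radiative exchange with the sky. All material properties and forcing terms
# are illustrative values.
import numpy as np

SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant

# Slab and material (roughly concrete-like, illustrative)
L, nx = 0.10, 21
k, rho, cp = 1.4, 2300.0, 880.0
alpha = k / (rho * cp)
dx = L / (nx - 1)
dt = 0.4 * dx**2 / alpha          # explicit stability: dt <= dx^2 / (2*alpha)

eps, absorptivity, h_conv = 0.9, 0.7, 10.0
T = np.full(nx, 290.0)            # initial temperature [K]

def step(T, t_seconds):
    hour = (t_seconds / 3600.0) % 24.0
    solar = 800.0 * max(0.0, np.sin(np.pi * (hour - 6.0) / 12.0))  # W/m^2
    T_air, T_sky = 288.0, 270.0
    q = (absorptivity * solar
         + h_conv * (T_air - T[0])
         - eps * SIGMA * (T[0]**4 - T_sky**4))    # net flux into front face
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    Tn[0] = T[0] + dt / (rho * cp * dx) * (q + k * (T[1] - T[0]) / dx)
    Tn[-1] = Tn[-2]                               # insulated back face
    return Tn

t = 0.0
while t < 24 * 3600:
    T = step(T, t)
    t += dt
print("front-face temperature after 24 h:", round(T[0], 2), "K")
```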

  8. Qualitative spatial logic descriptors from 3D indoor scenes to generate explanations in natural language.

    Science.gov (United States)

    Falomir, Zoe; Kluth, Thomas

    2018-05-01

    The challenge of describing 3D real scenes is tackled in this paper using qualitative spatial descriptors. A key point to study is which qualitative descriptors to use and how these qualitative descriptors must be organized to produce a suitable cognitive explanation. In order to find answers, a survey test was carried out with human participants who openly described a scene containing some pieces of furniture. The data obtained in this survey are analysed and, taking them into account, the QSn3D computational approach was developed, which uses an Xbox 360 Kinect to obtain 3D data from a real indoor scene. Object features are computed on these 3D data to identify objects in indoor scenes. The object orientation is computed, and qualitative spatial relations between the objects are extracted. These qualitative spatial relations are the input to a grammar which applies saliency rules obtained from the survey study and generates cognitive natural language descriptions of scenes. Moreover, these qualitative descriptors can be expressed as first-order logical facts in Prolog for further reasoning. Finally, a validation study is carried out to test whether the descriptions provided by the QSn3D approach are human readable. The obtained results show that their acceptability is higher than 82%.
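
    As an illustration of how object geometry can be turned into first-order facts for Prolog, the sketch below derives a few viewer-centred relations from object centroids and prints them as facts. The thresholds and relation set are simplifying assumptions, not the qualitative calculus used by QSn3D.

```python
# Sketch of turning object centroids from a 3-D indoor scene into qualitative
# spatial relations emitted as Prolog facts. The relation thresholds and the
# viewer-centred frame are simplifying assumptions.
# Axes (viewer-centred): x = right, y = up, z = away from the camera.
objects = {
    "table": (0.0, 0.4, 2.0),
    "chair": (-0.6, 0.4, 2.1),
    "lamp":  (0.1, 1.5, 2.4),
}

def qualitative_relations(objs, min_sep=0.2):
    facts = []
    names = list(objs)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            ax, ay, az = objs[a]
            bx, by, bz = objs[b]
            if bx - ax > min_sep:
                facts.append(f"right_of({b}, {a}).")
            elif ax - bx > min_sep:
                facts.append(f"left_of({b}, {a}).")
            if by - ay > min_sep:
                facts.append(f"above({b}, {a}).")
            if bz - az > min_sep:
                facts.append(f"behind({b}, {a}).")
    return facts

print("\n".join(qualitative_relations(objects)))
```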

  9. Emotional Scene Content Drives the Saccade Generation System Reflexively

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2009-01-01

    The authors assessed whether parafoveal perception of emotional content influences saccade programming. In Experiment 1, paired emotional and neutral scenes were presented to parafoveal vision. Participants performed voluntary saccades toward either of the scenes according to an imperative signal (color cue). Saccadic reaction times were faster…

  10. Advanced radiometric and interferometric millimeter-wave scene simulations

    Science.gov (United States)

    Hauss, B. I.; Moffa, P. J.; Steele, W. G.; Agravante, H.; Davidheiser, R.; Samec, T.; Young, S. K.

    1993-01-01

    Smart munitions and weapons utilize various imaging sensors (including passive IR, active and passive millimeter-wave, and visible wavebands) to detect/identify targets at short standoff ranges and in varied terrain backgrounds. In order to design and evaluate these sensors under a variety of conditions, a high-fidelity scene simulation capability is necessary. Such a capability for passive millimeter-wave scene simulation exists at TRW. TRW's Advanced Radiometric Millimeter-Wave Scene Simulation (ARMSS) code is a rigorous, benchmarked, end-to-end passive millimeter-wave scene simulation code for interpreting millimeter-wave data, establishing scene signatures and evaluating sensor performance. In passive millimeter-wave imaging, resolution is limited due to wavelength and aperture size. Where high resolution is required, the utility of passive millimeter-wave imaging is confined to short ranges. Recent developments in interferometry have made possible high resolution applications on military platforms. Interferometry or synthetic aperture radiometry allows the creation of a high resolution image with a sparsely filled aperture. Borrowing from research work in radio astronomy, we have developed and tested at TRW scene reconstruction algorithms that allow the recovery of the scene from a relatively small number of spatial frequency components. In this paper, the TRW modeling capability is described and numerical results are presented.

  11. Utilising E-on Vue and Unity 3D scenes to generate synthetic images and videos for visible signature analysis

    Science.gov (United States)

    Madden, Christopher S.; Richards, Noel J.; Culpepper, Joanne B.

    2016-10-01

    This paper investigates the ability to develop synthetic scenes in an image generation tool, E-on Vue, and a gaming engine, Unity 3D, which can be used to generate synthetic imagery of target objects across a variety of conditions in land environments. Developments within these tools and gaming engines have allowed the computer gaming industry to dramatically enhance the realism of the games they develop; however, they utilise shortcuts to ensure that the games run smoothly in real time to create an immersive effect. Whilst these shortcuts may have an impact upon the realism of the synthetic imagery, they promise a much more time-efficient method of developing imagery for different environmental conditions and of investigating the dynamic aspects of military operations that are currently not evaluated in signature analysis. The results presented investigate how some of the common image metrics used in target acquisition modelling, namely the Δμ1, Δμ2, Δμ3, RSS, and Doyle metrics, perform on the synthetic scenes generated by E-on Vue and Unity 3D compared to real imagery of similar scenes. An exploration of the time required to develop the various aspects of the scene to enhance its realism is included, along with an overview of the difficulties associated with trying to recreate specific locations as a virtual scene. This work is an important start towards utilising virtual worlds for visible signature evaluation, and evaluating how equivalent synthetic imagery is to real photographs.

  12. Correlated Topic Vector for Scene Classification.

    Science.gov (United States)

    Wei, Pengxu; Qin, Fei; Wan, Fang; Zhu, Yi; Jiao, Jianbin; Ye, Qixiang

    2017-07-01

    Scene images usually involve semantic correlations, particularly when considering large-scale image data sets. This paper proposes a novel generative image representation, the correlated topic vector, to model such semantic correlations. Derived from the correlated topic model, the correlated topic vector is intended to naturally utilize the correlations among topics, which are seldom considered in conventional feature encoding, e.g., the Fisher vector, but do exist in scene images. It is expected that the involvement of correlations can increase the discriminative capability of the learned generative model and consequently improve the recognition accuracy. Incorporated with the Fisher kernel method, the correlated topic vector inherits the advantages of the Fisher vector. The contributions of visual words to the topics have been further employed within the Fisher kernel framework to indicate the differences among scenes. Combined with deep convolutional neural network (CNN) features and a Gibbs sampling solution, the correlated topic vector shows great potential when processing large-scale and complex scene image data sets. Experiments on two scene image data sets demonstrate that the correlated topic vector significantly improves on the deep CNN features and outperforms existing Fisher kernel-based features.

  13. Scene recognition and colorization for vehicle infrared images

    Science.gov (United States)

    Hou, Junjie; Sun, Shaoyuan; Shen, Zhenyi; Huang, Zhen; Zhao, Haitao

    2016-10-01

    In order to make better use of infrared technology in driving assistance systems, a scene recognition and colorization method is proposed in this paper. The various objects in a queried infrared image are detected and labelled with proper categories by a combination of SIFT-Flow and an MRF model. The queried image is then colorized by assigning corresponding colors according to the categories of the objects that appear. The results show that the strategy emphasizes the important information of the IR images for human vision and could be used to broaden the application of IR images to vehicle driving.

  14. Few-cycle nonlinear mid-IR pulse generated with cascaded quadratic nonlinearities

    DEFF Research Database (Denmark)

    Bache, Morten; Liu, Xing; Zhou, Binbin

    Generating few-cycle energetic and broadband mid-IR pulses is an urgent current challenge in nonlinear optics. Cascaded second-harmonic generation (SHG) gives access to an ultrafast and octave-spanning self-defocusing nonlinearity: when ΔkL >> 2π the pump experiences a Kerr-like nonlinear index...

  15. Portable generator-based XRF instrument for non-destructive analysis at crime scenes

    Energy Technology Data Exchange (ETDEWEB)

    Schweitzer, Jeffrey S. [University of Connecticut, Department of Physics, Unit 3046 Storrs, CT 06269-3046 (United States)]. E-mail: schweitz@phys.uconn.edu; Trombka, Jacob I. [Goddard Space Flight Center, Code 691, Greenbelt Road, Greenbelt, MD 20771 (United States); Floyd, Samuel [Goddard Space Flight Center, Code 691, Greenbelt Road, Greenbelt, MD 20771 (United States); Selavka, Carl [Massachusetts State Police Crime Laboratory, 59 Horse Pond Road, Sudbury, MA 01776 (United States); Zeosky, Gerald [Forensic Investigation Center, Crime Laboratory Building, 22 State Campus, Albany, NY 12226 (United States); Gahn, Norman [Assistant District Attorney, Milwaukee County, District Attorney' s Office, 821 West State Street, Milwaukee, WI 53233-1427 (United States); McClanahan, Timothy [Goddard Space Flight Center, Code 691, Greenbelt Road, Greenbelt, MD 20771 (United States); Burbine, Thomas [Goddard Space Flight Center, Code 691, Greenbelt Road, Greenbelt, MD 20771 (United States)

    2005-12-15

    Unattended and remote detection systems find applications in space exploration, telemedicine, teleforensics, homeland security and nuclear non-proliferation programs. The National Institute of Justice (NIJ) and the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center (GSFC) have teamed up to explore the use of NASA developed technologies to help criminal justice agencies and professionals investigate crimes. The objective of the program is to produce instruments and communication networks that have application within both NASA's space program and NIJ, together with state and local forensic laboratories. A general-purpose X-ray fluorescence system has been built for non-destructive analyses of trace and invisible material at crime scenes. This portable instrument is based on a generator that can operate to 60 kV and a Schottky CdTe detector. The instrument has been shown to be successful for the analysis of gunshot residue and a number of bodily fluids at crime scenes.

  16. Portable generator-based XRF instrument for non-destructive analysis at crime scenes

    International Nuclear Information System (INIS)

    Schweitzer, Jeffrey S.; Trombka, Jacob I.; Floyd, Samuel; Selavka, Carl; Zeosky, Gerald; Gahn, Norman; McClanahan, Timothy; Burbine, Thomas

    2005-01-01

    Unattended and remote detection systems find applications in space exploration, telemedicine, teleforensics, homeland security and nuclear non-proliferation programs. The National Institute of Justice (NIJ) and the National Aeronautics and Space Administration's (NASA) Goddard Space Flight Center (GSFC) have teamed up to explore the use of NASA developed technologies to help criminal justice agencies and professionals investigate crimes. The objective of the program is to produce instruments and communication networks that have application within both NASA's space program and NIJ, together with state and local forensic laboratories. A general-purpose X-ray fluorescence system has been built for non-destructive analyses of trace and invisible material at crime scenes. This portable instrument is based on a generator that can operate to 60 kV and a Schottky CdTe detector. The instrument has been shown to be successful for the analysis of gunshot residue and a number of bodily fluids at crime scenes

  17. Scene construction in schizophrenia.

    Science.gov (United States)

    Raffard, Stéphane; D'Argembeau, Arnaud; Bayard, Sophie; Boulenger, Jean-Philippe; Van der Linden, Martial

    2010-09-01

    Recent research has revealed that schizophrenia patients are impaired in remembering the past and imagining the future. In this study, we examined patients' ability to engage in scene construction (i.e., the process of mentally generating and maintaining a complex and coherent scene), which is a key part of retrieving past experiences and episodic future thinking. 24 participants with schizophrenia and 25 healthy controls were asked to imagine new fictitious experiences and described their mental representations of the scenes in as much detail as possible. Descriptions were scored according to various dimensions (e.g., sensory details, spatial reference), and participants also provided ratings of their subjective experience when imagining the scenes (e.g., their sense of presence, the perceived similarity of imagined events to past experiences). Imagined scenes contained less phenomenological details (d = 1.11) and were more fragmented (d = 2.81) in schizophrenia patients compared to controls. Furthermore, positive symptoms were positively correlated to the sense of presence (r = .43) and the perceived similarity of imagined events to past episodes (r = .47), whereas negative symptoms were negatively related to the overall richness of the imagined scenes (r = -.43). The results suggest that schizophrenic patients' impairments in remembering the past and imagining the future are, at least in part, due to deficits in the process of scene construction. The relationships between the characteristics of imagined scenes and positive and negative symptoms could be related to reality monitoring deficits and difficulties in strategic retrieval processes, respectively. Copyright 2010 APA, all rights reserved.

  18. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  19. Generation of various carbon nanostructures in water using IR/UV laser ablation

    International Nuclear Information System (INIS)

    Mortazavi, Seyedeh Zahra; Parvin, Parviz; Reyhani, Ali; Mirershadi, Soghra; Sadighi-Bonabi, Rasoul

    2013-01-01

    A wide variety of carbon nanostructures were generated by a Q-switched Nd:YAG laser (1064 nm), while mostly nanodiamonds were created by an ArF excimer laser (193 nm), in deionized water. They were characterized by transmission electron microscopy, Raman spectroscopy and X-ray photoelectron spectroscopy. It was found that the IR laser affected the morphology and structure of the nanostructures due to the higher inverse bremsstrahlung absorption rate within the plasma plume with respect to the UV laser. Moreover, laser-induced breakdown spectroscopy was carried out, showing that the plasma created by the IR laser was more energetic than that generated by the UV laser. (paper)

  20. Real-time maritime scene simulation for ladar sensors

    Science.gov (United States)

    Christie, Chad L.; Gouthas, Efthimios; Swierkowski, Leszek; Williams, Owen M.

    2011-06-01

    Continuing interest exists in the development of cost-effective synthetic environments for testing Laser Detection and Ranging (ladar) sensors. In this paper we describe a PC-based system for real-time ladar scene simulation of ships and small boats in a dynamic maritime environment. In particular, we describe the techniques employed to generate range imagery accompanied by passive radiance imagery. Our ladar scene generation system is an evolutionary extension of the VIRSuite infrared scene simulation program and includes all previous features such as ocean wave simulation, the physically-realistic representation of boat and ship dynamics, wake generation and simulation of whitecaps, spray, wake trails and foam. A terrain simulation extension is also under development. In this paper we outline the development, capabilities and limitations of the VIRSuite extensions.

  1. Invited Article: Multiple-octave spanning high-energy mid-IR supercontinuum generation in bulk quadratic nonlinear crystals

    Directory of Open Access Journals (Sweden)

    Binbin Zhou

    2016-08-01

    Full Text Available Bright and broadband coherent mid-IR radiation is important for exciting and probing molecular vibrations. Using cascaded nonlinearities in conventional quadratic nonlinear crystals like lithium niobate, self-defocusing near-IR solitons have been demonstrated that led to very broadband supercontinuum generation in the visible, near-IR, and short-wavelength mid-IR. Here we conduct an experiment where a mid-IR crystal is pumped in the mid-IR. The crystal is cut for noncritical interaction, so the three-wave mixing of a single mid-IR femtosecond pump source leads to highly phase-mismatched second-harmonic generation. This self-acting cascaded process leads to the formation of a self-defocusing soliton at the mid-IR pump wavelength, and after the self-compression point multiple octave-spanning supercontinua are observed. The results were recorded in a commercially available LiInS2 crystal pumped in the 3-4 μm range with 85 fs, 50 μJ pulses, with the broadest supercontinuum covering 1.6-7.0 μm. We measured up to 30 μJ of energy in the supercontinuum, and the energy promises to scale favorably with increased pump energy. Other mid-IR crystals can readily be used as well to cover other pump wavelengths and target other supercontinuum wavelength ranges.

  2. 3D Traffic Scene Understanding From Movable Platforms.

    Science.gov (United States)

    Geiger, Andreas; Lauer, Martin; Wojek, Christian; Stiller, Christoph; Urtasun, Raquel

    2014-05-01

    In this paper, we present a novel probabilistic generative model for multi-object traffic scene understanding from movable platforms which reasons jointly about the 3D scene layout as well as the location and orientation of objects in the scene. In particular, the scene topology, geometry, and traffic activities are inferred from short video sequences. Inspired by the impressive driving capabilities of humans, our model does not rely on GPS, lidar, or map knowledge. Instead, it takes advantage of a diverse set of visual cues in the form of vehicle tracklets, vanishing points, semantic scene labels, scene flow, and occupancy grids. For each of these cues, we propose likelihood functions that are integrated into a probabilistic generative model. We learn all model parameters from training data using contrastive divergence. Experiments conducted on videos of 113 representative intersections show that our approach successfully infers the correct layout in a variety of very challenging scenarios. To evaluate the importance of each feature cue, experiments using different feature combinations are conducted. Furthermore, we show how by employing context derived from the proposed method we are able to improve over the state-of-the-art in terms of object detection and object orientation estimation in challenging and cluttered urban environments.

  3. Chemical and physical parameters affecting the performance of the Os-191/Ir-191m generator

    International Nuclear Information System (INIS)

    Packard, A.B.; Butler, T.A.; Knapp, F.F.; O'Brien, G.M.; Treves, S.

    1984-01-01

    The development of an Os-191/Ir-191m generator suitable for radionuclide angiography in humans has elicited much interest. This generator employs ''(OsO2Cl4)2-'' on AG MP-1 anion exchange resin with a Dowex-2 scavenger column and is eluted with normal saline at pH 1. The parent Os species is, however, neither well-defined nor homogeneous, leading to less-than-optimal breakthrough of Os-191 (5 x 10^-3 %) and a modest Ir-191m yield (10-15%). The effect of a range of parameters on generator performance has been evaluated, as has the way in which the assembly and loading process affects generator performance. In addition, a number of potential alternative generator systems have been evaluated.

  4. Defining spatial relations in a specific ontology for automated scene creation

    Directory of Open Access Journals (Sweden)

    D. Contraş

    2013-06-01

    Full Text Available This paper presents an approach to building an ontology for automatic scene generation. Every scene contains various elements (backgrounds, characters, objects) which are spatially interrelated. The article focuses on the spatial and temporal relationships of the elements constituting a scene.

  5. Multi- and hyperspectral scene modeling

    Science.gov (United States)

    Borel, Christoph C.; Tuttle, Ronald F.

    2011-06-01

    This paper shows how to use the public domain raytracer POV-Ray (Persistence Of Vision Raytracer) to render multi- and hyper-spectral scenes. The scripting environment allows automatic changing of the reflectance and transmittance parameters. The radiosity rendering mode allows accurate simulation of multiple reflections between surfaces and also allows semi-transparent surfaces such as plant leaves. We show that POV-Ray computes occlusion accurately using a test scene with two blocks under a uniform sky. A complex scene representing a plant canopy is generated using a few lines of script. With appropriate rendering settings, shadows cast by leaves are rendered in many bands. Comparing single- and multiple-reflection renderings, the effect of multiple reflections is clearly visible and accounts for 25% of the overall apparent canopy reflectance in the near infrared.

  6. Application of a flow generated by IR laser and AC electric field in micropumping and micromixing

    International Nuclear Information System (INIS)

    Nakano, M; Mizuno, A

    2008-01-01

    This paper describes the measurement of fluid flow generated by the simultaneous operation of an infrared (IR) laser and an AC electric field in a microfabricated channel. When an IR laser (1026 nm) was focused under an intense AC electric field, a circulating flow was generated around the laser focus. The IR laser and the electric field generate two electrohydrodynamic flow patterns. When the laser focus is placed at the center of the gap between the electrodes, the flow is parallel to the AC electric field, directed from the centre toward the electrodes. On the other hand, when the laser focus is placed close to one of the electrodes, a unidirectional flow is generated. The first flow pattern can be used as a micromixer and the second one as a micropump. The flow velocity profiles of the two flow patterns were measured as a function of the laser power, the intensity of the AC electric field and the AC frequency.

  7. Joint Navy and Air Force Infrared Sensor Stimulator (IRSS) Program for Installed Systems Test Facilities (ISTFs)

    National Research Council Canada - National Science Library

    Joyner, Tom

    1998-01-01

    ...) sensors undergoing integrated developmental and operational testing. IRSS generates digital IR scenes in real time to provide a realistic portrayal of IR scene radiance as viewed by an IR system under test in a threat engagement scenario...

  8. Using VIS/NIR and IR spectral cameras for detecting and separating crime scene details

    Science.gov (United States)

    Kuula, Jaana; Pölönen, Ilkka; Puupponen, Hannu-Heikki; Selander, Tuomas; Reinikainen, Tapani; Kalenius, Tapani; Saari, Heikki

    2012-06-01

    Detecting invisible details and separating mixed evidence is critical for forensic inspection. If this can be done reliably and fast at the crime scene, irrelevant objects do not require further examination at the laboratory. This will speed up the inspection process and release resources for other critical tasks. This article reports on tests which have been carried out at the University of Jyväskylä in Finland, together with the Central Finland Police Department and the National Bureau of Investigation, for detecting and separating forensic details with hyperspectral technology. In the tests, evidence was sought at a mock violent burglary scene using VTT's 500-900 nm VNIR camera, Specim's 400-1000 nm VNIR camera, and Specim's 1000-2500 nm SWIR camera. The tested details were dried blood on a ceramic plate, a stain of four types of mixed and absorbed blood, and blood which had been washed off a table. Other examined details included untreated latent fingerprints, gunshot residue, primer residue, and layered paint on small pieces of wood. All cameras could detect the visible details and separate the mixed paint. The SWIR camera could also separate four types of human and animal blood which were mixed in the same stain and absorbed into a fabric. None of the cameras could, however, detect primer residue, untreated latent fingerprints, or blood that had been washed off. The results are encouraging and indicate the need for further studies. The results also emphasize the importance of creating optimal imaging conditions at the crime scene for each kind of subject and background.

  9. Multi-pollutants sensors based on near-IR telecom lasers and mid-IR difference frequency generation: development and applications

    International Nuclear Information System (INIS)

    Cousin, J.

    2006-12-01

    At present the detection of VOCs and other anthropogenic trace pollutants is an important challenge in the measurement of air quality. Infrared spectroscopy, allowing spectral regions rich in molecular absorption to be probed, is a suitable technique for in-situ monitoring of air pollution. The aim of this work was thus to develop instruments capable of detecting multiple pollutants for in-situ monitoring by IR spectroscopy. A first project benefited from the availability of telecommunications lasers emitting in the near-IR. This instrument was based on an external cavity diode laser (1500-1640 nm) in conjunction with a multipass cell (100 m). The detection sensitivity was optimised by employing balanced detection and a sweep-integration procedure. The instrument developed is deployable for in-situ measurements, with a sensitivity of 10^-8 cm^-1 Hz^-1/2, and allowed the quantification of chemical species such as CO2, CO, C2H2 and CH4 and the determination of the isotopic ratio 13CO2/12CO2 in combustion environments. The second project consisted of mixing two near-IR fiber lasers in a nonlinear crystal (PPLN) in order to produce laser radiation by difference frequency generation in the mid-IR (3.15-3.43 μm), where the absorption bands of the molecules are the most intense. The first studies with this source were carried out on the detection of ethylene (C2H4) and benzene (C6H6). The developments, characterizations and applications of these instruments in the near- and mid-IR are detailed and the advantages of the two spectral ranges are highlighted. (author)
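
    The difference-frequency bookkeeping behind the second instrument is simple energy conservation, 1/λ_idler = 1/λ_pump - 1/λ_signal, sketched below. The two near-IR fibre-laser wavelengths are assumed for illustration; the abstract only states that the generated idler covers 3.15-3.43 μm.

```python
# Sketch of difference-frequency-generation bookkeeping for the mid-IR source:
# 1/lambda_idler = 1/lambda_pump - 1/lambda_signal. The near-IR wavelengths
# below are assumed for illustration, not taken from the thesis.
def dfg_idler_um(pump_um, signal_um):
    return 1.0 / (1.0 / pump_um - 1.0 / signal_um)

pump_um = 1.064                          # assumed Yb-fibre pump wavelength
for signal_um in (1.55, 1.58, 1.61):     # assumed Er-fibre signal wavelengths
    print(f"signal {signal_um} um -> idler {dfg_idler_um(pump_um, signal_um):.2f} um")
```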

  10. Hydrogen generation from decomposition of hydrous hydrazine over Ni-Ir/CeO2 catalyst

    Directory of Open Access Journals (Sweden)

    Hongbin Dai

    2017-02-01

    Full Text Available The synthesis of highly active and selective catalysts is the central issue in the development of hydrous hydrazine (N2H4·H2O) as a viable hydrogen carrier. Herein, we report the synthesis of bimetallic Ni-Ir nanocatalysts supported on CeO2 using a one-pot coprecipitation method. A combination of XRD, HRTEM and XPS analyses indicates that the Ni-Ir/CeO2 catalyst is composed of tiny Ni-Ir alloy nanoparticles with an average size of around 4 nm and a crystalline CeO2 matrix. The Ni-Ir/CeO2 catalyst exhibits high catalytic activity and excellent selectivity towards hydrogen generation from N2H4·H2O at mild temperatures. Furthermore, in contrast to previously reported Ni-Pt catalysts, the Ni-Ir/CeO2 catalyst requires less alkali promoter to achieve its optimal catalytic performance.

  11. Semantic guidance of eye movements in real-world scenes.

    Science.gov (United States)

    Hwang, Alex D; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-05-25

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movements by semantic similarity among objects during real-world scene inspection and search. By selecting scenes from the LabelMe object-annotated image database and applying latent semantic analysis (LSA) to the object labels, we generated semantic saliency maps of real-world scenes based on the semantic similarity of scene objects to the currently fixated object or the search target. An ROC analysis of these maps as predictors of subjects' gaze transitions between objects during scene inspection revealed a preference for transitions to objects that were semantically similar to the currently inspected one. Furthermore, during the course of a scene search, subjects' eye movements were progressively guided toward objects that were semantically similar to the search target. These findings demonstrate substantial semantic guidance of eye movements in real-world scenes and show its importance for understanding real-world attentional control. Copyright © 2011 Elsevier Ltd. All rights reserved.
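
    The semantic saliency map described above can be approximated in a few lines once object annotations and label vectors are available. The sketch below only illustrates the general idea (cosine similarity between LSA label vectors, splatted as Gaussians at object centroids); the function names, the Gaussian spread and the normalization are assumptions, not the authors' implementation.

        import numpy as np

        def cosine(u, v):
            # Cosine similarity between two LSA label vectors.
            return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

        def semantic_saliency_map(objects, target_vec, shape, sigma=25.0):
            """objects: list of ((cx, cy), lsa_vector) pairs for annotated scene objects.
            Each object contributes a Gaussian blob weighted by its semantic similarity
            to the currently fixated or searched-for object (target_vec)."""
            h, w = shape
            ys, xs = np.mgrid[0:h, 0:w]
            sal = np.zeros(shape, dtype=float)
            for (cx, cy), vec in objects:
                weight = max(cosine(vec, target_vec), 0.0)
                sal += weight * np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2 * sigma ** 2))
            return sal / (sal.max() + 1e-12)  # normalize to [0, 1]

    Gaze transitions can then be scored by reading the map value at each candidate object's centroid, which is essentially what the ROC analysis above evaluates.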

  12. Automatic video surveillance of outdoor scenes using track before detect

    DEFF Research Database (Denmark)

    Hansen, Morten; Sørensen, Helge Bjarup Dissing; Birkemark, Christian M.

    2005-01-01

    This paper concerns automatic video surveillance of outdoor scenes using a single camera. The first step in automatic interpretation of the video stream is activity detection based on background subtraction. Usually, this process will generate a large number of false alarms in outdoor scenes due...
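
    The activity-detection stage mentioned above is commonly built on per-pixel background modelling. A minimal sketch using OpenCV's Gaussian-mixture background subtractor is given below; the paper does not specify its background model, so the choice of MOG2, the input file name and the thresholds are illustrative assumptions.

        import cv2

        cap = cv2.VideoCapture("outdoor_scene.avi")            # hypothetical input file
        backsub = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                                     detectShadows=True)
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            fgmask = backsub.apply(frame)                       # per-pixel foreground mask
            fgmask = cv2.morphologyEx(fgmask, cv2.MORPH_OPEN,   # suppress isolated noise pixels
                                      cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
            if cv2.countNonZero(fgmask) > 500:                  # crude activity trigger
                print("activity detected")
        cap.release()

    A track-before-detect stage, as in the paper, would then accumulate weak detections over several frames before declaring an alarm, which is what reduces the false alarms this simple trigger would produce.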

  13. Modeling and performance assessment in QinetiQ of EO and IR airborne reconnaissance systems

    Science.gov (United States)

    Williams, John W.; Potter, Gary E.

    2002-11-01

    QinetiQ are the technical authority responsible for specifying the performance requirements for the procurement of airborne reconnaissance systems, on behalf of the UK MoD. They are also responsible for acceptance of delivered systems, overseeing and verifying the installed system performance as predicted and then assessed by the contractor. Measures of functional capability are central to these activities. The conduct of these activities utilises the broad technical insight and wide range of analysis tools and models available within QinetiQ. This paper focuses on the tools, methods and models that are applicable to systems based on EO and IR sensors. The tools, methods and models are described, and representative output for systems that QinetiQ has been responsible for is presented. The principal capability applicable to EO and IR airborne reconnaissance systems is the STAR (Simulation Tools for Airborne Reconnaissance) suite of models. STAR generates predictions of performance measures such as GRD (Ground Resolved Distance) and GIQE (General Image Quality Equation) NIIRS (National Imagery Interpretability Rating Scale). It also generates images representing sensor output, using the scene generation software CAMEO-SIM and the imaging sensor model EMERALD. The simulated image 'quality' is fully correlated with the predicted non-imaging performance measures. STAR also generates image and table data that is compliant with STANAG 7023, which may be used to test ground station functionality.
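
    For readers unfamiliar with the GIQE, the commonly published GIQE 4.0 form predicts NIIRS from ground sample distance, edge response, overshoot and noise. The sketch below uses the coefficients as usually quoted in the open literature; it is not necessarily the exact GIQE version implemented in STAR, and the example numbers are arbitrary.

        import math

        def giqe4_niirs(gsd_inches, rer, h_overshoot, gain, snr):
            """Commonly quoted GIQE 4.0: NIIRS from ground sample distance (GSD, inches),
            relative edge response (RER), edge overshoot (H), noise gain (G) and SNR."""
            a, b = (3.32, 1.559) if rer >= 0.9 else (3.16, 2.817)
            return (10.251
                    - a * math.log10(gsd_inches)
                    + b * math.log10(rer)
                    - 0.656 * h_overshoot
                    - 0.344 * gain / snr)

        # Example: 10-inch GSD, RER 0.92, modest overshoot and noise
        print(round(giqe4_niirs(10.0, 0.92, 1.0, 1.0, 50.0), 2))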

  14. Energetic mid-IR femtosecond pulse generation by self-defocusing soliton-induced dispersive waves in a bulk quadratic nonlinear crystal

    DEFF Research Database (Denmark)

    Zhou, Binbin; Guo, Hairun; Bache, Morten

    2015-01-01

    Generating energetic femtosecond mid-IR pulses is crucial for ultrafast spectroscopy, and currently relies on parametric processes that, while efficient, are also complex. Here we experimentally show a simple alternative that uses a single pump wavelength without any pump synchronization and without critical phase-matching requirements. Pumping a bulk quadratic nonlinear crystal (unpoled LiNbO3 cut for noncritical phase-mismatched interaction) with sub-mJ near-IR 50-fs pulses, tunable and broadband (∼ 1,000 cm−1) mid-IR pulses around 3.0 μm are generated with excellent spatio-temporal pulse quality, having up to 10.5 μJ energy (6.3% conversion). The mid-IR pulses are dispersive waves phase-matched to near-IR self-defocusing solitons created by the induced self-defocusing cascaded nonlinearity. This process is filament-free and the input pulse energy can therefore be scaled arbitrarily by using large-aperture crystals. The technique can readily be implemented with other crystals and laser wavelengths, and can therefore potentially replace current ultrafast frequency-conversion processes to the mid-IR.

  15. Phase-matched generation of coherent soft and hard X-rays using IR lasers

    Science.gov (United States)

    Popmintchev, Tenio V.; Chen, Ming-Chang; Bahabad, Alon; Murnane, Margaret M.; Kapteyn, Henry C.

    2013-06-11

    Phase-matched high-order harmonic generation of soft and hard X-rays is accomplished using infrared driving lasers in a high-pressure non-linear medium. The pressure of the non-linear medium is increased to multiple atmospheres and a mid-IR (or longer-wavelength) laser device provides the driving pulse. Based on this scaling, a general method is also designed for globally optimizing the flux of phase-matched high-order harmonic generation at a desired wavelength.
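
    The reason longer-wavelength drivers reach harder X-rays is the single-atom cutoff scaling of high-harmonic generation. The sketch below uses the textbook three-step-model estimates (ponderomotive energy and the 3.17 Up cutoff rule); it is background material rather than a formula taken from the patent, and the intensity and ionization potential are illustrative.

        def ponderomotive_energy_eV(intensity_W_cm2, wavelength_um):
            # Standard ponderomotive energy: Up[eV] ~ 9.33e-14 * I[W/cm^2] * lambda[um]^2
            return 9.33e-14 * intensity_W_cm2 * wavelength_um ** 2

        def cutoff_photon_energy_eV(ip_eV, intensity_W_cm2, wavelength_um):
            # Three-step-model cutoff: E_cutoff ~ Ip + 3.17 * Up
            return ip_eV + 3.17 * ponderomotive_energy_eV(intensity_W_cm2, wavelength_um)

        # Example: helium (Ip ~ 24.6 eV) driven at 2e14 W/cm^2
        for lam in (0.8, 2.0, 3.9):   # near-IR vs mid-IR driver wavelengths (um)
            print(lam, "um ->", round(cutoff_photon_energy_eV(24.6, 2e14, lam), 1), "eV")

    The quadratic growth of the cutoff with driver wavelength is what motivates the mid-IR drivers and high-pressure phase matching described in the record.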

  16. Laguerre-Gauss beam generation in IR and UV by subwavelength surface-relief gratings

    DEFF Research Database (Denmark)

    Vertchenko, Larissa; Shkondin, Evgeniy; Malureanu, Radu

    2017-01-01

    ... layer depositions and dry etch techniques. We exploit the phenomenon of form birefringence to give rise to the spin-to-orbital angular momentum conversion. We demonstrate that these plates can generate beams with high quality for the UV and IR range, allowing them to interact with high power laser sources or inside laser cavities.

  17. Scene incongruity and attention.

    Science.gov (United States)

    Mack, Arien; Clarke, Jason; Erol, Muge; Bert, John

    2017-02-01

    Does scene incongruity (a mismatch between scene gist and a semantically incongruent object) capture attention and lead to conscious perception? We explored this question using 4 different procedures: Inattention (Experiment 1), Scene description (Experiment 2), Change detection (Experiment 3), and Iconic Memory (Experiment 4). We found no differences between scene incongruity and scene congruity in Experiments 1, 2, and 4, although in Experiment 3 change detection was faster for scenes containing an incongruent object. We offer an explanation for why the change detection results differ from the results of the other three experiments. In all four experiments, participants invariably failed to report the incongruity and routinely mis-described it by normalizing the incongruent object. None of the results supports the claim that semantic incongruity within a scene invariably captures attention; rather, they provide strong evidence of the dominant role of scene gist in determining what is perceived. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Improved content aware scene retargeting for retinitis pigmentosa patients

    Directory of Open Access Journals (Sweden)

    Al-Atabany Walid I

    2010-09-01

    Full Text Available Abstract Background: In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of the key features. The algorithm is scalable to implementation on portable devices, and thus has potential for augmented reality systems to provide visual support for those with tunnel vision. We therefore test the efficacy of our algorithm on shrinking the visual scene into the remaining field of view for those patients. Methods: Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm which removes low importance information, maintaining the size of the significant features. Previous approaches in this field have included seam carving, which removes low importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former method causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of both previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach. We then use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time without prior knowledge of future frames. Results: We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from a content preservation perspective and a compression quality perspective. Our technique has also been evaluated in a trial that included 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes by up to 50% with minimal distortion. We also demonstrate efficacy in its use for those with simulated tunnel vision of 22 degrees of field of view or less. Conclusions: Our approach allows us to perform content aware video
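
    The seam-based importance map mentioned in the Methods can be illustrated with a generic gradient-energy plus dynamic-programming formulation. The sketch below is not the authors' implementation; the energy measure and the vertical-seam cost recursion are standard seam-carving ingredients used here only to make the idea concrete.

        import numpy as np

        def gradient_energy(gray):
            # Simple importance measure: magnitude of horizontal and vertical gradients.
            gy, gx = np.gradient(gray.astype(float))
            return np.abs(gx) + np.abs(gy)

        def cumulative_seam_cost(energy):
            """Dynamic programming pass for vertical seams: cost[i, j] is the cheapest
            energy of any seam reaching pixel (i, j) from the top row."""
            cost = energy.astype(float).copy()
            h, w = cost.shape
            for i in range(1, h):
                left = np.roll(cost[i - 1], 1);   left[0] = np.inf
                right = np.roll(cost[i - 1], -1); right[-1] = np.inf
                cost[i] += np.minimum(np.minimum(left, cost[i - 1]), right)
            return cost

    Columns whose cheapest seams remain expensive are treated as important and are therefore shrunk less, which is the shrinkability map that drives the retargeting.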

  19. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    Science.gov (United States)

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase of the computational speed of the proposed method, allowing video-rate generation of the computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames of Fresnel CGH patterns with 1920×1080 pixels per second for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, and they confirm the feasibility of the proposed method in the practical application fields of electroholographic 3D displays.
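
    The Fresnel CGH computation that the look-up-table method accelerates can be sketched as a direct accumulation of point-source contributions. The code below is written in NumPy rather than CUDA and uses illustrative pixel pitch, wavelength and point coordinates; it shows only the underlying paraxial phase term, not the proposed motion-compensation or look-up-table machinery.

        import numpy as np

        def fresnel_cgh(points, nx=1920, ny=1080, pitch=8e-6, wavelength=532e-9):
            """Accumulate the Fresnel zone pattern of a 3D point cloud on the hologram plane.
            points: iterable of (x, y, z, amplitude) with z the depth from the hologram."""
            xs = (np.arange(nx) - nx / 2) * pitch
            ys = (np.arange(ny) - ny / 2) * pitch
            X, Y = np.meshgrid(xs, ys)
            field = np.zeros((ny, nx), dtype=complex)
            for px, py, pz, amp in points:
                # Paraxial (Fresnel) phase of a spherical wave from the object point
                phase = np.pi / (wavelength * pz) * ((X - px) ** 2 + (Y - py) ** 2)
                field += amp * np.exp(1j * phase)
            return np.angle(field)  # phase-only CGH pattern

        hologram = fresnel_cgh([(0.0, 0.0, 0.2, 1.0), (1e-3, -5e-4, 0.25, 0.8)])

    Because each of the thousands of object points touches every hologram pixel, table look-ups and GPU kernels of the kind described above are what make video-rate generation feasible.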

  20. Framework of passive millimeter-wave scene simulation based on material classification

    Science.gov (United States)

    Park, Hyuk; Kim, Sung-Hyun; Lee, Ho-Jin; Kim, Yong-Hoon; Ki, Jae-Sug; Yoon, In-Bok; Lee, Jung-Min; Park, Soon-Jun

    2006-05-01

    Over the past few decades, passive millimeter-wave (PMMW) sensors have emerged as useful implements in transportation and military applications such as autonomous flight-landing systems, smart weapons, and night and all-weather vision systems. As an efficient way to predict the performance of a PMMW sensor and apply it to a system, it needs to be tested in a SoftWare-In-the-Loop (SWIL) environment. PMMW scene simulation is a key component for implementation of this simulator. However, no commercial off-the-shelf tool is available to construct the PMMW scene simulation; there have been only a few studies on this technology. We have studied the PMMW scene simulation method to develop the PMMW sensor SWIL simulator. This paper describes the framework of the PMMW scene simulation and tentative results. The purpose of the PMMW scene simulation is to generate sensor outputs (or images) from a visible image and environmental conditions. We organize it into four parts: material classification mapping, PMMW environmental setting, PMMW scene forming, and millimeter-wave (MMW) sensorworks. The background and the objects in the scene are classified based on properties related to MMW radiation and reflectivity. The environmental setting part calculates the following PMMW phenomenology: atmospheric propagation and emission including sky temperature, weather conditions, and physical temperature. Then, PMMW raw images are formed with surface geometry. Finally, PMMW sensor outputs are generated from the PMMW raw images by applying sensor characteristics such as aperture size and noise level. Through the simulation process, PMMW phenomenology and sensor characteristics are simulated on the output scene. We have finished the design of the framework of the simulator and are working on the implementation in detail. As a tentative result, a flight observation was simulated under specific conditions. After the implementation details are complete, we plan to increase the reliability of the simulation by collecting data
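
    The radiometric core of the "PMMW scene forming" step can be illustrated with the usual brightness-temperature mixing of surface emission and reflected sky radiation, attenuated by the atmosphere. The function below is a toy model with made-up emissivities, temperatures and opacity; the actual simulator additionally handles surface geometry, weather and sensor effects.

        import math

        def apparent_temperature(emissivity, t_phys_K, t_sky_K, opacity, t_atm_K=280.0):
            """Apparent MMW brightness temperature of a surface seen through the atmosphere.
            Surface term: e*Tphys + (1-e)*Tsky; then attenuated and topped up by path emission."""
            t_surface = emissivity * t_phys_K + (1.0 - emissivity) * t_sky_K
            trans = math.exp(-opacity)                      # one-way atmospheric transmission
            return trans * t_surface + (1.0 - trans) * t_atm_K

        # A low-emissivity metal plate looks "cold" because it reflects the cold sky.
        print(apparent_temperature(0.05, 290.0, 60.0, 0.1))   # metal plate
        print(apparent_temperature(0.90, 290.0, 60.0, 0.1))   # asphalt-like surface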

  1. Hierarchy-associated semantic-rule inference framework for classifying indoor scenes

    Science.gov (United States)

    Yu, Dan; Liu, Peng; Ye, Zhipeng; Tang, Xianglong; Zhao, Wei

    2016-03-01

    Typically, the initial task of classifying indoor scenes is challenging, because the spatial layout and decoration of a scene can vary considerably. Recent efforts at classifying object relationships commonly depend on the results of scene annotation and predefined rules, making classification inflexible. Furthermore, annotation results are easily affected by external factors. Inspired by human cognition, a scene-classification framework was proposed using the empirically based annotation (EBA) and a match-over rule-based (MRB) inference system. The semantic hierarchy of images is exploited by EBA to construct rules empirically for MRB classification. The problem of scene classification is divided into low-level annotation and high-level inference from a macro perspective. Low-level annotation involves detecting the semantic hierarchy and annotating the scene with a deformable-parts model and a bag-of-visual-words model. In high-level inference, hierarchical rules are extracted to train the decision tree for classification. The categories of testing samples are generated from the parts to the whole. Compared with traditional classification strategies, the proposed semantic hierarchy and corresponding rules reduce the effect of a variable background and improve the classification performance. The proposed framework was evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  2. Recent Progress on the Second Generation CMORPH: LEO-IR Based Precipitation Estimates and Cloud Motion Vector

    Science.gov (United States)

    Xie, Pingping; Joyce, Robert; Wu, Shaorong

    2015-04-01

    As reported at the EGU General Assembly of 2014, a prototype system was developed for the second generation CMORPH to produce global analyses of 30-min precipitation on a 0.05° lat/lon grid over the entire globe from pole to pole through integration of information from satellite observations as well as numerical model simulations. The second generation CMORPH is built upon the Kalman Filter based CMORPH algorithm of Joyce and Xie (2011). Inputs to the system include rainfall and snowfall rate retrievals from passive microwave (PMW) measurements aboard all available low earth orbit (LEO) satellites, precipitation estimates derived from infrared (IR) observations of geostationary (GEO) as well as LEO platforms, and precipitation simulations from numerical global models. Key to the success of the 2nd generation CMORPH, among a couple of other elements, are the development of a LEO-IR based precipitation estimation to fill in the polar gaps and objectively analyzed cloud motion vectors to capture the cloud movements of various spatial scales over the entire globe. In this presentation, we report our recent work on the refinement of these two important algorithm components. The prototype algorithm for the LEO IR precipitation estimation is refined to achieve improved quantitative accuracy and consistency with PMW retrievals. AVHRR IR TBB data from all LEO satellites are first remapped to a 0.05° lat/lon grid over the entire globe and in a 30-min interval. Temporally and spatially co-located data pairs of the LEO TBB and inter-calibrated combined satellite PMW retrievals (MWCOMB) are then collected to construct tables. Precipitation at a grid box is derived from the TBB through matching the PDF tables for the TBB and the MWCOMB. This procedure is implemented for different seasons, latitude bands and underlying surface types to account for variations in the cloud - precipitation relationship. In the meantime, a sub-system is developed to construct analyzed fields of
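
    The PDF-matching step described above can be reduced to matching cumulative distributions of collocated IR brightness temperatures and PMW rain rates. The sketch below is a stripped-down illustration; the array names, bin count and the simple quantile pairing are assumptions, and in the real system separate tables are built per season, latitude band and surface type.

        import numpy as np

        def build_tbb_to_rain_table(tbb_samples, rain_samples, n_bins=200):
            """Match CDFs of collocated IR brightness temperatures and PMW rain rates.
            Cold TBB (low TBB percentile) is mapped to heavy rain (high rain percentile)."""
            tbb_edges = np.quantile(tbb_samples, np.linspace(0.0, 1.0, n_bins))
            # Reverse the rain quantiles so the coldest clouds get the largest rates.
            rain_for_tbb = np.quantile(rain_samples, np.linspace(1.0, 0.0, n_bins))
            return tbb_edges, rain_for_tbb

        def tbb_to_rain(tbb, tbb_edges, rain_for_tbb):
            # Piecewise-linear lookup of rain rate from brightness temperature.
            return np.interp(tbb, tbb_edges, rain_for_tbb)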

  3. General review of multispectral cooled IR development at CEA-Leti, France

    Science.gov (United States)

    Boulard, F.; Marmonier, F.; Grangier, C.; Adelmini, L.; Gravrand, O.; Ballet, P.; Baudry, X.; Baylet, J.; Badano, G.; Espiau de Lamaestre, R.; Bisotto, S.

    2017-02-01

    Multicolor detection capabilities, which bring information on the thermal and chemical composition of the scene, are desirable for advanced infrared (IR) imaging systems. This communication reviews intra-band and multi-band solutions developed at CEA-Leti, from dual-band molecular beam epitaxy grown Mercury Cadmium Telluride (MCT) photodiodes to plasmon-enhanced multicolor IR detectors and backside pixelated filters. Spectral responses, quantum efficiency and detector noise performance, and pros and cons at the overall system level are discussed with regard to technology maturity, pixel pitch reduction, and affordability. From broadband MWIR-LWIR detection to peaked detection within the MWIR or LWIR bands, the results underline the full range of possibilities developed at CEA-Leti.

  4. Categorization of natural dynamic audiovisual scenes.

    Directory of Open Access Journals (Sweden)

    Olli Rummukainen

    Full Text Available This work analyzed the perceptual attributes of natural dynamic audiovisual scenes. We presented thirty participants with 19 natural scenes in a similarity categorization task, followed by a semi-structured interview. The scenes were reproduced with an immersive audiovisual display. Natural scene perception has been studied mainly with unimodal settings, which have identified motion as one of the most salient attributes related to visual scenes, and sound intensity along with pitch trajectories related to auditory scenes. However, controlled laboratory experiments with natural multimodal stimuli are still scarce. Our results show that humans pay attention to similar perceptual attributes in natural scenes, and a two-dimensional perceptual map of the stimulus scenes and perceptual attributes was obtained in this work. The exploratory results show the amount of movement, perceived noisiness, and eventfulness of the scene to be the most important perceptual attributes in naturalistically reproduced real-world urban environments. We found the scene gist properties openness and expansion to remain as important factors in scenes with no salient auditory or visual events. We propose that the study of scene perception should move forward to understand better the processes behind multimodal scene processing in real-world environments. We publish our stimulus scenes as spherical video recordings and sound field recordings in a publicly available database.

  5. Smart IR780 Theranostic Nanocarrier for Tumor-Specific Therapy: Hyperthermia-Mediated Bubble-Generating and Folate-Targeted Liposomes.

    Science.gov (United States)

    Guo, Fang; Yu, Meng; Wang, Jinping; Tan, Fengping; Li, Nan

    2015-09-23

    The therapeutic effectiveness of chemotherapy is hampered by dose-limiting toxicity and is optimal only when tumor cells are subjected to maximum drug exposure. The purpose of this work was to design a dual-functional thermosensitive bubble-generating liposome (BTSL) combined with a conjugated targeting ligand (folate, FA) and a photothermal agent (IR780), to realize enhanced therapeutic and diagnostic functions. This drug carrier was designed to target tumor cells owing to FA-specific binding, and then to trigger drug release through the decomposition of encapsulated ammonium bicarbonate (NH4HCO3), which generates CO2 bubbles under near-infrared (near-IR) laser irradiation, creating permeable defects in the lipid bilayer that rapidly release the drug. An in vitro temperature-triggered release study indicated the BTSL system was sensitive to heat triggering, resulting in rapid drug release under hyperthermia. For in vitro cellular uptake experiments, different results were observed for human epidermoid carcinoma cells (KB cells) and human lung cancer cells (A549 cells) due to their different (positive or negative) response to the FA receptor. Furthermore, in vivo biodistribution analysis and an antitumor study indicated IR780-BTSL-FA could specifically target KB tumor cells, exhibiting a longer circulation time than the free drug. In the pharmacodynamics experiments, IR780-BTSL-FA efficiently inhibited tumor growth in nude mice with no evident side effects on normal tissues and organs. Results of this study demonstrated that the constructed smart theranostic nanocarrier IR780-BTSL-FA might contribute to the establishment of tumor-selective and effective chemotherapy.

  6. Generating mid-IR octave-spanning supercontinua and few-cycle pulses with solitons in phase-mismatched quadratic nonlinear crystals

    DEFF Research Database (Denmark)

    Bache, Morten; Guo, Hairun; Zhou, Binbin

    2013-01-01

    We discuss a novel method for generating octave-spanning supercontinua and few-cycle pulses in the important mid-IR wavelength range. The technique relies on strongly phase-mismatched cascaded second-harmonic generation (SHG) in mid-IR nonlinear frequency conversion crystals. Importantly we here...... of the promising crystals: in one case soliton pulse compression from 50 fs to 15 fs (1.5 cycles) at 3.0 μm is achieved, and at the same time a 3-cycle dispersive wave at 5.0 μm is formed that can be isolated using a long-pass filter. In another example we show that extremely broadband supercontinua can form...
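
    For context, the self-defocusing cascaded nonlinearity invoked here is usually estimated with the standard expression for the SHG-induced Kerr index. The sketch below is illustrative background, not a calculation from the paper; the LiNbO3-like d_eff, refractive indices and phase mismatch are placeholder values.

        import math

        EPS0 = 8.854e-12   # vacuum permittivity (F/m)
        C = 2.998e8        # speed of light (m/s)

        def n2_cascaded(d_eff, delta_k, wavelength1, n1, n2):
            """Commonly used estimate of the cascaded (SHG-induced) intensity Kerr index:
            n2_casc = -2*w1*d_eff^2 / (c^2*eps0*n1^2*n2*delta_k); it is negative
            (self-defocusing) when the phase mismatch delta_k = k2 - 2*k1 is positive."""
            w1 = 2 * math.pi * C / wavelength1
            return -2 * w1 * d_eff ** 2 / (C ** 2 * EPS0 * n1 ** 2 * n2 * delta_k)

        # Illustrative LiNbO3-like numbers: d_eff ~ 20 pm/V, delta_k ~ 5e5 1/m at 1.3 um
        print(n2_cascaded(20e-12, 5e5, 1.3e-6, 2.2, 2.2))   # m^2/W, negative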

  7. New impressive capabilities of SE-workbench for EO/IR real-time rendering of animated scenarios including flares

    Science.gov (United States)

    Le Goff, Alain; Cathala, Thierry; Latger, Jean

    2015-10-01

    To provide technical assessments of EO/IR flares and self-protection systems for aircraft, DGA Information Superiority resorts to synthetic image generation to model the operational battlefield of an aircraft, as viewed by EO/IR threats. For this purpose, it extended the SE-Workbench suite from OKTAL-SE with functionalities to predict a realistic aircraft IR signature and is now integrating the real-time EO/IR rendering engine of SE-Workbench, called SE-FAST-IR. This engine is a set of physics-based software and libraries that allows preparing and visualizing a 3D scene for the EO/IR domain. It takes advantage of recent advances in GPU computing techniques. The recent evolutions concern mainly the realistic and physical rendering of reflections, the rendering of both radiative and thermal shadows, the use of procedural techniques for managing and rendering very large terrains, the implementation of Image-Based Rendering for dynamic interpolation of plume static signatures, and lastly, for aircraft, the dynamic interpolation of thermal states. The next step is the representation of the spectral, directional, spatial and temporal signature of flares by Lacroix Defense using OKTAL-SE technology. This representation is prepared from experimental data acquired during windblast tests and high speed track tests. It is based on particle system mechanisms to model the different components of a flare. The validation of a flare model will comprise a simulation of real trials and a comparison of simulation outputs to experimental results concerning the flare signature and, above all, the behavior of the stimulated threat.

  8. Scene Integration Without Awareness: No Conclusive Evidence for Processing Scene Congruency During Continuous Flash Suppression.

    Science.gov (United States)

    Moors, Pieter; Boelens, David; van Overwalle, Jaana; Wagemans, Johan

    2016-07-01

    A recent study showed that scenes with an object-background relationship that is semantically incongruent break interocular suppression faster than scenes with a semantically congruent relationship. These results implied that semantic relations between the objects and the background of a scene could be extracted in the absence of visual awareness of the stimulus. In the current study, we assessed the replicability of this finding and tried to rule out an alternative explanation dependent on low-level differences between the stimuli. Furthermore, we used a Bayesian analysis to quantify the evidence in favor of the presence or absence of a scene-congruency effect. Across three experiments, we found no convincing evidence for a scene-congruency effect or a modulation of scene congruency by scene inversion. These findings question the generalizability of previous observations and cast doubt on whether genuine semantic processing of object-background relationships in scenes can manifest during interocular suppression. © The Author(s) 2016.

  9. The design and application of a multi-band IR imager

    Science.gov (United States)

    Li, Lijuan

    2018-02-01

    Multi-band IR imaging systems have many applications in security, national defense, and the petroleum and gas industry, so the relevant technologies have been getting more and more attention in recent years. When used in missile warning and missile seeker systems, multi-band IR imaging technology offers high target recognition capability and a low false alarm rate if suitable spectral bands are selected. Compared with a traditional single-band IR imager, a multi-band IR imager can make use of spectral features in addition to spatial and temporal features to discriminate targets from background clutter and decoys. One key task is therefore to select the right spectral bands, in which the feature difference between targets and false targets is evident and can be well utilized. A multi-band IR imager is a useful instrument for collecting multi-band IR images of targets, backgrounds and decoys for spectral band selection studies, at low cost and with more adjustable parameters than a commercial imaging spectrometer. In this paper, a multi-band IR imaging system is developed which can collect images in four spectral bands of various scenes in a single acquisition and can be extended to other short-wave and mid-wave IR spectral band combinations by changing filter groups. The multi-band IR imaging system consists of a broad-band optical system, a cryogenic InSb large-array detector, a spinning filter wheel and an electronic processing system. The system's performance has been tested in real data collection experiments.

  10. SCEGRAM: An image database for semantic and syntactic inconsistencies in scenes.

    Science.gov (United States)

    Öhlschläger, Sabine; Võ, Melissa Le-Hoa

    2017-10-01

    Our visual environment is not random, but follows compositional rules according to what objects are usually found where. Despite the growing interest in how such semantic and syntactic rules - a scene grammar - enable effective attentional guidance and object perception, no common image database containing highly controlled object-scene modifications has been publicly available. Such a database is essential in minimizing the risk that low-level features drive high-level effects of interest, which is being discussed as a possible source of controversial study results. To generate the first database of this kind - SCEGRAM - we took photographs of 62 real-world indoor scenes in six consistency conditions that contain semantic and syntactic (both mild and extreme) violations as well as their combinations. Importantly, scenes were always paired, so that an object was semantically consistent in one scene (e.g., ketchup in a kitchen) and inconsistent in the other (e.g., ketchup in a bathroom). Low-level salience did not differ between object-scene conditions and was generally moderate. Additionally, SCEGRAM contains consistency ratings for every object-scene condition, as well as object-absent scenes and object-only images. Finally, a cross-validation using eye movements replicated previous results of longer dwell times for both semantic and syntactic inconsistencies compared to consistent controls. In sum, the SCEGRAM image database is the first to contain well-controlled semantic and syntactic object-scene inconsistencies that can be used in a broad range of cognitive paradigms (e.g., verbal and pictorial priming, change detection, object identification, etc.), including paradigms addressing developmental aspects of scene grammar. SCEGRAM can be retrieved for research purposes from http://www.scenegrammarlab.com/research/scegram-database/ .

  11. Scene perception in posterior cortical atrophy: categorization, description and fixation patterns.

    Science.gov (United States)

    Shakespeare, Timothy J; Yong, Keir X X; Frost, Chris; Kim, Lois G; Warrington, Elizabeth K; Crutch, Sebastian J

    2013-01-01

    Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (color and grayscale) using a categorization paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for color over grayscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  12. Scene perception in Posterior Cortical Atrophy: categorisation, description and fixation patterns

    Directory of Open Access Journals (Sweden)

    Timothy J Shakespeare

    2013-10-01

    Full Text Available Partial or complete Balint's syndrome is a core feature of the clinico-radiological syndrome of posterior cortical atrophy (PCA), in which individuals experience a progressive deterioration of cortical vision. Although multi-object arrays are frequently used to detect simultanagnosia in the clinical assessment and diagnosis of PCA, to date there have been no group studies of scene perception in patients with the syndrome. The current study involved three linked experiments conducted in PCA patients and healthy controls. Experiment 1 evaluated the accuracy and latency of complex scene perception relative to individual faces and objects (colour and greyscale) using a categorisation paradigm. PCA patients were both less accurate (faces < scenes < objects) and slower (scenes < objects < faces) than controls on all categories, with performance strongly associated with their level of basic visual processing impairment; patients also showed a small advantage for colour over greyscale stimuli. Experiment 2 involved free description of real world scenes. PCA patients generated fewer features and more misperceptions than controls, though perceptual errors were always consistent with the patient's global understanding of the scene (whether correct or not). Experiment 3 used eye tracking measures to compare patient and control eye movements over initial and subsequent fixations of scenes. Patients' fixation patterns were significantly different to those of young and age-matched controls, with comparable group differences for both initial and subsequent fixations. Overall, these findings describe the variability in everyday scene perception exhibited by individuals with PCA, and indicate the importance of exposure duration in the perception of complex scenes.

  13. Innovations in IR projector arrays

    Science.gov (United States)

    Cole, Barry E.; Higashi, B.; Ridley, Jeff A.; Holmen, J.; Newstrom, K.; Zins, C.; Nguyen, K.; Weeres, Steven R.; Johnson, Burgess R.; Stockbridge, Robert G.; Murrer, Robert Lee; Olson, Eric M.; Bergin, Thomas P.; Kircher, James R.; Flynn, David S.

    2000-07-01

    In the past year, Honeywell has developed a 512 X 512 snapshot scene projector containing pixels with very high radiance efficiency. The array can operate in both snapshot and raster mode. The array pixels have near black body characteristics, high radiance outputs, broad band performance, and high speed. IR measurements and performance of these pixels will be described. In addition, a vacuum probe station that makes it possible to select the best die for packaging and delivery based on wafer level radiance screening, has been developed and is in operation. This system, as well as other improvements, will be described. Finally, a review of the status of the present projectors and plans for future arrays is included.

  14. Narrative Collage of Image Collections by Scene Graph Recombination.

    Science.gov (United States)

    Fang, Fei; Yi, Miao; Feng, Hui; Hu, Shenghong; Xiao, Chunxia

    2017-10-04

    Narrative collage is an interesting image editing art that summarizes the main theme or storyline behind an image collection. We present a novel method to generate narrative images with plausible semantic scene structures. To achieve this goal, we introduce a layer graph and a scene graph to represent the relative depth order and the semantic relationships between image objects, respectively. We first cluster the input image collection to select representative images, and then extract a group of semantically salient objects from each representative image. Both layer graphs and scene graphs are constructed and combined according to our specific rules for reorganizing the extracted objects in every image. We design an energy model to appropriately locate every object on the final canvas. Experimental results show that our method can produce competitive narrative collage results and works well on a wide range of image collections.

  15. Automatic temperature computation for realistic IR simulation

    Science.gov (United States)

    Le Goff, Alain; Kersaudy, Philippe; Latger, Jean; Cathala, Thierry; Stolte, Nilo; Barillot, Philippe

    2000-07-01

    Polygon temperature computation in 3D virtual scenes is fundamental for IR image simulation. This article describes in detail the temperature calculation software and its current extensions, briefly presented in [1]. This software, called MURET, is used by the simulation workshop CHORALE of the French DGA. MURET is a one-dimensional thermal software package which accurately takes into account the material thermal attributes of the three-dimensional scene and the variation of the environment characteristics (atmosphere) as a function of time. Concerning the environment, absorbed incident fluxes are computed wavelength by wavelength, every half hour, during the 24 hours before the time of the simulation. For each polygon, incident fluxes are composed of direct solar fluxes and sky illumination (including diffuse solar fluxes). Concerning the materials, classical thermal attributes are associated with several layers; properties such as conductivity, absorption, spectral emissivity, density, specific heat, thickness and convection coefficients are taken into account. In the future, MURET will be able to simulate permeable natural materials (water influence) and vegetation (woods). This model of thermal attributes allows very accurate polygon temperature computation for the complex 3D databases often found in CHORALE simulations. The kernel of MURET consists of an efficient ray tracer, which computes the history (over 24 hours) of the shadowed parts of the 3D scene, and a library responsible for the thermal computations. The great originality concerns the way the heating fluxes are computed. Using ray tracing, the flux received at each 3D point of the scene accurately takes into account the masking (hidden surfaces) between objects. In this way, the library also supplies other thermal modules, such as a thermal shadows computation tool.
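
    The one-dimensional thermal solution that MURET performs can be illustrated with a basic explicit finite-difference slab driven by an absorbed flux and convection at the front face. The sketch below is a toy model with invented material properties and boundary conditions; it is not MURET's solver and ignores the spectral flux history and multi-layer materials described above.

        import numpy as np

        def slab_temperature(absorbed_flux_W_m2, hours=24.0, dt=1.0,
                             thickness=0.1, n_nodes=20, k=1.0, rho=2000.0, cp=900.0,
                             h_conv=10.0, t_air=293.0):
            """Explicit 1D conduction in a slab: absorbed radiative flux plus convection at
            the front node, adiabatic back face. Returns the front-face temperature history."""
            dx = thickness / (n_nodes - 1)
            alpha = k / (rho * cp)
            assert alpha * dt / dx ** 2 < 0.5, "explicit scheme stability limit"
            T = np.full(n_nodes, t_air)
            history = []
            for _ in range(int(hours * 3600 / dt)):
                Tn = T.copy()
                Tn[1:-1] = T[1:-1] + alpha * dt / dx ** 2 * (T[2:] - 2 * T[1:-1] + T[:-2])
                q_front = absorbed_flux_W_m2 + h_conv * (t_air - T[0])   # W/m^2 into node 0
                Tn[0] = T[0] + dt * (q_front / (rho * cp * dx)
                                     + alpha / dx ** 2 * (T[1] - T[0]))
                Tn[-1] = Tn[-2]                                          # adiabatic back face
                T = Tn
                history.append(T[0])
            return np.array(history)

    Feeding such a solver a 24-hour history of absorbed fluxes, computed polygon by polygon with the ray tracer, is essentially what produces the kinetic surface temperatures used for IR rendering.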

  16. Cross-sensor comparisons between Landsat 5 TM and IRS-P6 AWiFS and disturbance detection using integrated Landsat and AWiFS time-series images

    Science.gov (United States)

    Chen, Xuexia; Vogelmann, James E.; Chander, Gyanesh; Ji, Lei; Tolk, Brian; Huang, Chengquan; Rollins, Matthew

    2013-01-01

    Routine acquisition of Landsat 5 Thematic Mapper (TM) data was discontinued recently, and Landsat 7 Enhanced Thematic Mapper Plus (ETM+) has an ongoing problem with the scan line corrector (SLC), which creates spatial gaps in the acquired images. Since temporal and spatial discontinuities of Landsat data are now imminent, it is important to investigate other potential satellite data that can be used to replace Landsat data. We thus cross-compared two near-simultaneous images obtained from Landsat 5 TM and the Indian Remote Sensing (IRS)-P6 Advanced Wide Field Sensor (AWiFS), both captured on 29 May 2007 over Los Angeles, CA. TM and AWiFS reflectances were compared for the green, red, near-infrared (NIR), and shortwave infrared (SWIR) bands, as well as the normalized difference vegetation index (NDVI), based on manually selected polygons in homogeneous areas. All R2 values of the linear regressions were higher than 0.99. The temporally invariant cluster (TIC) method was used to calculate the NDVI correlation between the TM and AWiFS images. The NDVI regression line derived from the selected polygons passed through several invariant cluster centres of the TIC density maps and demonstrated that both the scene-dependent polygon regression method and the TIC method can generate accurate radiometric normalization. A scene-independent normalization method was also used to normalize the AWiFS data. Image agreement assessment demonstrated that the scene-dependent normalization using homogeneous polygons provided slightly higher accuracy values than those obtained by the scene-independent method. Finally, the non-normalized and relatively normalized ‘Landsat-like’ AWiFS 2007 images were integrated into 1984 to 2010 Landsat time-series stacks (LTSS) for disturbance detection using the Vegetation Change Tracker (VCT) model. Both scene-dependent and scene-independent normalized AWiFS data sets could generate disturbance maps similar to
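
    The band arithmetic behind the comparison is compact enough to sketch: NDVI from the red and NIR bands, and a scene-dependent gain/offset fitted on homogeneous-polygon means so that AWiFS reflectance mimics TM. The arrays and polygon statistics below are placeholders; this is the generic workflow, not the authors' exact processing chain.

        import numpy as np

        def ndvi(nir, red):
            # Normalized difference vegetation index from NIR and red reflectance bands.
            return (nir - red) / (nir + red + 1e-9)

        def fit_normalization(awifs_means, tm_means):
            """Least-squares gain/offset so that AWiFS reflectance mimics Landsat TM,
            using mean values extracted over manually selected homogeneous polygons."""
            gain, offset = np.polyfit(awifs_means, tm_means, deg=1)
            return gain, offset

        def normalize(awifs_band, gain, offset):
            return gain * awifs_band + offset   # 'Landsat-like' AWiFS band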

  17. Underwater Scene Composition

    Science.gov (United States)

    Kim, Nanyoung

    2009-01-01

    In this article, the author describes an underwater scene composition for elementary-education majors. This project deals with watercolor with crayon or oil-pastel resist (medium); the beauty of nature represented by fish in the underwater scene (theme); texture and pattern (design elements); drawing simple forms (drawing skill); and composition…

  18. Scene-Based Contextual Cueing in Pigeons

    Science.gov (United States)

    Wasserman, Edward A.; Teng, Yuejia; Brooks, Daniel I.

    2014-01-01

    Repeated pairings of a particular visual context with a specific location of a target stimulus facilitate target search in humans. We explored an animal model of such contextual cueing. Pigeons had to peck a target which could appear in one of four locations on color photographs of real-world scenes. On half of the trials, each of four scenes was consistently paired with one of four possible target locations; on the other half of the trials, each of four different scenes was randomly paired with the same four possible target locations. In Experiments 1 and 2, pigeons exhibited robust contextual cueing when the context preceded the target by 1 s to 8 s, with reaction times to the target being shorter on predictive-scene trials than on random-scene trials. Pigeons also responded more frequently during the delay on predictive-scene trials than on random-scene trials; indeed, during the delay on predictive-scene trials, pigeons predominately pecked toward the location of the upcoming target, suggesting that attentional guidance contributes to contextual cueing. In Experiment 3, involving left-right and top-bottom scene reversals, pigeons exhibited stronger control by global than by local scene cues. These results attest to the robustness and associative basis of contextual cueing in pigeons. PMID:25546098

  19. Infrared (IR) vs x-ray power generation in the SLAC Linac Coherent Light Source (LCLS)

    International Nuclear Information System (INIS)

    Tatchyn, R.

    1993-05-01

    The LCLS, a Free-Electron Laser (FEL) designed for operation at a first harmonic energy of 300 eV (λ ≅ 40 Angstrom) in the Self-Amplified Spontaneous Emission (SASE) regime, will utilize electron bunches compressed down to durations of <0.5 ps, or lengths of <150 μm. It is natural to inquire whether coherent radiation of this (and longer) wavelength will constitute a significant component of the total coherent output of the FEL. In this paper a determination of a simple upper bound on the IR that can be generated by the compressed bunches is outlined. Under the assumed operating parameters of the LCLS undulator, it is shown that the IR component of the coherent output should be strongly dominated by the x-ray component
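
    A back-of-the-envelope version of such an upper bound follows from the standard bunch form-factor argument for coherent emission. The sketch below is textbook scaling, not the paper's derivation; the electron count and rms bunch length are only indicative.

        import math

        def coherent_enhancement(n_electrons, sigma_z_m, wavelength_m):
            """Ratio of total emission to N times the single-electron emission at a given
            wavelength: 1 + (N-1)*|F|^2, with a Gaussian bunch form factor
            |F(lambda)|^2 = exp(-(2*pi*sigma_z/lambda)^2)."""
            form_factor_sq = math.exp(-(2 * math.pi * sigma_z_m / wavelength_m) ** 2)
            return 1.0 + (n_electrons - 1) * form_factor_sq

        N = 6e9                 # electrons per bunch (indicative, ~1 nC)
        sigma_z = 20e-6         # rms bunch length after compression (indicative)
        for lam in (0.5e-6, 50e-6, 500e-6):     # near-IR vs far-IR wavelengths (m)
            print(lam, coherent_enhancement(N, sigma_z, lam))

    Only wavelengths comparable to or longer than the bunch length see a large coherent enhancement, which is why the bound on coherent IR depends so strongly on the compressed bunch length quoted above.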

  20. Clandestine laboratory scene investigation and processing using portable GC/MS

    Science.gov (United States)

    Matejczyk, Raymond J.

    1997-02-01

    This presentation describes the use of portable gas chromatography/mass spectrometry for on-scene investigation and processing of clandestine laboratories. Clandestine laboratory investigations present special problems to forensic investigators. These crime scenes contain many chemical hazards that must be detected, identified and collected as evidence. Gas chromatography/mass spectrometry performed on-scene with a rugged, portable unit is capable of analyzing a variety of matrices for drugs and chemicals used in the manufacture of illicit drugs, such as methamphetamine. Technologies used to detect various materials at a scene have particular applications but do not address the wide range of samples, chemicals, matrices and mixtures that exist in clan labs. Typical analyses performed by GC/MS are for the purpose of positively establishing the identity of starting materials, chemicals and end-products collected from clandestine laboratories. Concerns for public and investigator safety and for the environment are also important factors for rapid on-scene data generation. Described here is the implementation of a portable multiple-inlet GC/MS system designed for rapid deployment to a scene to perform forensic investigations of clandestine drug manufacturing laboratories. GC/MS has long been held as the 'gold standard' in performing forensic chemical analyses. With the capability of GC/MS to separate and produce a 'chemical fingerprint' of compounds, it is utilized as an essential technique for detecting and positively identifying chemical evidence. Rapid and conclusive on-scene analysis of evidence will assist forensic investigators in collecting only pertinent evidence, thereby reducing the amount of evidence to be transported, reducing chain of custody concerns, reducing costs and hazards, maintaining sample integrity and speeding the completion of the investigative process.

  1. Research on hyperspectral dynamic scene and image sequence simulation

    Science.gov (United States)

    Sun, Dandan; Liu, Fang; Gao, Jiaobo; Sun, Kefeng; Hu, Yu; Li, Yu; Xie, Junhu; Zhang, Lei

    2016-10-01

    This paper presents a simulation method for hyperspectral dynamic scenes and image sequences, intended for hyperspectral equipment evaluation and target detection algorithm development. Because of its high spectral resolution, strong band continuity, anti-interference capability and other advantages, hyperspectral imaging technology has developed rapidly in recent years and is widely used in many areas such as optoelectronic target detection, military defense and remote sensing systems. Digital imaging simulation, as a crucial part of hardware-in-the-loop simulation, can be applied to testing and evaluating hyperspectral imaging equipment with lower development cost and a shorter development period. Meanwhile, visual simulation can produce a large amount of original image data under various conditions for hyperspectral image feature extraction and classification algorithms. Based on a radiation physics model and material characteristic parameters, this paper proposes a digital scene generation method. By building multiple sensor models for different bands and different bandwidths, hyperspectral scenes in the visible, MWIR and LWIR bands, with spectral resolutions of 0.01 μm, 0.05 μm and 0.1 μm, have been simulated in this paper. The final dynamic scenes are realistic and run in real time, at frame rates up to 100 Hz. By saving all the scene gray data from the same viewpoint, an image sequence is obtained. The analysis shows that, in both the infrared and visible bands, the grayscale variations of the simulated hyperspectral images are consistent with the theoretical analysis results.
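
    The radiometric backbone of a simulation like this is band-integrated Planck radiance per sensor model. The sketch below shows that piece only, with a coarse numerical integration and example band edges; it is generic background rather than the paper's radiation physics model.

        import numpy as np

        H = 6.626e-34   # Planck constant (J s)
        C = 2.998e8     # speed of light (m/s)
        KB = 1.381e-23  # Boltzmann constant (J/K)

        def planck_radiance(wavelength_m, temperature_K):
            # Spectral radiance in W / (m^2 sr m)
            a = 2 * H * C ** 2 / wavelength_m ** 5
            return a / (np.exp(H * C / (wavelength_m * KB * temperature_K)) - 1.0)

        def band_radiance(temperature_K, lam_lo_um, lam_hi_um, d_lam_um=0.01):
            """Integrate Planck radiance over a sensor band (e.g. MWIR 3-5 um, LWIR 8-12 um)."""
            lams = np.arange(lam_lo_um, lam_hi_um, d_lam_um) * 1e-6
            return np.trapz(planck_radiance(lams, temperature_K), lams)

        print(band_radiance(300.0, 3.0, 5.0))    # MWIR band radiance of a 300 K surface
        print(band_radiance(300.0, 8.0, 12.0))   # LWIR band radiance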

  2. Review of On-Scene Management of Mass-Casualty Attacks

    Directory of Open Access Journals (Sweden)

    Annelie Holgersson

    2016-02-01

    Full Text Available Background: The scene of a mass-casualty attack (MCA) entails a crime scene, a hazardous space, and a great number of people needing medical assistance. Public transportation has been the target of such attacks and involves a high probability of generating mass casualties. The review aimed to investigate challenges for on-scene responses to MCAs and suggestions made to counter these challenges, with special attention given to attacks on public transportation and associated terminals. Methods: Articles were found through PubMed and Scopus, “relevant articles” as defined by the databases, and a manual search of references. Inclusion criteria were that the article referred to attack(s) and/or a public transportation-related incident and issues concerning formal on-scene response. An appraisal of the articles’ scientific quality was conducted based on an evidence hierarchy model developed for the study. Results: One hundred and five articles were reviewed. Challenges for command and coordination on scene included establishing leadership, inter-agency collaboration, multiple incident sites, and logistics. Safety issues entailed knowledge and use of personal protective equipment, risk awareness and expectations, cordons, dynamic risk assessment, defensive versus offensive approaches, and joining forces. Communication concerns were equipment shortfalls, dialoguing, and providing information. Assessment problems were scene layout and interpreting environmental indicators, as well as understanding setting-driven needs for specialist skills and resources. Triage and treatment difficulties included differing triage systems, directing casualties, uncommon injuries, field hospitals, level of care, and providing psychological and pediatric care. Transportation hardships included scene access, distance to hospitals, and distribution of casualties. Conclusion: Commonly encountered challenges during unintentional incidents were added to during MCAs

  3. Frequency notching applicable to CMOS implementation of WLAN compatible IR-UWB pulse generators

    DEFF Research Database (Denmark)

    Shen, Ming; Mikkelsen, Jan H.; Jiang, Hao

    2012-01-01

    Due to overlapping frequency bands, IEEE 802.11a WLAN and Ultra Wide-Band systems potentially suffer from mutual interference problems. This paper proposes a method for inserting frequency notches into the IR-UWB power spectrum to ensure compatibility with WLAN systems. In contrast to conventional...... approaches where complicated waveform equations are used, the proposed method uses a dual-pulse frequency notching approach to achieve frequency suppression in selected bands. The proposed method offers a solution that is generically applicable to UWB pulse generators using different pulse waveforms...
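
    The dual-pulse idea can be made concrete in a few lines: transmitting two copies of a basic pulse separated by a fixed delay multiplies the spectrum by a comb-like factor whose null can be steered onto the WLAN band. The waveform, delay and band edges below are illustrative assumptions, not the circuit values of the proposed CMOS generator.

        import numpy as np

        fs = 100e9                                   # 100 GS/s simulation grid
        t = np.arange(0, 4e-9, 1 / fs)
        f0, sigma = 4.0e9, 0.15e-9                   # basic UWB pulse parameters (illustrative)
        mono = np.cos(2 * np.pi * f0 * (t - 1e-9)) * np.exp(-((t - 1e-9) / sigma) ** 2)

        tau = 1 / (2 * 5.5e9)                        # delay placing a spectral null near 5.5 GHz
        delayed = np.interp(t - tau, t, mono, left=0.0, right=0.0)
        dual = mono + delayed                        # spectrum factor |1 + exp(-j*2*pi*f*tau)|

        freqs = np.fft.rfftfreq(t.size, 1 / fs)
        spectrum_db = 20 * np.log10(np.abs(np.fft.rfft(dual)) + 1e-12)
        wlan = (freqs > 5.15e9) & (freqs < 5.825e9)  # IEEE 802.11a band (approximate edges)
        print("in-band level relative to peak (dB):", spectrum_db[wlan].max() - spectrum_db.max())

    The null of the dual-pulse factor falls at f = 1/(2*tau), so the notch position is set purely by the inter-pulse delay rather than by reshaping the pulse waveform itself.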

  4. Generation of thermo-acoustic waves from pulsed solar/IR radiation

    Science.gov (United States)

    Rahman, Aowabin

    Acoustic waves could potentially be used in a wide range of engineering applications; however, the high energy consumption in generating acoustic waves from electrical energy and the cost associated with the process limit the use of acoustic waves in industrial processes. Acoustic waves converted from solar radiation provide a feasible way of obtaining acoustic energy without relying on conventional nonrenewable energy sources. One of the goals of this thesis project was to experimentally study the conversion of thermal to acoustic energy using pulsed radiation. The experiments were categorized into "indoor" and "outdoor" experiments, each with a separate experimental setup. The indoor experiments used an IR heater to power the thermo-acoustic lasers and were primarily aimed at studying the effect of various experimental parameters on the amplitude of sound waves in the low frequency range (below 130 Hz). The IR radiation was modulated externally using a chopper wheel and then impinged on a porous solid, which was housed inside a thermo-acoustic (TA) converter. A microphone located at a certain distance from the porous solid inside the TA converter detected the acoustic signals. The "outdoor" experiments, which were targeted at TA conversion at comparatively higher frequencies (in the 200 Hz-3 kHz range), used solar energy to power the thermo-acoustic laser. The amplitudes (in RMS) of the thermo-acoustic signals obtained in experiments using the IR heater as the radiation source were in the 80-100 dB range. The frequency of the acoustic waves corresponded to the frequency of interception of the radiation beam by the chopper. The amplitudes of the acoustic waves were influenced by several factors, including the chopping frequency, the magnitude of radiation flux, the type of porous material, the length of porous material, external heating of the TA converter housing, the location of the microphone within the air column, and the design of the TA converter. The time-dependent profile of the thermo-acoustic signals

  5. Reconstruction and simplification of urban scene models based on oblique images

    Science.gov (United States)

    Liu, J.; Guo, B.

    2014-08-01

    We describe multi-view stereo reconstruction and simplification algorithms for urban scene models based on oblique images. The complexity, diversity, and density of the urban scene increase the difficulty of building city models from oblique images. However, many flat surfaces exist in the urban scene. One of our key contributions is a dense matching algorithm based on Self-Adaptive Patches designed for urban scenes. The basic idea of match propagation based on Self-Adaptive Patches is to build patches centred on seed points which are already matched. The extent and shape of the patches adapt to the objects of the urban scene automatically: when the surface is flat, the extent of the patch becomes bigger, while when the surface is very rough, the extent of the patch becomes smaller. The other contribution is that the mesh generated by Graph Cuts is a 2-manifold surface that satisfies the half-edge data structure. This is achieved by clustering and re-marking tetrahedrons in the s-t graph. The purpose of obtaining a 2-manifold surface is to simplify the mesh with an edge-collapse algorithm which preserves and highlights the features of buildings.

  6. Conceptual Design Study of Nb(3)Sn Low-beta Quadrupoles for 2nd Generation LHC IRs

    Science.gov (United States)

    Zlobin, A. V.; Ambrosio, G.; Andreev, N.; Barzi, E.; Bauer, P.

    2002-10-01

    Conceptual designs of 90-mm aperture high-gradient quadrupoles based on the Nb3Sn superconductor are being developed at Fermilab for possible 2nd generation IRs with similar optics to the current low-beta insertions. Magnet designs and results of magnetic, mechanical, thermal and quench protection analysis for these magnets are presented and discussed.

  7. Conceptual design study of Nb3Sn low-beta quadrupoles for 2nd generation LHC IRs

    International Nuclear Information System (INIS)

    Alexander V Zlobin et al.

    2002-01-01

    Conceptual designs of 90-mm aperture high-gradient quadrupoles based on the Nb3Sn superconductor are being developed at Fermilab for possible 2nd generation IRs with similar optics to the current low-beta insertions. Magnet designs and results of magnetic, mechanical, thermal and quench protection analysis for these magnets are presented and discussed

  8. Associative Processing Is Inherent in Scene Perception

    Science.gov (United States)

    Aminoff, Elissa M.; Tarr, Michael J.

    2015-01-01

    How are complex visual entities such as scenes represented in the human brain? More concretely, along what visual and semantic dimensions are scenes encoded in memory? One hypothesis is that global spatial properties provide a basis for categorizing the neural response patterns arising from scenes. In contrast, non-spatial properties, such as single objects, also account for variance in neural responses. The list of critical scene dimensions has continued to grow—sometimes in a contradictory manner—coming to encompass properties such as geometric layout, big/small, crowded/sparse, and three-dimensionality. We demonstrate that these dimensions may be better understood within the more general framework of associative properties. That is, across both the perceptual and semantic domains, features of scene representations are related to one another through learned associations. Critically, the components of such associations are consistent with the dimensions that are typically invoked to account for scene understanding and its neural bases. Using fMRI, we show that non-scene stimuli displaying novel associations across identities or locations recruit putatively scene-selective regions of the human brain (the parahippocampal/lingual region, the retrosplenial complex, and the transverse occipital sulcus/occipital place area). Moreover, we find that the voxel-wise neural patterns arising from these associations are significantly correlated with the neural patterns arising from everyday scenes, providing critical evidence as to whether the same encoding principles underlie both types of processing. These neuroimaging results provide evidence for the hypothesis that the neural representation of scenes is better understood within the broader theoretical framework of associative processing. In addition, the results demonstrate a division of labor that arises across scene-selective regions when processing associations and scenes, providing better understanding of the functional

  9. Beyond scene gist: Objects guide search more than scene background.

    Science.gov (United States)

    Koehler, Kathryn; Eckstein, Miguel P

    2017-06-01

    Although the facilitation of visual search by contextual information is well established, there is little understanding of the independent contributions of different types of contextual cues in scenes. Here we manipulated 3 types of contextual information: object co-occurrence, multiple object configurations, and background category. We isolated the benefits of each contextual cue to target detectability, its impact on decision bias, confidence, and the guidance of eye movements. We find that object-based information guides eye movements and facilitates perceptual judgments more than scene background. The degree of guidance and facilitation of each contextual cue can be related to its inherent informativeness about the target spatial location as measured by human explicit judgments about likely target locations. Our results improve the understanding of the contributions of distinct contextual scene components to search and suggest that the brain's utilization of cues to guide eye movements is linked to the cue's informativeness about the target's location. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  10. Stages As Models of Scene Geometry

    NARCIS (Netherlands)

    Nedović, V.; Smeulders, A.W.M.; Redert, A.; Geusebroek, J.M.

    2010-01-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently,

  11. When Does Repeated Search in Scenes Involve Memory? Looking at versus Looking for Objects in Scenes

    Science.gov (United States)

    Vo, Melissa L. -H.; Wolfe, Jeremy M.

    2012-01-01

    One might assume that familiarity with a scene or previous encounters with objects embedded in a scene would benefit subsequent search for those items. However, in a series of experiments we show that this is not the case: When participants were asked to subsequently search for multiple objects in the same scene, search performance remained…

  12. Forensic 3D Scene Reconstruction

    International Nuclear Information System (INIS)

    LITTLE, CHARLES Q.; PETERS, RALPH R.; RIGDON, J. BRIAN; SMALL, DANIEL E.

    1999-01-01

    Traditionally law enforcement agencies have relied on basic measurement and imaging tools, such as tape measures and cameras, in recording a crime scene. A disadvantage of these methods is that they are slow and cumbersome. The development of a portable system that can rapidly record a crime scene with current camera imaging, 3D geometric surface maps, and contribute quantitative measurements such as accurate relative positioning of crime scene objects, would be an asset to law enforcement agents in collecting and recording significant forensic data. The purpose of this project is to develop a feasible prototype of a fast, accurate, 3D measurement and imaging system that would support law enforcement agents to quickly document and accurately record a crime scene

  13. Planarity constrained multi-view depth map reconstruction for urban scenes

    Science.gov (United States)

    Hou, Yaolin; Peng, Jianwei; Hu, Zhihua; Tao, Pengjie; Shan, Jie

    2018-05-01

    Multi-view depth map reconstruction is regarded as a suitable approach for 3D generation of large-scale scenes due to its flexibility and scalability. However, there are challenges when this technique is applied to urban scenes where apparent man-made regular shapes may present. To address this need, this paper proposes a planarity constrained multi-view depth (PMVD) map reconstruction method. Starting with image segmentation and feature matching for each input image, the main procedure is iterative optimization under the constraints of planar geometry and smoothness. A set of candidate local planes are first generated by an extended PatchMatch method. The image matching costs are then computed and aggregated by an adaptive-manifold filter (AMF), whereby the smoothness constraint is applied to adjacent pixels through belief propagation. Finally, multiple criteria are used to eliminate image matching outliers. (Vertical) aerial images, oblique (aerial) images and ground images are used for qualitative and quantitative evaluations. The experiments demonstrated that the PMVD outperforms the popular multi-view depth map reconstruction with an accuracy two times better for the aerial datasets and achieves an outcome comparable to the state-of-the-art for ground images. As expected, PMVD is able to preserve the planarity for piecewise flat structures in urban scenes and restore the edges in depth discontinuous areas.
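
    Candidate local planes of the kind generated above are typically scored by warping a patch from one view into another through the plane-induced homography. The sketch below illustrates only that step; it is not the PMVD code, and the camera model, the SAD cost, and the plane convention (n^T X = d in the first camera frame) are assumptions made for illustration.

```python
import numpy as np

def plane_induced_homography(K1, K2, R, t, n, d):
    """Homography mapping pixels in view 1 to view 2 for a candidate local plane.

    The plane is written n^T X = d in the first camera's frame (n a unit normal,
    d its distance from the first camera centre); the second camera's pose is
    X2 = R X1 + t.
    """
    H = K2 @ (R + np.outer(t, n) / d) @ np.linalg.inv(K1)
    return H / H[2, 2]

def warp_cost(img1, img2, H, u, v, half=3):
    """Sum of absolute differences between a patch around (u, v) in view 1 and its
    plane-induced projection into view 2 (nearest-neighbour sampling for brevity).
    The patch around (u, v) is assumed to lie inside img1."""
    h2, w2 = img2.shape
    cost, count = 0.0, 0
    for dv in range(-half, half + 1):
        for du in range(-half, half + 1):
            p = H @ np.array([u + du, v + dv, 1.0])
            x, y = int(round(p[0] / p[2])), int(round(p[1] / p[2]))
            if 0 <= x < w2 and 0 <= y < h2:
                cost += abs(float(img1[v + dv, u + du]) - float(img2[y, x]))
                count += 1
    return cost / max(count, 1)
```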

  14. A comparison of directed search target detection versus in-scene target detection in Worldview-2 datasets

    Science.gov (United States)

    Grossman, S.

    2015-05-01

    Since the events of September 11, 2001, the intelligence focus has moved from large order-of-battle targets to small targets of opportunity. Additionally, the business community has discovered the use of remotely sensed data to anticipate demand and derive data on their competition. This requires the finer spectral and spatial fidelity now available to recognize those targets. This work hypothesizes that directed searches using calibrated data perform at least as well as in-scene, manually intensive target detection searches. It uses calibrated Worldview-2 multispectral images with NEF-generated signatures and standard detection algorithms to compare bespoke directed search capabilities against ENVI™ in-scene search capabilities. Multiple execution runs are performed at increasing thresholds to generate detection rates. These rates are plotted and statistically analyzed. While individual head-to-head comparison results vary, 88% of the directed searches performed at least as well as in-scene searches, with 50% clearly outperforming in-scene methods. The results strongly support the premise that directed searches perform at least as well as comparable in-scene searches.
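
    The threshold sweep described above (running the detector at increasing thresholds and tabulating detection rates) is easy to sketch. The code below is a generic illustration with made-up data, not the study's ENVI or NEF workflow; the array names and threshold grid are assumptions.

```python
import numpy as np

def detection_rates(scores, truth, thresholds):
    """Detection and false-alarm rates for a detector score image at several thresholds.

    `scores` holds per-pixel detector outputs and `truth` is a boolean target mask;
    both are flattened before comparison.
    """
    scores, truth = np.ravel(scores), np.ravel(truth).astype(bool)
    rates = []
    for t in thresholds:
        hits = scores >= t
        pd = np.count_nonzero(hits & truth) / max(truth.sum(), 1)        # detection rate
        pfa = np.count_nonzero(hits & ~truth) / max((~truth).sum(), 1)   # false-alarm rate
        rates.append((t, pd, pfa))
    return rates

# Example: sweep ten thresholds over a synthetic score image with one target block.
rng = np.random.default_rng(0)
scores = rng.normal(size=(64, 64))
truth = np.zeros((64, 64), bool)
truth[20:25, 30:35] = True
scores[truth] += 2.0   # targets score higher on average
for t, pd, pfa in detection_rates(scores, truth, np.linspace(-1, 3, 10)):
    print(f"threshold={t:+.2f}  Pd={pd:.2f}  Pfa={pfa:.3f}")
```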

  15. Interactive Procedural Modelling of Coherent Waterfall Scenes

    OpenAIRE

    Emilien , Arnaud; Poulin , Pierre; Cani , Marie-Paule; Vimont , Ulysse

    2015-01-01

    Combining procedural generation and user control is a fundamental challenge for the interactive design of natural scenery. This is particularly true for modelling complex waterfall scenes where, in addition to taking charge of geometric details, an ideal tool should also provide a user with the freedom to shape the running streams and falls, while automatically maintaining physical plausibility in terms of flow network, embedding into the terrain, and visual aspects of...

  16. Integration of virtual and real scenes within an integral 3D imaging environment

    Science.gov (United States)

    Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm

    2002-11-01

    The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television that avoids adverse psychological effects. To create truly fascinating three-dimensional television programmes, a virtual studio is required that generates, edits, and integrates 3D content involving both virtual and real scenes. The paper presents, for the first time, the procedures, factors, and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method for computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described: each micro-lens that captures a different elemental image of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method based on disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision are proposed and verified.
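
    Depth-from-disparity with a colour SSD cost, as mentioned above, can be illustrated by a basic block-matching sketch over a single horizontal baseline. This is not the authors' multiple-baseline implementation; the patch size, search range, and brute-force loops are simplifying assumptions.

```python
import numpy as np

def colour_ssd_disparity(left, right, patch=5, max_disp=16):
    """Per-pixel disparity by minimising the colour SSD over a horizontal search range.

    `left` and `right` are HxWx3 float arrays (two elemental images on one baseline).
    Returns an HxW integer disparity map; border pixels are left at zero.
    """
    h, w, _ = left.shape
    half = patch // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            ref = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(max_disp):
                cand = right[y - half:y + half + 1, x - d - half:x - d + half + 1]
                ssd = np.sum((ref - cand) ** 2)   # SSD over all three colour channels
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```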

  17. Scenes of the self, and trance

    Directory of Open Access Journals (Sweden)

    Jan M. Broekman

    2014-02-01

    Full Text Available Trance shows the Self as a process involved in all sorts and forms of life. A Western perspective on a self and its reifying tendencies is only one (or one series of those variations. The process character of the self does not allow any coherent theory but shows, in particular when confronted with trance, its variability in all regards. What is more: the Self is always first on the scene of itself―a situation in which it becomes a sign for itself. That particular semiotic feature is again not a unified one but leads, as the Self in view of itself does, to series of scenes with changing colors, circumstances and environments. Our first scene “Beyond Monotheism” shows semiotic importance in that a self as determining component of a trance-phenomenon must abolish its own referent and seems not able to answer the question, what makes trance a trance. The Pizzica is an example here. Other social features of trance appear in the second scene, US post traumatic psychological treatments included. Our third scene underlines structures of an unfolding self: beginning with ‘split-ego’ conclusions, a self’s engenderment appears dependent on linguistic events and on spoken words in the first place. A fourth scene explores that theme and explains modern forms of an ego ―in particular those inherent to ‘citizenship’ or a ‘corporation’. The legal consequences are concentrated in the fifth scene, which considers a legal subject by revealing its ‘standing’. Our sixth and final scene pertains to the relation between trance and commerce. All scenes tie together and show parallels between Pizzica, rights-based behavior, RAVE music versus disco, commerce and trance; they demonstrate the meaning of trance as a multifaceted social phenomenon.

  18. Deconstructing visual scenes in cortex: gradients of object and spatial layout information.

    Science.gov (United States)

    Harel, Assaf; Kravitz, Dwight J; Baker, Chris I

    2013-04-01

    Real-world visual scenes are complex, cluttered, and heterogeneous stimuli engaging scene- and object-selective cortical regions including parahippocampal place area (PPA), retrosplenial complex (RSC), and lateral occipital complex (LOC). To understand the unique contribution of each region to distributed scene representations, we generated predictions based on a neuroanatomical framework adapted from monkey and tested them using minimal scenes in which we independently manipulated both spatial layout (open, closed, and gradient) and object content (furniture, e.g., bed, dresser). Commensurate with its strong connectivity with posterior parietal cortex, RSC evidenced strong spatial layout information but no object information, and its response was not even modulated by object presence. In contrast, LOC, which lies within the ventral visual pathway, contained strong object information but no background information. Finally, PPA, which is connected with both the dorsal and the ventral visual pathway, showed information about both objects and spatial backgrounds and was sensitive to the presence or absence of either. These results suggest that 1) LOC, PPA, and RSC have distinct representations, emphasizing different aspects of scenes, 2) the specific representations in each region are predictable from their patterns of connectivity, and 3) PPA combines both spatial layout and object information as predicted by connectivity.

  19. Characterization, propagation, and simulation of sources and backgrounds; Proceedings of the Meeting, Orlando, FL, Apr. 2, 3, 1991

    Science.gov (United States)

    Watkins, Wendell R.; Clement, Dieter

    The present conference discusses the design of IR imaging radiometers, IR clutter measurements of marine backgrounds, a global evaluation of thermal IR countermeasures, the estimation of scene-correlation lengths, the dimension and lacunarity measurement of IR images using Hilbert scanning, modeling the time-dependent obscuration in simulated imaging of dust and smoke clouds, and the thermal and radiometric modeling of terrain backgrounds. Also discussed are the simulation of partially obscured scenes using the 'radiosity' method, dynamic sea-image generation, atmospheric propagation effects on pattern recognition by neural networks, a thermal model for real-time textured IR background simulation, and interferometric measurements of a high velocity mixing/shear layer. (No individual items are abstracted in this volume)

  20. Lateralized discrimination of emotional scenes in peripheral vision.

    Science.gov (United States)

    Calvo, Manuel G; Rodríguez-Chinea, Sandra; Fernández-Martín, Andrés

    2015-03-01

    This study investigates whether there is lateralized processing of emotional scenes in the visual periphery, in the absence of eye fixations; and whether this varies with emotional valence (pleasant vs. unpleasant), specific emotional scene content (babies, erotica, human attack, mutilation, etc.), and sex of the viewer. Pairs of emotional (positive or negative) and neutral photographs were presented for 150 ms peripherally (≥6.5° away from fixation). Observers judged on which side the emotional picture was located. Low-level image properties, scene visual saliency, and eye movements were controlled. Results showed that (a) correct identification of the emotional scene exceeded the chance level; (b) performance was more accurate and faster when the emotional scene appeared in the left than in the right visual field; (c) lateralization was equivalent for females and males for pleasant scenes, but was greater for females and unpleasant scenes; and (d) lateralization occurred similarly for different emotional scene categories. These findings reveal discrimination between emotional and neutral scenes, and right brain hemisphere dominance for emotional processing, which is modulated by sex of the viewer and scene valence, and suggest that coarse affective significance can be extracted in peripheral vision.

  1. Teaching IR to Medical Students: A Call to Action.

    Science.gov (United States)

    Lee, Aoife M; Lee, Michael J

    2018-02-01

    Interventional radiology (IR) has grown rapidly over the last 20 years and is now an essential component of modern medicine. Despite IR's increasing penetration and reputation in healthcare systems, IR is poorly taught, if taught at all, in most medical schools. Medical students are the referrers of tomorrow and potential IR recruits and deserve to be taught IR by expert IRs. The lack of formal IR teaching curricula in many medical schools needs to be addressed urgently for the continued development and dissemination of, particularly acute, IR services throughout Europe. We call on IRs to take up the baton to teach IR to the next generation of doctors.

  2. Scene grammar shapes the way we interact with objects, strengthens memories, and speeds search.

    Science.gov (United States)

    Draschkow, Dejan; Võ, Melissa L-H

    2017-11-28

    Predictions of environmental rules (here referred to as "scene grammar") can come in different forms: seeing a toilet in a living room would violate semantic predictions, while finding a toilet brush next to the toothpaste would violate syntactic predictions. The existence of such predictions has usually been investigated by showing observers images containing such grammatical violations. Conversely, the generative process of creating an environment according to one's scene grammar and its effects on behavior and memory has received little attention. In a virtual reality paradigm, we either instructed participants to arrange objects according to their scene grammar or against it. Subsequently, participants' memory for the arrangements was probed using a surprise recall (Exp1), or repeated search (Exp2) task. As a result, participants' construction behavior showed strategic use of larger, static objects to anchor the location of smaller objects which are generally the goals of everyday actions. Further analysis of this scene construction data revealed possible commonalities between the rules governing word usage in language and object usage in naturalistic environments. Taken together, we revealed some of the building blocks of scene grammar necessary for efficient behavior, which differentially influence how we interact with objects and what we remember about scenes.

  3. Stages as models of scene geometry.

    Science.gov (United States)

    Nedović, Vladimir; Smeulders, Arnold W M; Redert, André; Geusebroek, Jan-Mark

    2010-09-01

    Reconstruction of 3D scene geometry is an important element for scene understanding, autonomous vehicle and robot navigation, image retrieval, and 3D television. We propose accounting for the inherent structure of the visual world when trying to solve the scene reconstruction problem. Consequently, we identify geometric scene categorization as the first step toward robust and efficient depth estimation from single images. We introduce 15 typical 3D scene geometries called stages, each with a unique depth profile, which roughly correspond to a large majority of broadcast video frames. Stage information serves as a first approximation of global depth, narrowing down the search space in depth estimation and object localization. We propose different sets of low-level features for depth estimation, and perform stage classification on two diverse data sets of television broadcasts. Classification results demonstrate that stages can often be efficiently learned from low-dimensional image representations.

  4. Generation of realistic scene using illuminant estimation and mixed chromatic adaptation

    Science.gov (United States)

    Kim, Jae-Chul; Hong, Sang-Gi; Kim, Dong-Ho; Park, Jong-Hyun

    2003-12-01

    An algorithm for combining a real image with a virtual model is proposed to increase the realism of synthesized images. Current methods for synthesizing a real image with a virtual model rely on surface reflection models and various geometric techniques, but they do not sufficiently consider the characteristics of the various illuminants in the real image. In addition, although chromatic adaptation plays a vital role in accommodating different illuminants across the two viewing conditions, it is not taken into account by existing methods. It is therefore hard to obtain high-quality synthesized images. In this paper, we propose a two-phase image synthesis algorithm. First, the surface reflectance of the maximum high-light region (MHR) is estimated using the three eigenvectors obtained from principal component analysis (PCA) applied to the surface reflectances of 1269 Munsell samples. The combined spectral value of the MHR, i.e., the product of surface reflectance and the spectral power distribution (SPD) of an illuminant, is then estimated using the three eigenvectors obtained from PCA applied to the products of the surface reflectances of the 1269 Munsell samples and the SPDs of four CIE Standard Illuminants (A, C, D50, D65). By dividing the average combined spectral values of the MHR by the average surface reflectances of the MHR, we can estimate the illuminant of a real image. Second, mixed chromatic adaptation (S-LMS) using an estimated and an external illuminant is applied to the virtual-model image. For evaluating the proposed algorithm, experiments with synthetic and real scenes were performed. It was shown that the proposed method is effective in synthesizing real and virtual scenes under various illuminants.
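
    The final step of the illuminant-estimation phase (dividing the average combined spectrum of the MHR by its average surface reflectance) can be sketched as follows. The PCA basis matrices and per-pixel coefficients are assumed to have been computed elsewhere; this illustrates only the division step, not the paper's full algorithm.

```python
import numpy as np

def estimate_illuminant(refl_basis, refl_coeffs, comb_basis, comb_coeffs):
    """Estimate the scene illuminant SPD from the maximum-highlight region (MHR).

    refl_basis, comb_basis : (n_wavelengths, 3) PCA eigenvector matrices for the
        Munsell reflectances and for the reflectance-times-illuminant products.
    refl_coeffs, comb_coeffs : (n_pixels, 3) per-pixel coefficients for the MHR,
        assumed to have been recovered by a separate fitting step.
    """
    refl = refl_coeffs @ refl_basis.T           # reconstructed surface reflectances
    comb = comb_coeffs @ comb_basis.T           # reconstructed combined spectra
    mean_refl = refl.mean(axis=0)
    mean_comb = comb.mean(axis=0)
    # Dividing the average combined spectrum by the average reflectance yields
    # an estimate of the illuminant's spectral power distribution.
    return mean_comb / np.clip(mean_refl, 1e-6, None)
```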

  5. An Analysis of the Max-Min Texture Measure.

    Science.gov (United States)

    1982-01-01

    Confusion matrices are tabulated for Scenes A, B, C, E, and H in both the PANC and IR bands.

  6. Land-use Scene Classification in High-Resolution Remote Sensing Images by Multiscale Deeply Described Correlatons

    Science.gov (United States)

    Qi, K.; Qingfeng, G.

    2017-12-01

    With the popular use of High-Resolution Satellite (HRS) images, more and more research effort has been devoted to land-use scene classification. The task remains difficult for HRS images, however, because of their complex backgrounds and the multiple land-cover classes or objects they contain. This article presents a multiscale deeply described correlaton model for land-use scene classification. Specifically, a convolutional neural network is introduced to learn and characterize the local features at different scales. The learnt multiscale deep features are then used to generate visual words. The spatial arrangement of visual words is captured through adaptive vector-quantized correlograms at different scales. Experiments on two publicly available land-use scene datasets demonstrate that the proposed model is compact and yet discriminative for efficient representation of land-use scene images, and achieves classification results competitive with state-of-the-art methods.

  7. Iconic memory for the gist of natural scenes.

    Science.gov (United States)

    Clarke, Jason; Mack, Arien

    2014-11-01

    Does iconic memory contain the gist of multiple scenes? Three experiments were conducted. In the first, four scenes from different basic-level categories were briefly presented in one of two conditions: a cue or a no-cue condition. The cue condition was designed to provide an index of the contents of iconic memory of the display. Subjects were more sensitive to scene gist in the cue condition than in the no-cue condition. In the second, the scenes came from the same basic-level category. We found no difference in sensitivity between the two conditions. In the third, six scenes from different basic level categories were presented in the visual periphery. Subjects were more sensitive to scene gist in the cue condition. These results suggest that scene gist is contained in iconic memory even in the visual periphery; however, iconic representations are not sufficiently detailed to distinguish between scenes coming from the same category. Copyright © 2014 Elsevier Inc. All rights reserved.

  8. HOMA1-IR and HOMA2-IR indexes in identifying insulin resistance and metabolic syndrome: Brazilian Metabolic Syndrome Study (BRAMS).

    Science.gov (United States)

    Geloneze, Bruno; Vasques, Ana Carolina Junqueira; Stabe, Christiane França Camargo; Pareja, José Carlos; Rosado, Lina Enriqueta Frandsen Paez de Lima; Queiroz, Elaine Cristina de; Tambascia, Marcos Antonio

    2009-03-01

    To investigate cut-off values for HOMA1-IR and HOMA2-IR to identify insulin resistance (IR) and metabolic syndrome (MS), and to assess the association of the indexes with components of the MS. Nondiabetic subjects from the Brazilian Metabolic Syndrome Study were studied (n = 1,203, 18 to 78 years). The cut-off values for IR were determined from the 90th percentile in the healthy group (n = 297) and, for MS, a ROC curve was generated for the total sample. In the healthy group, the HOMA-IR indexes were significantly associated with central obesity, triglycerides and total cholesterol. The cut-off values identified for IR were HOMA1-IR > 2.7 and HOMA2-IR > 1.8; for MS, they were HOMA1-IR > 2.3 (sensitivity: 76.8%; specificity: 66.7%) and HOMA2-IR > 1.4 (sensitivity: 79.2%; specificity: 61.2%). The cut-off values identified for the HOMA1-IR and HOMA2-IR indexes have clinical and epidemiological application for identifying IR and MS in Westernized, admixed, multi-ethnic populations.

  9. Statistics of high-level scene context.

    Science.gov (United States)

    Greene, Michelle R

    2013-01-01

    Context is critical for recognizing environments and for searching for objects within them: contextual associations have been shown to modulate reaction time and object recognition accuracy, as well as influence the distribution of eye movements and patterns of brain activations. However, we have not yet systematically quantified the relationships between objects and their scene environments. Here I seek to fill this gap by providing descriptive statistics of object-scene relationships. A total of 48,167 objects were hand-labeled in 3499 scenes using the LabelMe tool (Russell et al., 2008). From these data, I computed a variety of descriptive statistics at three different levels of analysis: the ensemble statistics that describe the density and spatial distribution of unnamed "things" in the scene; the bag of words level where scenes are described by the list of objects contained within them; and the structural level where the spatial distribution and relationships between the objects are measured. The utility of each level of description for scene categorization was assessed through the use of linear classifiers, and the plausibility of each level for modeling human scene categorization is discussed. Of the three levels, ensemble statistics were found to be the most informative (per feature), and also best explained human patterns of categorization errors. Although a bag of words classifier had similar performance to human observers, it had a markedly different pattern of errors. However, certain objects are more useful than others, and ceiling classification performance could be achieved using only the 64 most informative objects. As object location tends not to vary as a function of category, structural information provided little additional information. Additionally, these data provide valuable information on natural scene redundancy that can be exploited for machine vision, and can help the visual cognition community to design experiments guided by statistics…

  10. Semantic Reasoning for Scene Interpretation

    DEFF Research Database (Denmark)

    Jensen, Lars Baunegaard With; Baseski, Emre; Pugeault, Nicolas

    2008-01-01

    In this paper, we propose a hierarchical architecture for representing scenes, covering 2D and 3D aspects of visual scenes as well as the semantic relations between the different aspects. We argue that labeled graphs are a suitable representational framework for this representation and demonstrate…

  11. Improving Remote Sensing Scene Classification by Integrating Global-Context and Local-Object Features

    Directory of Open Access Journals (Sweden)

    Dan Zeng

    2018-05-01

    Full Text Available Recently, many researchers have been dedicated to using convolutional neural networks (CNNs to extract global-context features (GCFs for remote-sensing scene classification. Commonly, accurate classification of scenes requires knowledge about both the global context and local objects. However, unlike the natural images in which the objects cover most of the image, objects in remote-sensing images are generally small and decentralized. Thus, it is hard for vanilla CNNs to focus on both global context and small local objects. To address this issue, this paper proposes a novel end-to-end CNN by integrating the GCFs and local-object-level features (LOFs. The proposed network includes two branches, the local object branch (LOB and global semantic branch (GSB, which are used to generate the LOFs and GCFs, respectively. Then, the concatenation of features extracted from the two branches allows our method to be more discriminative in scene classification. Three challenging benchmark remote-sensing datasets were extensively experimented on; the proposed approach outperformed the existing scene classification methods and achieved state-of-the-art results for all three datasets.
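
    The two-branch idea described above (one branch for global-context features, one for local-object-level features, concatenated before classification) can be sketched with a toy network. The layer sizes, the tiny convolutional stacks, and the class count below are placeholders rather than the architecture evaluated in the paper; PyTorch is used here only as a convenient notation.

```python
import torch
import torch.nn as nn

class TwoBranchSceneNet(nn.Module):
    """Sketch of a two-branch CNN that concatenates global-context and
    local-object features before classification (layer sizes are illustrative)."""

    def __init__(self, num_classes=21):
        super().__init__()
        def conv_stack():
            return nn.Sequential(
                nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.global_branch = conv_stack()   # stands in for the global semantic branch (GSB)
        self.local_branch = conv_stack()    # stands in for the local object branch (LOB)
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, x):
        gcf = self.global_branch(x)                       # global-context features
        lof = self.local_branch(x)                        # local-object-level features
        return self.classifier(torch.cat([gcf, lof], dim=1))

# Toy usage on a random batch of two RGB images.
logits = TwoBranchSceneNet()(torch.randn(2, 3, 224, 224))
```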

  12. The scene and the unseen: manipulating photographs for experiments on change blindness and scene memory: image manipulation for change blindness.

    Science.gov (United States)

    Ball, Felix; Elzemann, Anne; Busch, Niko A

    2014-09-01

    The change blindness paradigm, in which participants often fail to notice substantial changes in a scene, is a popular tool for studying scene perception, visual memory, and the link between awareness and attention. Some of the most striking and popular examples of change blindness have been demonstrated with digital photographs of natural scenes; in most studies, however, much simpler displays, such as abstract stimuli or "free-floating" objects, are typically used. Although simple displays have undeniable advantages, natural scenes remain a very useful and attractive stimulus for change blindness research. To assist researchers interested in using natural-scene stimuli in change blindness experiments, we provide here a step-by-step tutorial on how to produce changes in natural-scene images with a freely available image-processing tool (GIMP). We explain how changes in a scene can be made by deleting objects or relocating them within the scene or by changing the color of an object, in just a few simple steps. We also explain how the physical properties of such changes can be analyzed using GIMP and MATLAB (a high-level scientific programming tool). Finally, we present an experiment confirming that scenes manipulated according to our guidelines are effective in inducing change blindness and demonstrating the relationship between change blindness and the physical properties of the change and inter-individual differences in performance measures. We expect that this tutorial will be useful for researchers interested in studying the mechanisms of change blindness, attention, or visual memory using natural scenes.

  13. Negotiating place and gendered violence in Canada's largest open drug scene.

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-05-01

    Vancouver's Downtown Eastside is home to Canada's largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and 'marginal men' (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants' spatial practices. Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into "dangerous" drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Gendered violence is critical in restricting the geographies of men and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection services, to minimize violence and potential drug

  14. Albedo estimation for scene segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C H; Rosenfeld, A

    1983-03-01

    Standard methods of image segmentation do not take into account the three-dimensional nature of the underlying scene. For example, histogram-based segmentation tacitly assumes that the image intensity is piecewise constant, and this is not true when the scene contains curved surfaces. This paper introduces a method of taking 3d information into account in the segmentation process. The image intensities are adjusted to compensate for the effects of estimated surface orientation; the adjusted intensities can be regarded as reflectivity estimates. When histogram-based segmentation is applied to these new values, the image is segmented into parts corresponding to surfaces of constant reflectivity in the scene. 7 references.
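
    The intensity adjustment described above can be sketched under a Lambertian assumption: dividing each intensity by the cosine of the estimated incidence angle yields an approximate reflectivity (albedo) value that a histogram-based threshold can then segment. The surface normals and light direction are assumed to come from a separate estimation step; none of this is the paper's original code.

```python
import numpy as np

def reflectivity_estimate(intensity, normals, light_dir):
    """Adjust image intensities for estimated surface orientation (Lambertian model).

    intensity : HxW image; normals : HxWx3 unit surface normals (estimated elsewhere);
    light_dir : 3-vector pointing toward the light source. The returned values
    approximate surface reflectivity, so a histogram-based threshold on them groups
    pixels by reflectivity rather than by raw shading.
    """
    light = np.asarray(light_dir, float)
    light = light / np.linalg.norm(light)
    shading = np.clip(normals @ light, 1e-3, None)   # cosine of the incidence angle
    return intensity / shading

def histogram_segment(values, threshold):
    """Two-class segmentation by thresholding the reflectivity values."""
    return (values > threshold).astype(np.uint8)
```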

  15. Pooling Objects for Recognizing Scenes without Examples

    NARCIS (Netherlands)

    Kordumova, S.; Mensink, T.; Snoek, C.G.M.

    2016-01-01

    In this paper we aim to recognize scenes in images without using any scene images as training data. Different from attribute based approaches, we do not carefully select the training classes to match the unseen scene classes. Instead, we propose a pooling over ten thousand of off-the-shelf object

  16. Predicting top-of-atmosphere radiance for arbitrary viewing geometries from the visible to thermal infrared: generalization to arbitrary average scene temperatures

    Science.gov (United States)

    Florio, Christopher J.; Cota, Steve A.; Gaffney, Stephanie K.

    2010-08-01

    In a companion paper presented at this conference we described how The Aerospace Corporation's Parameterized Image Chain Analysis & Simulation SOftware (PICASSO) may be used in conjunction with a limited number of runs of AFRL's MODTRAN4 radiative transfer code, to quickly predict the top-of-atmosphere (TOA) radiance received in the visible through midwave IR (MWIR) by an earth viewing sensor, for any arbitrary combination of solar and sensor elevation angles. The method is particularly useful for large-scale scene simulations where each pixel could have a unique value of reflectance/emissivity and temperature, making the run-time required for direct prediction via MODTRAN4 prohibitive. In order to be self-consistent, the method described requires an atmospheric model (defined, at a minimum, as a set of vertical temperature, pressure and water vapor profiles) that is consistent with the average scene temperature. MODTRAN4 provides only six model atmospheres, ranging from sub-arctic winter to tropical conditions - too few to cover with sufficient temperature resolution the full range of average scene temperatures that might be of interest. Model atmospheres consistent with intermediate temperature values can be difficult to come by, and in any event, their use would be too cumbersome for use in trade studies involving a large number of average scene temperatures. In this paper we describe and assess a method for predicting TOA radiance for any arbitrary average scene temperature, starting from only a limited number of model atmospheres.

  17. Neural Scene Segmentation by Oscillatory Correlation

    National Research Council Canada - National Science Library

    Wang, DeLiang

    2000-01-01

    The segmentation of a visual scene into a set of coherent patterns (objects) is a fundamental aspect of perception, which underlies a variety of important tasks such as figure/ground segregation, and scene analysis...

  18. HOMA1-IR and HOMA2-IR indexes in identifying insulin resistance and metabolic syndrome - Brazilian Metabolic Syndrome Study (BRAMS)

    OpenAIRE

    Geloneze, B; Vasques, ACJ; Stabe, CFC; Pareja, JC; Rosado, LEFPD; de Queiroz, EC; Tambascia, MA

    2009-01-01

    Objective: To investigate cut-off values for HOMA1-IR and HOMA2-IR to identify insulin resistance (IR) and metabolic syndrome (MS), and to assess the association of the indexes with components of the MS. Methods: Nondiabetic subjects from the Brazilian Metabolic Syndrome Study were studied (n = 1,203, 18 to 78 years). The cut-off values for IR were determined from the 90th percentile in the healthy group (n = 297) and, for MS, a ROC curve was generated for the total sample. Results: In the he...

  19. A hierarchical inferential method for indoor scene classification

    Directory of Open Access Journals (Sweden)

    Jiang Jingzhe

    2017-12-01

    Full Text Available Indoor scene classification forms a basis for scene interaction for service robots. The task is challenging because the layout and decoration of a scene vary considerably. Previous studies on knowledge-based methods commonly ignore the importance of visual attributes when constructing the knowledge base. These shortcomings restrict the performance of classification. The structure of a semantic hierarchy was proposed to describe similarities of different parts of scenes in a fine-grained way. Besides the commonly used semantic features, visual attributes were also introduced to construct the knowledge base. Inspired by the processes of human cognition and the characteristics of indoor scenes, we proposed an inferential framework based on the Markov logic network. The framework is evaluated on a popular indoor scene dataset, and the experimental results demonstrate its effectiveness.

  20. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem

    2015-04-16

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding unreliable ones. This paper studies how reliable depth values can be used to correct the unreliable ones, and how to complete (or extend) the available depth data beyond the raw measurements of the sensor (i.e. infer depth at pixels with unknown depth values), given a prior model on the 3D scene. We consider piecewise planar environments in this paper, since many indoor scenes with man-made objects can be modeled as such. We propose a framework that uses the RGB-D sensor’s noise profile to adaptively and robustly fit plane segments (e.g. floor and ceiling) and iteratively complete the depth map, when possible. Depth completion is formulated as a discrete labeling problem (MRF) with hard constraints and solved efficiently using graph cuts. To regularize this problem, we exploit 3D and appearance cues that encourage pixels to take on depth values that will be compatible in 3D to the piecewise planar assumption. Extensive experiments, on a new large-scale and challenging dataset, show that our approach results in more accurate depth maps (with 20 % more depth values) than those recorded by the RGB-D sensor. Additional experiments on the NYUv2 dataset show that our method generates more 3D aware depth. These generated depth maps can also be used to improve the performance of a state-of-the-art RGB-D SLAM method.
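
    A much-simplified stand-in for the plane-fitting and depth-completion steps is sketched below: a RANSAC plane fit to reliable 3D points, followed by filling unknown pixels by intersecting their camera rays with the fitted plane. The MRF/graph-cut labeling and the sensor noise model used in the paper are omitted, and the function names and tolerances are illustrative assumptions.

```python
import numpy as np

def ransac_plane(points, iters=200, tol=0.02, rng=None):
    """Fit a plane (n, d) with n . p + d = 0 to Nx3 points using RANSAC."""
    rng = np.random.default_rng(rng)
    best_inliers, best_model = None, None
    for _ in range(iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue                    # degenerate (collinear) sample
        n = n / norm
        d = -n @ sample[0]
        inliers = np.abs(points @ n + d) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

def complete_depth_on_plane(depth, mask, K, plane):
    """Fill unknown depth (mask == True) by intersecting camera rays with a plane.

    K is the 3x3 intrinsic matrix; plane is (n, d) in camera coordinates."""
    n, d = plane
    vv, uu = np.nonzero(mask)
    rays = np.linalg.inv(K) @ np.vstack([uu, vv, np.ones_like(uu)])
    t = -d / (n @ rays)                 # ray parameter where each ray meets the plane
    filled = depth.copy()
    filled[vv, uu] = t * rays[2]        # depth is the z-component of t * ray
    return filled
```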

  1. a Low-Cost Panoramic Camera for the 3d Documentation of Contaminated Crime Scenes

    Science.gov (United States)

    Abate, D.; Toschi, I.; Sturdy-Colls, C.; Remondino, F.

    2017-11-01

    Crime scene documentation is a fundamental task which has to be undertaken in a fast, accurate and reliable way, highlighting evidence which can be further used for ensuring justice for victims and for guaranteeing the successful prosecution of perpetrators. The main focus of this paper is on the documentation of a typical crime scene and on the rapid recording of any possible contamination that could have influenced its original appearance. A 3D reconstruction of the environment is first generated by processing panoramas acquired with the low-cost Ricoh Theta 360 camera, and further analysed to highlight potentials and limits of this emerging and consumer-grade technology. Then, a methodology is proposed for the rapid recording of changes occurring between the original and the contaminated crime scene. The approach is based on an automatic 3D feature-based data registration, followed by a cloud-to-cloud distance computation, given as input the 3D point clouds generated before and after e.g. the misplacement of evidence. All the algorithms adopted for panoramas pre-processing, photogrammetric 3D reconstruction, 3D geometry registration and analysis, are presented and currently available in open-source or low-cost software solutions.
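
    The registration-then-distance comparison described above can be sketched with an off-the-shelf point-cloud library. Open3D is used here only as an example (the paper does not prescribe it), a rough initial alignment is assumed before ICP refinement, and the voxel size and distance threshold are placeholder values.

```python
import numpy as np
import open3d as o3d

def register_and_compare(original_pcd, contaminated_pcd, voxel=0.01, max_dist=0.05):
    """Align the 'contaminated' scan to the original one and measure change.

    A coarse pre-alignment is assumed; ICP then refines it, and cloud-to-cloud
    distances highlight points that moved between scans (e.g. misplaced evidence).
    """
    src = contaminated_pcd.voxel_down_sample(voxel)
    tgt = original_pcd.voxel_down_sample(voxel)
    icp = o3d.pipelines.registration.registration_icp(
        src, tgt, max_dist, np.eye(4),
        o3d.pipelines.registration.TransformationEstimationPointToPoint())
    contaminated_pcd.transform(icp.transformation)
    # Distance from every point of the contaminated scan to the original scan.
    distances = np.asarray(contaminated_pcd.compute_point_cloud_distance(original_pcd))
    changed = distances > max_dist      # points likely belonging to moved evidence
    return icp.transformation, distances, changed
```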

  2. Visual search for arbitrary objects in real scenes

    Science.gov (United States)

    Alvarez, George A.; Rosenholtz, Ruth; Kuzmova, Yoana I.; Sherman, Ashley M.

    2011-01-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4–6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the “functional set size” of items that could possibly be the target. PMID:21671156

  3. Visual search for arbitrary objects in real scenes.

    Science.gov (United States)

    Wolfe, Jeremy M; Alvarez, George A; Rosenholtz, Ruth; Kuzmova, Yoana I; Sherman, Ashley M

    2011-08-01

    How efficient is visual search in real scenes? In searches for targets among arrays of randomly placed distractors, efficiency is often indexed by the slope of the reaction time (RT) × Set Size function. However, it may be impossible to define set size for real scenes. As an approximation, we hand-labeled 100 indoor scenes and used the number of labeled regions as a surrogate for set size. In Experiment 1, observers searched for named objects (a chair, bowl, etc.). With set size defined as the number of labeled regions, search was very efficient (~5 ms/item). When we controlled for a possible guessing strategy in Experiment 2, slopes increased somewhat (~15 ms/item), but they were much shallower than search for a random object among other distinctive objects outside of a scene setting (Exp. 3: ~40 ms/item). In Experiments 4-6, observers searched repeatedly through the same scene for different objects. Increased familiarity with scenes had modest effects on RTs, while repetition of target items had large effects (>500 ms). We propose that visual search in scenes is efficient because scene-specific forms of attentional guidance can eliminate most regions from the "functional set size" of items that could possibly be the target.

  4. Scene analysis in the natural environment

    DEFF Research Database (Denmark)

    Lewicki, Michael S; Olshausen, Bruno A; Surlykke, Annemarie

    2014-01-01

    The problem of scene analysis has been studied in a number of different fields over the past decades. These studies have led to important insights into problems of scene analysis, but not all of these insights are widely appreciated, and there remain critical shortcomings in current approaches that … ill-posed problems, (2) the ability to integrate and store information across time and modality, (3) efficient recovery and representation of 3D scene structure, and (4) the use of optimal motor actions for acquiring information to progress toward behavioral goals.

  5. The time course of natural scene perception with reduced attention.

    Science.gov (United States)

    Groen, Iris I A; Ghebreab, Sennay; Lamme, Victor A F; Scholte, H Steven

    2016-02-01

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the neural correlates of this ability by examining how attention affects electrophysiological markers of scene perception. In two electro-encephalography (EEG) experiments, human subjects categorized real-world scenes as manmade or natural (full attention condition) or performed tasks on unrelated stimuli in the center or periphery of the scenes (reduced attention conditions). Scene processing was examined in two ways: traditional trial averaging was used to assess the presence of a categorical manmade/natural distinction in event-related potentials, whereas single-trial analyses assessed whether EEG activity was modulated by scene statistics that are diagnostic of naturalness of individual scenes. The results indicated that evoked activity up to 250 ms was unaffected by reduced attention, showing intact categorical differences between manmade and natural scenes and strong modulations of single-trial activity by scene statistics in all conditions. Thus initial processing of both categorical and individual scene information remained intact with reduced attention. Importantly, however, attention did have profound effects on later evoked activity; full attention on the scene resulted in prolonged manmade/natural differences, increased neural sensitivity to scene statistics, and enhanced scene memory. These results show that initial processing of real-world scene information is intact with diminished attention but that the depth of processing of this information does depend on attention. Copyright © 2016 the American Physiological Society.

  6. Transferring Deep Convolutional Neural Networks for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2015-11-01

    Full Text Available Learning efficient image representations is at the core of the scene classification task of remote sensing imagery. The existing methods for solving the scene classification task, based on either feature coding approaches with low-level hand-engineered features or unsupervised feature learning, can only generate mid-level image features with limited representative ability, which essentially prevents them from achieving better performance. Recently, the deep convolutional neural networks (CNNs, which are hierarchical architectures trained on large-scale datasets, have shown astounding performance in object recognition and detection. However, it is still not clear how to use these deep convolutional neural networks for high-resolution remote sensing (HRRS scene classification. In this paper, we investigate how to transfer features from these successfully pre-trained CNNs for HRRS scene classification. We propose two scenarios for generating image features via extracting CNN features from different layers. In the first scenario, the activation vectors extracted from fully-connected layers are regarded as the final image features; in the second scenario, we extract dense features from the last convolutional layer at multiple scales and then encode the dense features into global image features through commonly used feature coding approaches. Extensive experiments on two public scene classification datasets demonstrate that the image features obtained by the two proposed scenarios, even with a simple linear classifier, can result in remarkable performance and improve the state-of-the-art by a significant margin. The results reveal that the features from pre-trained CNNs generalize well to HRRS datasets and are more expressive than the low- and mid-level features. Moreover, we tentatively combine features extracted from different CNN models for better performance.
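
    The first scenario above (treating the activations of a late layer of a pre-trained CNN as the global image feature) can be sketched with a stock torchvision backbone. ResNet-50 is used here only as an example and is not necessarily one of the networks evaluated in the paper; the preprocessing constants are the standard ImageNet values, and the snippet assumes a recent torchvision release.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Backbone pretrained on ImageNet; strip the final classification layer so the
# 2048-D penultimate activations serve as the global image feature.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
feature_extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

def extract_feature(pil_image):
    """Return a 2048-D feature vector for one HRRS scene image."""
    with torch.no_grad():
        x = preprocess(pil_image).unsqueeze(0)              # 1x3x224x224 tensor
        return feature_extractor(x).flatten(1).squeeze(0)   # 2048-D vector
```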

  7. Complex scenes and situations visualization in hierarchical learning algorithm with dynamic 3D NeoAxis engine

    Science.gov (United States)

    Graham, James; Ternovskiy, Igor V.

    2013-06-01

    We applied a two stage unsupervised hierarchical learning system to model complex dynamic surveillance and cyber space monitoring systems using a non-commercial version of the NeoAxis visualization software. The hierarchical scene learning and recognition approach is based on hierarchical expectation maximization, and was linked to a 3D graphics engine for validation of learning and classification results and understanding the human - autonomous system relationship. Scene recognition is performed by taking synthetically generated data and feeding it to a dynamic logic algorithm. The algorithm performs hierarchical recognition of the scene by first examining the features of the objects to determine which objects are present, and then determines the scene based on the objects present. This paper presents a framework within which low level data linked to higher-level visualization can provide support to a human operator and be evaluated in a detailed and systematic way.

  8. Multi-pollutants sensors based on near-IR telecom lasers and mid-IR difference frequency generation: development and applications; Instruments de mesure multi-polluants par spectroscopie infrarouge bases sur des lasers fibres et par generation de difference de frequences: developpement et applications

    Energy Technology Data Exchange (ETDEWEB)

    Cousin, J

    2006-12-15

    At present the detection of VOCs and other anthropogenic trace pollutants is an important challenge in the measurement of air quality. Infrared spectroscopy, which allows spectral regions rich in molecular absorption to be probed, is a suitable technique for in-situ monitoring of air pollution. The aim of this work was therefore to develop instruments capable of detecting multiple pollutants for in-situ monitoring by IR spectroscopy. A first project benefited from the availability of telecommunications lasers emitting in the near-IR. This instrument was based on an external-cavity diode laser (1500-1640 nm) in conjunction with a multipass cell (100 m). The detection sensitivity was optimised by employing balanced detection and a sweep-integration procedure. The instrument developed is deployable for in-situ measurements with a sensitivity of < 10^-8 cm^-1 Hz^-1/2 and allowed the quantification of chemical species such as CO2, CO, C2H2 and CH4 and the determination of the isotopic ratio 13CO2/12CO2 in combustion environments. The second project consisted of mixing two near-IR fiber lasers in a non-linear crystal (PPLN) in order to produce laser radiation by difference frequency generation in the mid-IR (3.15-3.43 µm), where the absorption bands of the molecules are most intense. The first studies with this source were carried out on the detection of ethylene (C2H4) and benzene (C6H6). The development, characterization and applications of these instruments in the near- and mid-IR are detailed, and the advantages of the two spectral ranges are highlighted. (author)

  9. Teledyne H1RG, H2RG, and H4RG Noise Generator

    Science.gov (United States)

    Rauscher, Bernard J.

    2015-01-01

    This paper describes the near-infrared detector system noise generator (NG) that we wrote for the James Webb Space Telescope (JWST) Near Infrared Spectrograph (NIRSpec). NG simulates many important noise components including: (1) white "read noise", (2) residual bias drifts, (3) pink 1/f noise, (4) alternating column noise, and (5) picture frame noise. By adjusting the input parameters, NG can simulate noise for Teledyne's H1RG, H2RG, and H4RG detectors with and without Teledyne's SIDECAR ASIC IR array controller. NG can be used as a starting point for simulating astronomical scenes by adding dark current, scattered light, and astronomical sources into the results from NG. NG is written in Python-3.4.
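
    Two of the listed components, white read noise and pink 1/f noise, can be sketched in a few lines; the published NG code is the authoritative reference, and the frame size and noise amplitudes below are arbitrary illustration values.

```python
import numpy as np

def white_read_noise(shape, sigma, rng=None):
    """Uncorrelated Gaussian read noise with standard deviation `sigma`."""
    return np.random.default_rng(rng).normal(0.0, sigma, shape)

def pink_noise_stream(n_samples, rng=None):
    """Unit-variance 1/f ('pink') noise made by shaping white noise in Fourier space."""
    rng = np.random.default_rng(rng)
    white = rng.normal(size=n_samples)
    spectrum = np.fft.rfft(white)
    freqs = np.fft.rfftfreq(n_samples)
    freqs[0] = freqs[1]                 # avoid dividing by zero at DC
    spectrum /= np.sqrt(freqs)          # amplitude ~ 1/sqrt(f)  ->  power ~ 1/f
    pink = np.fft.irfft(spectrum, n_samples)
    return pink / pink.std()

# A toy frame: white read noise plus correlated pink noise folded into rows.
frame = white_read_noise((512, 512), sigma=15.0) \
      + 5.0 * pink_noise_stream(512 * 512).reshape(512, 512)
```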

  10. Interaction between scene-based and array-based contextual cueing.

    Science.gov (United States)

    Rosenbaum, Gail M; Jiang, Yuhong V

    2013-07-01

    Contextual cueing refers to the cueing of spatial attention by repeated spatial context. Previous studies have demonstrated distinctive properties of contextual cueing by background scenes and by an array of search items. Whereas scene-based contextual cueing reflects explicit learning of the scene-target association, array-based contextual cueing is supported primarily by implicit learning. In this study, we investigated the interaction between scene-based and array-based contextual cueing. Participants searched for a target that was predicted by both the background scene and the locations of distractor items. We tested three possible patterns of interaction: (1) The scene and the array could be learned independently, in which case cueing should be expressed even when only one cue was preserved; (2) the scene and array could be learned jointly, in which case cueing should occur only when both cues were preserved; (3) overshadowing might occur, in which case learning of the stronger cue should preclude learning of the weaker cue. In several experiments, we manipulated the nature of the contextual cues present during training and testing. We also tested explicit awareness of scenes, scene-target associations, and arrays. The results supported the overshadowing account: Specifically, scene-based contextual cueing precluded array-based contextual cueing when both were predictive of the location of a search target. We suggest that explicit, endogenous cues dominate over implicit cues in guiding spatial attention.

  11. Superpixel-Based Feature for Aerial Image Scene Recognition

    Directory of Open Access Journals (Sweden)

    Hongguang Li

    2018-01-01

    Full Text Available Image scene recognition is a core technology for many aerial remote sensing applications. Different landforms are inputted as different scenes in aerial imaging, and all landform information is regarded as valuable for aerial image scene recognition. However, the conventional features of the Bag-of-Words model are designed using local points or other related information and thus are unable to fully describe landform areas. This limitation cannot be ignored when the aim is to ensure accurate aerial scene recognition. A novel superpixel-based feature is proposed in this study to characterize aerial image scenes. Based on the proposed feature, a scene recognition method using the Bag-of-Words model for aerial imaging is then designed. The proposed superpixel-based feature, which utilizes landform information, spans from top-level superpixel extraction of landforms to bottom-level expression of feature vectors. This characterization technique comprises the following steps: simple linear iterative clustering (SLIC)-based superpixel segmentation, adaptive filter bank construction, Lie group-based feature quantification, and visual saliency model-based feature weighting. Experiments on image scene recognition are carried out using real image data captured by an unmanned aerial vehicle (UAV). The recognition accuracy of the proposed superpixel-based feature is 95.1%, which is higher than those of scene recognition algorithms based on other local features.
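
    The first step of the pipeline, SLIC superpixel segmentation of an aerial image, can be sketched with scikit-image; the later steps (adaptive filter bank, Lie group quantification, saliency weighting) are not shown, and the segment count, compactness, and the mean-colour descriptor are illustrative assumptions.

```python
import numpy as np
from skimage import io
from skimage.segmentation import slic

def landform_superpixels(image_path, n_segments=400, compactness=10.0):
    """Segment an aerial image into superpixels so that each segment roughly follows
    a homogeneous landform region; returns the label map and the mean colour of every
    superpixel (a trivial per-segment descriptor used only for illustration)."""
    image = io.imread(image_path)
    labels = slic(image, n_segments=n_segments, compactness=compactness, start_label=0)
    descriptors = np.array([image[labels == k].mean(axis=0)
                            for k in range(labels.max() + 1)])
    return labels, descriptors
```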

  12. Fast Binary Coding for the Scene Classification of High-Resolution Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Fan Hu

    2016-06-01

    Full Text Available Scene classification of high-resolution remote sensing (HRRS imagery is an important task in the intelligent processing of remote sensing images and has attracted much attention in recent years. Although the existing scene classification methods, e.g., the bag-of-words (BOW model and its variants, can achieve acceptable performance, these approaches strongly rely on the extraction of local features and the complicated coding strategy, which are usually time consuming and demand much expert effort. In this paper, we propose a fast binary coding (FBC method, to effectively generate efficient discriminative scene representations of HRRS images. The main idea is inspired by the unsupervised feature learning technique and the binary feature descriptions. More precisely, equipped with the unsupervised feature learning technique, we first learn a set of optimal “filters” from large quantities of randomly-sampled image patches and then obtain feature maps by convolving the image scene with the learned filters. After binarizing the feature maps, we perform a simple hashing step to convert the binary-valued feature map to the integer-valued feature map. Finally, statistical histograms computed on the integer-valued feature map are used as global feature representations of the scenes of HRRS images, similar to the conventional BOW model. The analysis of the algorithm complexity and experiments on HRRS image datasets demonstrate that, in contrast with existing scene classification approaches, the proposed FBC has much faster computational speed and achieves comparable classification performance. In addition, we also propose two extensions to FBC, i.e., the spatial co-occurrence matrix and different visual saliency maps, for further improving its final classification accuracy.
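
    A compressed sketch of the FBC steps described above is given below: filters learned from random patches (here via PCA as a stand-in for the paper's unsupervised learning step), convolution, binarization, bit-packing the binary maps into an integer-valued map, and a histogram over the integer codes. Filter counts, patch sizes, and the PCA substitution are assumptions made for illustration.

```python
import numpy as np
from scipy.signal import convolve2d

def learn_filters(images, n_filters=8, patch=7, n_patches=5000, rng=0):
    """Learn filters as the top PCA components of randomly sampled image patches
    (a simple stand-in for the paper's unsupervised filter learning)."""
    rng = np.random.default_rng(rng)
    patches = []
    for _ in range(n_patches):
        img = images[rng.integers(len(images))]
        r = rng.integers(img.shape[0] - patch)
        c = rng.integers(img.shape[1] - patch)
        patches.append(img[r:r + patch, c:c + patch].ravel())
    X = np.array(patches) - np.mean(patches, axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:n_filters].reshape(n_filters, patch, patch)

def fbc_histogram(image, filters):
    """Convolve, binarize each feature map, hash the bit-planes to an integer map,
    and return its normalized histogram as the scene representation."""
    bits = [(convolve2d(image, f, mode='same') > 0).astype(np.int64) for f in filters]
    integer_map = np.zeros_like(bits[0])
    for k, b in enumerate(bits):
        integer_map += b << k                     # pack binary maps into integer codes
    hist = np.bincount(integer_map.ravel(), minlength=2 ** len(filters))
    return hist / hist.sum()
```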

  13. Primal scene derivatives in the work of Yukio Mishima: the primal scene fantasy.

    Science.gov (United States)

    Turco, Ronald N

    2002-01-01

    This article discusses the preoccupation with fire, revenge, crucifixion, and other fantasies as they relate to the primal scene. The manifestations of these fantasies are demonstrated in a work of fiction by Yukio Mishima, The Temple of the Golden Pavilion. As is the case in other writings of Mishima, there is a fusion of aggressive and libidinal drives and a preoccupation with death. The primal scene is directly connected with pyromania and destructive "acting out" of fantasies. This article is timely with regard to understanding contemporary events of cultural and national destruction.

  14. Emotional and neutral scenes in competition: orienting, efficiency, and identification.

    Science.gov (United States)

    Calvo, Manuel G; Nummenmaa, Lauri; Hyönä, Jukka

    2007-12-01

    To investigate preferential processing of emotional scenes competing for limited attentional resources with neutral scenes, prime pictures were presented briefly (450 ms), peripherally (5.2 degrees away from fixation), and simultaneously (one emotional and one neutral scene) versus singly. Primes were followed by a mask and a probe for recognition. Hit rate was higher for emotional than for neutral scenes in the dual- but not in the single-prime condition, and A' sensitivity decreased for neutral but not for emotional scenes in the dual-prime condition. This preferential processing involved both selective orienting and efficient encoding, as revealed, respectively, by a higher probability of first fixation on--and shorter saccade latencies to--emotional scenes and by shorter fixation time needed to accurately identify emotional scenes, in comparison with neutral scenes.

  15. Scene Integration for Online VR Advertising Clouds

    Directory of Open Access Journals (Sweden)

    Michael Kalochristianakis

    2014-12-01

    Full Text Available This paper presents a scene composition approach that allows the combinational use of standard three dimensional objects, called models, in order to create X3D scenes. The module is an integral part of a broader design aiming to construct large scale online advertising infrastructures that rely on virtual reality technologies. The architecture addresses a number of problems regarding remote rendering for low end devices and last but not least, the provision of scene composition and integration. Since viewers do not keep information regarding individual input models or scenes, composition requires the consideration of mechanisms that add state to viewing technologies. In terms of this work we extended a well-known, open source X3D authoring tool.

  16. Estimating the number of people in crowded scenes

    Science.gov (United States)

    Kim, Minjin; Kim, Wonjun; Kim, Changick

    2011-01-01

    This paper presents a method to estimate the number of people in crowded scenes without using explicit object segmentation or tracking. The proposed method consists of three steps: (1) extracting space-time interest points using eigenvalues of the local spatio-temporal gradient matrix, (2) generating crowd regions based on space-time interest points, and (3) estimating the crowd density based on multiple regression. In the experimental results, the efficiency and robustness of the proposed method are demonstrated using the PETS 2009 dataset.
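
    Step (3) above, the multiple-regression mapping from crowd-region descriptors to a people count, might look roughly like this; the two features (region area and interest-point count) and the toy training data are illustrative stand-ins for the paper's descriptors.

```python
# Sketch of step (3): multiple regression from crowd-region features to a count.
import numpy as np

# Hypothetical training data: one row per frame,
# columns = [crowd-region area in pixels, number of space-time interest points].
X_train = np.array([[1200, 45], [3400, 130], [5200, 210], [800, 30]], float)
y_train = np.array([8.0, 25.0, 41.0, 6.0])           # ground-truth people counts

# Ordinary least squares with an intercept term.
A = np.hstack([X_train, np.ones((len(X_train), 1))])
coef, *_ = np.linalg.lstsq(A, y_train, rcond=None)

def estimate_count(region_area, num_interest_points):
    return float(np.array([region_area, num_interest_points, 1.0]) @ coef)

print(round(estimate_count(2600, 95)))                # estimated people count
```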

  17. Three-dimensional measurement system for crime scene documentation

    Science.gov (United States)

    Adamczyk, Marcin; Hołowko, Elwira; Lech, Krzysztof; Michoński, Jakub; Mączkowski, Grzegorz; Bolewicki, Paweł; Januszkiewicz, Kamil; Sitnik, Robert

    2017-10-01

    Three-dimensional measurement techniques (such as photogrammetry, Time of Flight, Structure from Motion or Structured Light) are becoming a standard in the crime scene documentation process. The use of 3D measurement techniques provides an opportunity to prepare a more insightful investigation and helps to show every trace in the context of the entire crime scene. In this paper we present a hierarchical, three-dimensional measurement system designed for the crime scene documentation process. Our system reflects current standards in crime scene documentation - it is designed to perform measurement in two stages. The first stage of documentation, the most general, is prepared with a scanner with relatively low spatial resolution but a large measuring volume - it is used to document the whole scene. The second stage is much more detailed: high resolution but a smaller measuring volume, for areas that require a more detailed approach. The documentation process is supervised by a specialised application, CrimeView3D, which is a software platform for measurement management (connecting with scanners and carrying out measurements, automatic or semi-automatic data registration in real time) and data visualisation (3D visualisation of documented scenes). It also provides a series of useful tools for forensic technicians: a virtual measuring tape, searching for sources of blood spatter, a virtual walk through the crime scene, and many others. In this paper we present our measuring system and the developed software, together with the outcome of research on metrological validation of the scanners, performed according to the VDI/VDE standard. We also present results from measurement sessions conducted on real crime scenes in cooperation with technicians from the Central Forensic Laboratory of the Police.

  18. The roles of scene gist and spatial dependency among objects in the semantic guidance of attention in real-world scenes.

    Science.gov (United States)

    Wu, Chia-Chien; Wang, Hsueh-Cheng; Pomplun, Marc

    2014-12-01

    A previous study (Vision Research 51 (2011) 1192-1205) found evidence for semantic guidance of visual attention during the inspection of real-world scenes, i.e., an influence of semantic relationships among scene objects on overt shifts of attention. In particular, the results revealed an observer bias toward gaze transitions between semantically similar objects. However, this effect is not necessarily indicative of semantic processing of individual objects but may be mediated by knowledge of the scene gist, which does not require object recognition, or by known spatial dependency among objects. To examine the mechanisms underlying semantic guidance, in the present study, participants were asked to view a series of displays with the scene gist excluded and spatial dependency varied. Our results show that spatial dependency among objects seems to be sufficient to induce semantic guidance. Scene gist, on the other hand, does not seem to affect how observers use semantic information to guide attention while viewing natural scenes. Extracting semantic information mainly based on spatial dependency may be an efficient strategy of the visual system that only adds little cognitive load to the viewing task. Copyright © 2014 Elsevier Ltd. All rights reserved.

  19. Moving through a multiplex holographic scene

    Science.gov (United States)

    Mrongovius, Martina

    2013-02-01

    This paper explores how movement can be used as a compositional element in installations of multiplex holograms. My holographic images are created from montages of hand-held video and photo-sequences. These spatially dynamic compositions are visually complex but anchored to landmarks and hints of the capturing process - such as the appearance of the photographer's shadow - to establish a sense of connection to the holographic scene. Moving around in front of the hologram, the viewer animates the holographic scene. A perception of motion then results from the viewer's bodily awareness of physical motion and the visual reading of dynamics within the scene or movement of perspective through a virtual suggestion of space. By linking and transforming the physical motion of the viewer with the visual animation, the viewer's bodily awareness - including proprioception, balance and orientation - play into the holographic composition. How multiplex holography can be a tool for exploring coupled, cross-referenced and transformed perceptions of movement is demonstrated with a number of holographic image installations. Through this process I expanded my creative composition practice to consider how dynamic and spatial scenes can be conveyed through the fragmented view of a multiplex hologram. This body of work was developed through an installation art practice and was the basis of my recently completed doctoral thesis: 'The Emergent Holographic Scene — compositions of movement and affect using multiplex holographic images'.

  20. Impaired Insulin Signaling is Associated with Hepatic Mitochondrial Dysfunction in IR+/−-IRS-1+/− Double Heterozygous (IR-IRS1dh) Mice

    Directory of Open Access Journals (Sweden)

    Andras Franko

    2017-05-01

    Full Text Available Mitochondria play a pivotal role in energy metabolism, but whether insulin signaling per se could regulate mitochondrial function has not been identified yet. To investigate whether mitochondrial function is regulated by insulin signaling, we analyzed muscle and liver of insulin receptor (IR)+/−-insulin receptor substrate-1 (IRS-1)+/− double heterozygous (IR-IRS1dh) mice, a well-described model for insulin resistance. IR-IRS1dh mice were studied at the age of 6 and 12 months and glucose metabolism was determined by glucose and insulin tolerance tests. Mitochondrial enzyme activities, oxygen consumption, and membrane potential were assessed using spectrophotometric, respirometric, and proton motive force analysis, respectively. IR-IRS1dh mice showed elevated serum insulin levels. Hepatic mitochondrial oxygen consumption was reduced in IR-IRS1dh animals at 12 months of age. Furthermore, 6-month-old IR-IRS1dh mice demonstrated enhanced mitochondrial respiration in skeletal muscle, but a tendency of impaired glucose tolerance. On the other hand, 12-month-old IR-IRS1dh mice showed improved glucose tolerance, but normal muscle mitochondrial function. Our data revealed that deficiency in IR/IRS-1 resulted in normal or even elevated skeletal muscle, but impaired hepatic mitochondrial function, suggesting a direct cross-talk between insulin signaling and mitochondria in the liver.

  1. Mid-IR laser ultrasonic testing for fiber reinforced plastics

    Science.gov (United States)

    Kusano, Masahiro; Hatano, Hideki; Oguchi, Kanae; Yamawaki, Hisashi; Watanabe, Makoto; Enoki, Manabu

    2018-04-01

    Ultrasonic testing is the most common method to detect defects in materials and to evaluate their sizes and locations. Because piezo-electric transducers must be handled manually from point to point, inspection of large products such as airplanes is costly. Laser ultrasonic testing (LUT) is a breakthrough technique. A pulsed laser generates ultrasonic waves on a material surface through the thermoelastic effect or ablation, and the ultrasonic waves can be detected by another laser with an interferometer. Thus, LUT enables non-contact, effectively instantaneous inspection of a sample. A pulsed laser with a wavelength around 3.2 μm (in the mid-IR range) is better suited to generating ultrasonic waves in fiber reinforced plastics (FRPs) because the light is well absorbed by the polymeric matrix. However, such lasers are not commercially available. In order to emit mid-IR laser pulses, we applied an optical parametric oscillator and developed an efficient wavelength conversion device pumped by a compact Nd:YAG solid-state laser. Our mid-IR LUT system is therefore well suited to the inspection of FRPs: the signal-to-noise ratio of ultrasonic waves generated by the mid-IR laser is higher than that generated by the Nd:YAG laser. The purpose of the present study is to evaluate the performance of the mid-IR LUT system in reflection mode. We investigated the effects of the material properties and the laser properties on the generated ultrasonic waves. In addition, C-scan images acquired by the system are also presented.

  2. Global scene layout modulates contextual learning in change detection

    Directory of Open Access Journals (Sweden)

    Markus eConci

    2014-02-01

    Full Text Available Change in the visual scene often goes unnoticed – a phenomenon referred to as ‘change blindness’. This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of global precedence in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  3. Global scene layout modulates contextual learning in change detection.

    Science.gov (United States)

    Conci, Markus; Müller, Hermann J

    2014-01-01

    Change in the visual scene often goes unnoticed - a phenomenon referred to as "change blindness." This study examined whether the hierarchical structure, i.e., the global-local layout of a scene can influence performance in a one-shot change detection paradigm. To this end, natural scenes of a laid breakfast table were presented, and observers were asked to locate the onset of a new local object. Importantly, the global structure of the scene was manipulated by varying the relations among objects in the scene layouts. The very same items were either presented as global-congruent (typical) layouts or as global-incongruent (random) arrangements. Change blindness was less severe for congruent than for incongruent displays, and this congruency benefit increased with the duration of the experiment. These findings show that global layouts are learned, supporting detection of local changes with enhanced efficiency. However, performance was not affected by scene congruency in a subsequent control experiment that required observers to localize a static discontinuity (i.e., an object that was missing from the repeated layouts). Our results thus show that learning of the global layout is particularly linked to the local objects. Taken together, our results reveal an effect of "global precedence" in natural scenes. We suggest that relational properties within the hierarchy of a natural scene are governed, in particular, by global image analysis, reducing change blindness for local objects through scene learning.

  4. Setting the scene

    International Nuclear Information System (INIS)

    Curran, S.

    1977-01-01

    The reasons for the special meeting on the breeder reactor are outlined, with some reference to the special Scottish interest in the topic. Approximately 30% of the electrical energy generated in Scotland is nuclear, and the special developments at Dounreay make policy decisions on the future of the commercial breeder reactor urgent. The participants review the major questions arising in arriving at such decisions; in effect, an attempt is made to respond to the wish of the Secretary of State for Energy to have an informed debate. To set the scene, the importance of energy availability to the strength of the national economy is stressed and the reasons for an increasing energy demand are put forward. Examination of alternative sources of energy shows that none is definitely capable of filling the foreseen energy gap. This implies an integrated thermal/breeder reactor programme as the way to close the anticipated gap. The problems of disposal of radioactive waste and the safeguards in the handling of plutonium are outlined. Longer-term benefits, including the consumption of plutonium and naturally occurring radioactive materials, are examined. (author)

  5. The primal scene and symbol formation.

    Science.gov (United States)

    Niedecken, Dietmut

    2016-06-01

    This article discusses the meaning of the primal scene for symbol formation by exploring its way of processing in a child's play. The author questions the notion that a sadomasochistic way of processing is the only possible one, and presents a model of an alternative mode of processing. It is suggested that both ways of processing intertwine in the "fabric of life" (D. Laub). Two clinical vignettes, one from an analytic child psychotherapy and the other from the analysis of a 30-year-old female patient, illustrate how the primal scene is played out in the form of a terzet. The author explores whether the sadomasochistic way of processing actually precedes the "primal scene as a terzet", discussing whether it could even be regarded as a precondition for the formation of the latter or whether, alternatively, the "combined parent-figure" gives rise to these ways of processing. The question is left open. Finally, it is shown how the two modes of experiencing the primal scene underlie discursive and presentative symbol formation, respectively. Copyright © 2015 Institute of Psychoanalysis.

  6. Modeling global scene factors in attention

    Science.gov (United States)

    Torralba, Antonio

    2003-07-01

    Models of visual attention have focused predominantly on bottom-up approaches that ignored structured contextual and scene information. I propose a model of contextual cueing for attention guidance based on the global scene configuration. It is shown that the statistics of low-level features across the whole image can be used to prime the presence or absence of objects in the scene and to predict their location, scale, and appearance before exploring the image. In this scheme, visual context information can become available early in the visual processing chain, which allows modulation of the saliency of image regions and provides an efficient shortcut for object detection and recognition. 2003 Optical Society of America

  7. Presentation of 3D Scenes Through Video Example.

    Science.gov (United States)

    Baldacci, Andrea; Ganovelli, Fabio; Corsini, Massimiliano; Scopigno, Roberto

    2017-09-01

    Using synthetic videos to present a 3D scene is a common requirement for architects, designers, engineers or Cultural Heritage professionals; however, it is usually time-consuming and, in order to obtain high quality results, the support of a film maker or computer animation expert is necessary. We introduce an alternative approach that takes the 3D scene of interest and an example video as input, and automatically produces a video of the input scene that resembles the given video example. In other words, our algorithm allows the user to "replicate" an existing video on a different 3D scene. We build on the intuition that a video sequence of a static environment is strongly characterized by its optical flow, or, in other words, that two videos are similar if their optical flows are similar. We therefore recast the problem as producing a video of the input scene whose optical flow is similar to the optical flow of the input video. Our intuition is supported by a user study specifically designed to verify this statement. We have successfully tested our approach on several scenes and input videos, some of which are reported in the accompanying material of this paper.
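
    The core intuition above, that two videos of static environments are similar if their optical flows are similar, can be sketched with OpenCV's Farnebäck flow; the frame pairing and the mean endpoint-error metric below are assumptions, not the authors' actual cost function.

```python
# Sketch of comparing two videos by their Farneback optical flows (OpenCV).
import cv2
import numpy as np

def frame_flow(prev_gray, next_gray):
    # Dense optical flow between two consecutive grayscale frames.
    return cv2.calcOpticalFlowFarneback(prev_gray, next_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)

def flow_distance(frames_a, frames_b):
    """Mean endpoint error between the flows of two equally long frame lists."""
    errors = []
    for (a0, a1), (b0, b1) in zip(zip(frames_a, frames_a[1:]),
                                  zip(frames_b, frames_b[1:])):
        fa, fb = frame_flow(a0, a1), frame_flow(b0, b1)
        errors.append(np.linalg.norm(fa - fb, axis=2).mean())
    return float(np.mean(errors))

# frames_a / frames_b would be lists of same-sized uint8 grayscale frames,
# e.g. rendered from the 3D scene and decoded from the example video.
```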

  8. Design and characterization of a mixed-signal PCB for digital-to-analog conversion in a modular and scalable infrared scene projector

    Science.gov (United States)

    Benedict, Jacob

    Infra-red (IR) sensors have proven instrumental in a wide variety of fields, from military to industrial applications. The proliferation of IR sensors has spawned an intense push for technologies that can test and calibrate them. One such technology, IR scene projection (IRSP), provides an inexpensive and safe method for testing IR sensor devices. Previous efforts have been conducted to develop IRSPs based on super-lattice light emitting diodes (SLEDS). A single-color 512x512 SLEDs system has been developed, produced, and tested, as documented in Corey Lange's Master's thesis and a GOMAC paper by Rodney McGee [1][2]. Current efforts are underway to develop a two-color 512x512 SLEDs system designated TCSA. The following thesis discusses the design and implementation of a custom printed circuit board (PCB), known as the FMC 4DAC, that carries both analog and digital signals. Utilizing two 16-bit digital-to-analog converters (DACs), the board provides four analog current output channels for driving the TCSA system at a maximum frame rate of 1 kHz. In addition, the board supports a scalable TCSA system architecture: several copies of the board can be run in parallel to achieve a range of analog channels between 4 and 32.

  9. Real-time capture and reconstruction system with multiple GPUs for a 3D live scene by a generation from 4K IP images to 8K holograms.

    Science.gov (United States)

    Ichihashi, Yasuyuki; Oi, Ryutaro; Senoh, Takanori; Yamamoto, Kenji; Kurita, Taiichiro

    2012-09-10

    We developed a real-time capture and reconstruction system for three-dimensional (3D) live scenes. In previous research, we used integral photography (IP) to capture 3D images and then generated holograms from the IP images to implement a real-time reconstruction system. In this paper, we use a 4K (3,840 × 2,160) camera to capture IP images and 8K (7,680 × 4,320) liquid crystal display (LCD) panels for the reconstruction of holograms. We investigate two methods for enlarging the 4K images that were captured by integral photography to 8K images. One of the methods increases the number of pixels of each elemental image. The other increases the number of elemental images. In addition, we developed a personal computer (PC) cluster system with graphics processing units (GPUs) for the enlargement of IP images and the generation of holograms from the IP images using fast Fourier transform (FFT). We used the Compute Unified Device Architecture (CUDA) as the development environment for the GPUs. The Fast Fourier transform is performed using the CUFFT (CUDA FFT) library. As a result, we developed an integrated system for performing all processing from the capture to the reconstruction of 3D images by using these components and successfully used this system to reconstruct a 3D live scene at 12 frames per second.
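
    A rough sketch of the two operations described above, enlarging the IP image and taking a per-elemental-image FFT, is given below using NumPy in place of the CUDA/CUFFT pipeline; the elemental-image size and the 2x nearest-neighbour upscaling are illustrative assumptions.

```python
# Sketch of IP-image enlargement and per-elemental-image FFT (NumPy stand-in
# for the CUDA/CUFFT pipeline); sizes must divide evenly in this toy version.
import numpy as np

def enlarge_by_pixel_count(ip_image, factor=2):
    """Method 1: give each elemental image more pixels (nearest neighbour)."""
    return np.kron(ip_image, np.ones((factor, factor), dtype=ip_image.dtype))

def elemental_ffts(ip_image, elem_h=64, elem_w=64):
    """FFT of every elemental image, the core step of hologram generation."""
    H, W = ip_image.shape
    rows, cols = H // elem_h, W // elem_w
    spectra = np.empty((rows, cols, elem_h, elem_w), dtype=complex)
    for i in range(rows):
        for j in range(cols):
            tile = ip_image[i * elem_h:(i + 1) * elem_h,
                            j * elem_w:(j + 1) * elem_w]
            spectra[i, j] = np.fft.fftshift(np.fft.fft2(tile))
    return spectra

ip = np.random.rand(512, 512)          # placeholder for a captured IP frame
print(elemental_ffts(enlarge_by_pixel_count(ip)).shape)   # (16, 16, 64, 64)
```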

  10. NEGOTIATING PLACE AND GENDERED VIOLENCE IN CANADA’S LARGEST OPEN DRUG SCENE

    Science.gov (United States)

    McNeil, Ryan; Shannon, Kate; Shaver, Laura; Kerr, Thomas; Small, Will

    2014-01-01

    Background Vancouver’s Downtown Eastside is home to Canada’s largest street-based drug scene and only supervised injection facility (Insite). High levels of violence among men and women have been documented in this neighbourhood. This study was undertaken to explore the role of violence in shaping the socio-spatial relations of women and ‘marginal men’ (i.e., those occupying subordinate positions within the drug scene) in the Downtown Eastside, including access to Insite. Methods Semi-structured qualitative interviews were conducted with 23 people who inject drugs (PWID) recruited through the Vancouver Area Network of Drug Users, a local drug user organization. Interviews included a mapping exercise. Interview transcripts and maps were analyzed thematically, with an emphasis on how gendered violence shaped participants’ spatial practices. Results Hegemonic forms of masculinity operating within the Downtown Eastside framed the everyday violence experienced by women and marginal men. This violence shaped the spatial practices of women and marginal men, in that they avoided drug scene milieus where they had experienced violence or that they perceived to be dangerous. Some men linked their spatial restrictions to the perceived 'dope quality' of neighbourhood drug dealers to maintain claims to dominant masculinities while enacting spatial strategies to promote safety. Environmental supports provided by health and social care agencies were critical in enabling women and marginal men to negotiate place and survival within the context of drug scene violence. Access to Insite did not motivate participants to enter into “dangerous” drug scene milieus but they did venture into these areas if necessary to obtain drugs or generate income. Conclusion Gendered violence is critical in restricting the geographies of women and marginal men within the street-based drug scene. There is a need to scale up existing environmental interventions, including supervised injection

  11. Spectral feature characterization methods for blood stain detection in crime scene backgrounds

    Science.gov (United States)

    Yang, Jie; Mathew, Jobin J.; Dube, Roger R.; Messinger, David W.

    2016-05-01

    Blood stains are one of the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Blood spectral signatures containing unique reflectance or absorption features are important both for forensic on-site investigation and laboratory testing. They can be used for target detection and identification applied to crime scene hyperspectral imagery, and also be utilized to analyze the spectral variation of blood on various backgrounds. Non-blood stains often mislead the detection and can generate false alarms at a real crime scene, especially for dark and red backgrounds. This paper measured the reflectance of liquid blood and 9 kinds of non-blood samples in the range of 350 nm - 2500 nm in various crime scene backgrounds, such as pure samples contained in petri dishes with various thicknesses, mixed samples with fabrics of different colors and materials, and mixed samples with wood, all of which are examined to provide sub-visual evidence for detecting and recognizing blood from non-blood samples in a realistic crime scene. The spectral differences between blood and non-blood samples are examined and spectral features such as "peaks" and "depths" of reflectance are selected. Two blood stain detection methods are proposed in this paper. The first method uses an index defined as the ratio of the "depth" minus the "peak" over the "depth" plus the "peak" within a wavelength range of the reflectance spectrum. The second method uses the relative band depth of selected wavelength ranges of the reflectance spectrum. Results show that the index method is able to discriminate blood from non-blood samples in most tested crime scene backgrounds, but is not able to detect it on black felt, whereas the relative band depth method is able to discriminate blood from non-blood samples on all of the tested background material types and colors.
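
    A minimal sketch of the first (index-based) detection method is given below; the wavelength window and the decision threshold are illustrative assumptions rather than the calibrated values from the measurements.

```python
# Sketch of the index-based detection: (depth - peak) / (depth + peak) within
# a chosen wavelength window of the reflectance spectrum.
import numpy as np

def blood_index(wavelengths_nm, reflectance, lo=500.0, hi=700.0):
    band = (wavelengths_nm >= lo) & (wavelengths_nm <= hi)
    peak = reflectance[band].max()     # local reflectance maximum ("peak")
    depth = reflectance[band].min()    # local reflectance minimum ("depth")
    return (depth - peak) / (depth + peak)

# A pixel would be flagged as candidate blood when its index falls on the
# blood side of a threshold learned from the measured samples, e.g.:
# is_blood = blood_index(wl, spectrum) < learned_threshold
```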

  12. 47 CFR 80.1127 - On-scene communications.

    Science.gov (United States)

    2010-10-01

    Title 47, Telecommunication; Federal Communications Commission (continued); Safety and Special Radio Services; Stations in the Maritime Services; Global Maritime Distress and Safety System (GMDSS); Operating Procedures for Distress and Safety Communications. § 80.1127 On-scene communications. (a) On-scene communications...

  13. The occipital place area represents the local elements of scenes.

    Science.gov (United States)

    Kamps, Frederik S; Julian, Joshua B; Kubilius, Jonas; Kanwisher, Nancy; Dilks, Daniel D

    2016-05-15

    Neuroimaging studies have identified three scene-selective regions in human cortex: parahippocampal place area (PPA), retrosplenial complex (RSC), and occipital place area (OPA). However, precisely what scene information each region represents is not clear, especially for the least studied, more posterior OPA. Here we hypothesized that OPA represents local elements of scenes within two independent, yet complementary scene descriptors: spatial boundary (i.e., the layout of external surfaces) and scene content (e.g., internal objects). If OPA processes the local elements of spatial boundary information, then it should respond to these local elements (e.g., walls) themselves, regardless of their spatial arrangement. Indeed, we found that OPA, but not PPA or RSC, responded similarly to images of intact rooms and these same rooms in which the surfaces were fractured and rearranged, disrupting the spatial boundary. Next, if OPA represents the local elements of scene content information, then it should respond more when more such local elements (e.g., furniture) are present. Indeed, we found that OPA, but not PPA or RSC, responded more to multiple than single pieces of furniture. Taken together, these findings reveal that OPA analyzes local scene elements - both in spatial boundary and scene content representation - while PPA and RSC represent global scene properties. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. Accumulating and remembering the details of neutral and emotional natural scenes.

    Science.gov (United States)

    Melcher, David

    2010-01-01

    In contrast to our rich sensory experience with complex scenes in everyday life, the capacity of visual working memory is thought to be quite limited. Here, memory for the details of naturalistic scenes was examined as a function of display duration, emotional valence of the scene, and delay before test. Individual differences in working memory and long-term memory for pictorial scenes were examined in experiment 1. The accumulation of memory for emotional scenes and the retention of these details in long-term memory were investigated in experiment 2. Although there were large individual differences in performance, memory for scene details generally exceeded the traditional working memory limit within a few seconds. Information about positive scenes was learned most quickly, while negative scenes showed the worst memory for details. The overall pattern of results was consistent with the idea that both short-term and long-term representations are mixed together in a medium-term 'online' memory for scenes.

  15. Distinct signalling properties of insulin receptor substrate (IRS)-1 and IRS-2 in mediating insulin/IGF-1 action.

    Science.gov (United States)

    Rabiee, Atefeh; Krüger, Marcus; Ardenkjær-Larsen, Jacob; Kahn, C Ronald; Emanuelli, Brice

    2018-07-01

    Insulin/IGF-1 action is driven by a complex and highly integrated signalling network. Loss-of-function studies indicate that the major insulin/IGF-1 receptor substrate (IRS) proteins, IRS-1 and IRS-2, mediate different biological functions in vitro and in vivo, suggesting specific signalling properties despite their high degree of homology. To identify mechanisms contributing to the differential signalling properties of IRS-1 and IRS-2 in the mediation of insulin/IGF-1 action, we performed comprehensive mass spectrometry (MS)-based phosphoproteomic profiling of brown preadipocytes from wild type, IRS-1 -/- and IRS-2 -/- mice in the basal and IGF-1-stimulated states. We applied stable isotope labeling by amino acids in cell culture (SILAC) for the accurate quantitation of changes in protein phosphorylation. We found ~10% of the 6262 unique phosphorylation sites detected to be regulated by IGF-1. These regulated sites included previously reported substrates of the insulin/IGF-1 signalling pathway, as well as novel substrates including Nuclear Factor I X and Semaphorin-4B. In silico prediction suggests the protein kinase B (PKB), protein kinase C (PKC), and cyclin-dependent kinase (CDK) as the main mediators of these phosphorylation events. Importantly, we found preferential phosphorylation patterns depending on the presence of either IRS-1 or IRS-2, which was associated with specific sets of kinases involved in signal transduction downstream of these substrates such as PDHK1, MAPK3, and PKD1 for IRS-1, and PIN1 and PKC beta for IRS-2. Overall, by generating a comprehensive phosphoproteomic profile from brown preadipocyte cells in response to IGF-1 stimulation, we reveal both common and distinct insulin/IGF-1 signalling events mediated by specific IRS proteins. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. Generation of Ground Truth Datasets for the Analysis of 3d Point Clouds in Urban Scenes Acquired via Different Sensors

    Science.gov (United States)

    Xu, Y.; Sun, Z.; Boerner, R.; Koch, T.; Hoegner, L.; Stilla, U.

    2018-04-01

    In this work, we report a novel way of generating ground truth datasets for analyzing point clouds from different sensors and for the validation of algorithms. Instead of directly labeling a large number of 3D points, which requires time-consuming manual work, a multi-resolution 3D voxel grid for the testing site is generated. Then, with the help of a set of basic labeled points from the reference dataset, we can generate a 3D labeled space of the entire testing site at different resolutions. Specifically, an octree-based voxel structure is applied to voxelize the annotated reference point cloud, by which all the points are organized in 3D grids of multiple resolutions. When automatically annotating new testing point clouds, a voting-based approach is applied to the labeled points within the multi-resolution voxels in order to assign a semantic label to the 3D space represented by each voxel. Lastly, robust line- and plane-based fast registration methods are developed for aligning point clouds obtained via various sensors. Benefiting from the labeled 3D spatial information, we can easily create new annotated 3D point clouds from different sensors of the same scene directly by considering the corresponding labels of the 3D space in which the points are located, which is convenient for the validation and evaluation of algorithms related to point cloud interpretation and semantic segmentation.
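
    The voting-based label transfer described above can be sketched at a single voxel resolution as follows (the full method uses an octree of multiple resolutions); the voxel size and the simple majority-vote rule are assumptions.

```python
# Sketch of voting-based label transfer at one voxel resolution.
import numpy as np
from collections import Counter, defaultdict

def voxel_keys(points, voxel_size):
    # Map each 3D point to the integer index of the voxel containing it.
    return [tuple(k) for k in np.floor(points / voxel_size).astype(int)]

def transfer_labels(ref_points, ref_labels, new_points, voxel_size=0.5):
    # Collect the reference labels falling into each voxel.
    votes = defaultdict(list)
    for key, label in zip(voxel_keys(ref_points, voxel_size), ref_labels):
        votes[key].append(label)
    # Each voxel takes its majority label; new points inherit that label.
    voxel_label = {k: Counter(v).most_common(1)[0][0] for k, v in votes.items()}
    return [voxel_label.get(k, -1)       # -1 marks unlabeled space
            for k in voxel_keys(new_points, voxel_size)]

ref = np.random.rand(1000, 3) * 10       # placeholder annotated reference cloud
lab = np.random.randint(0, 4, 1000)      # placeholder semantic labels
new = np.random.rand(200, 3) * 10        # placeholder cloud from another sensor
print(transfer_labels(ref, lab, new)[:10])
```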

  17. Multiple-octave spanning high-energy mid-IR supercontinuum generation in bulk quadratic nonlinear crystals

    DEFF Research Database (Denmark)

    Zhou, Binbin; Bache, Morten

    2016-01-01

    Bright and broadband coherent mid-IR radiation is important for exciting and probing molecular vibrations. Using cascaded nonlinearities in conventional quadratic nonlinear crystals like lithium niobate, self-defocusing near-IR solitons have been demonstrated that led to very broadband...

  18. Feature diagnosticity and task context shape activity in human scene-selective cortex.

    Science.gov (United States)

    Lowe, Matthew X; Gallivan, Jason P; Ferber, Susanne; Cant, Jonathan S

    2016-01-15

    Scenes are constructed from multiple visual features, yet previous research investigating scene processing has often focused on the contributions of single features in isolation. In the real world, features rarely exist independently of one another and likely converge to inform scene identity in unique ways. Here, we utilize fMRI and pattern classification techniques to examine the interactions between task context (i.e., attend to diagnostic global scene features; texture or layout) and high-level scene attributes (content and spatial boundary) to test the novel hypothesis that scene-selective cortex represents multiple visual features, the importance of which varies according to their diagnostic relevance across scene categories and task demands. Our results show for the first time that scene representations are driven by interactions between multiple visual features and high-level scene attributes. Specifically, univariate analysis of scene-selective cortex revealed that task context and feature diagnosticity shape activity differentially across scene categories. Examination using multivariate decoding methods revealed results consistent with univariate findings, but also evidence for an interaction between high-level scene attributes and diagnostic visual features within scene categories. Critically, these findings suggest visual feature representations are not distributed uniformly across scene categories but are shaped by task context and feature diagnosticity. Thus, we propose that scene-selective cortex constructs a flexible representation of the environment by integrating multiple diagnostically relevant visual features, the nature of which varies according to the particular scene being perceived and the goals of the observer. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Maxwellian Eye Fixation during Natural Scene Perception

    Directory of Open Access Journals (Sweden)

    Jean Duchesne

    2012-01-01

    Full Text Available When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell’s law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes.

  20. Maxwellian Eye Fixation during Natural Scene Perception

    Science.gov (United States)

    Duchesne, Jean; Bouvier, Vincent; Guillemé, Julien; Coubard, Olivier A.

    2012-01-01

    When we explore a visual scene, our eyes make saccades to jump rapidly from one area to another and fixate regions of interest to extract useful information. While the role of fixation eye movements in vision has been widely studied, their random nature has been a hitherto neglected issue. Here we conducted two experiments to examine the Maxwellian nature of eye movements during fixation. In Experiment 1, eight participants were asked to perform free viewing of natural scenes displayed on a computer screen while their eye movements were recorded. For each participant, the probability density function (PDF) of eye movement amplitude during fixation obeyed the law established by Maxwell for describing molecule velocity in gas. Only the mean amplitude of eye movements varied with expertise, which was lower in experts than novice participants. In Experiment 2, two participants underwent fixed time, free viewing of natural scenes and of their scrambled version while their eye movements were recorded. Again, the PDF of eye movement amplitude during fixation obeyed Maxwell's law for each participant and for each scene condition (normal or scrambled). The results suggest that eye fixation during natural scene perception describes a random motion regardless of top-down or of bottom-up processes. PMID:23226987
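
    The central analysis in both versions of this record, testing whether fixation eye-movement amplitudes follow a Maxwell distribution, can be sketched with SciPy's built-in maxwell distribution; the synthetic amplitudes below are placeholders for recorded gaze data.

```python
# Sketch of fitting a Maxwell distribution to fixation eye-movement amplitudes.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder "amplitudes" drawn from a Maxwell distribution; real data would
# be the per-fixation eye-movement amplitudes recorded by the eye tracker.
amplitudes = stats.maxwell.rvs(scale=0.15, size=2000, random_state=rng)

loc, scale = stats.maxwell.fit(amplitudes, floc=0)   # fit with location fixed at 0
ks_stat, p_value = stats.kstest(amplitudes, "maxwell", args=(loc, scale))
print(f"fitted scale = {scale:.3f}, KS p-value = {p_value:.3f}")
```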

  1. Selective scene perception deficits in a case of topographical disorientation.

    Science.gov (United States)

    Robin, Jessica; Lowe, Matthew X; Pishdadian, Sara; Rivest, Josée; Cant, Jonathan S; Moscovitch, Morris

    2017-07-01

    Topographical disorientation (TD) is a neuropsychological condition characterized by an inability to find one's way, even in familiar environments. One common contributing cause of TD is landmark agnosia, a visual recognition impairment specific to scenes and landmarks. Although many cases of TD with landmark agnosia have been documented, little is known about the perceptual mechanisms which lead to selective deficits in recognizing scenes. In the present study, we test LH, a man who exhibits TD and landmark agnosia, on measures of scene perception that require selectively attending to either the configural or surface properties of a scene. Compared to healthy controls, LH demonstrates perceptual impairments when attending to the configuration of a scene, but not when attending to its surface properties, such as the pattern of the walls or whether the ground is sand or grass. In contrast, when focusing on objects instead of scenes, LH demonstrates intact perception of both geometric and surface properties. This study demonstrates that in a case of TD and landmark agnosia, the perceptual impairments are selective to the layout of scenes, providing insight into the mechanism of landmark agnosia and scene-selective perceptual processes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Broadly tunable picosecond ir source

    International Nuclear Information System (INIS)

    Campillo, A.J.; Hyer, R.C.; Shapiro, S.L.

    1979-01-01

    A completely grating-tuned (1.9 to 2.4 μm) picosecond traveling-wave IR generator capable of controlled spectral bandwidth operation down to the Fourier-transform limit is reported. Subsequent down-conversion in CdSe extends tuning to 10 to 20 μm.

  3. Crime Scenes as Augmented Reality

    DEFF Research Database (Denmark)

    Sandvik, Kjetil

    2010-01-01

    Using the concept of augmented reality, this article will investigate how places in various ways have become augmented by means of different mediatization strategies. Augmentation of reality implies an enhancement of the places' emotional character: a certain mood, atmosphere or narrative surplus......, physical damage: they are all readable and interpretable signs. As augmented reality the crime scene carries a narrative which at first is hidden and must be revealed. Due to the process of investigation and the detective's ability to reason and deduce, the crime scene as place is reconstructed as virtual...

  4. Semi-Supervised Multitask Learning for Scene Recognition.

    Science.gov (United States)

    Lu, Xiaoqiang; Li, Xuelong; Mou, Lichao

    2015-09-01

    Scene recognition has been widely studied to understand visual information from the level of objects and their relationships. Toward scene recognition, many methods have been proposed. They, however, encounter difficulty in improving accuracy, mainly due to two limitations: 1) lack of analysis of intrinsic relationships across different scales, say, the initial input and its down-sampled versions and 2) existence of redundant features. This paper develops a semi-supervised learning mechanism to reduce the above two limitations. To address the first limitation, we propose a multitask model to integrate scene images of different resolutions. For the second limitation, we build a model of sparse feature selection-based manifold regularization (SFSMR) to select the optimal information and preserve the underlying manifold structure of data. SFSMR coordinates the advantages of sparse feature selection and manifold regularization. Finally, we link the multitask model and SFSMR, and propose the semi-supervised learning method to reduce the two limitations. Experimental results report the improvements of the accuracy in scene recognition.

  5. HOMA1-IR and HOMA2-IR indexes in identifying insulin resistance and metabolic syndrome - Brazilian Metabolic Syndrome Study (BRAMS) [Índices HOMA1-IR e HOMA2-IR para identificação de resistência à insulina e síndrome metabólica - Estudo Brasileiro de Síndrome Metabólica (BRAMS)]

    OpenAIRE

    Geloneze B.; Vasques A.C.J.; Stabe C.F.C.; Pareja J.C.; de Lima Rosado L.E.F.P.; de Queiroz E.C.; Tambascia M.A.

    2009-01-01

    Objective: To investigate cut-off values for HOMA1-IR and HOMA2-IR to identify insulin resistance (IR) and metabolic syndrome (MS), and to assess the association of the indexes with components of the MS. Methods: Nondiabetic subjects from the Brazilian Metabolic Syndrome Study were studied (n = 1,203, 18 to 78 years). The cut-off values for IR were determined from the 90th percentile in the healthy group (n = 297) and, for MS, a ROC curve was generated for the total sample. Results: In the he...
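
    The abstract above is truncated before its results; for orientation, the standard HOMA1-IR formula (fasting insulin in µU/mL times fasting glucose in mmol/L, divided by 22.5) is sketched below. HOMA2-IR has no closed form (it comes from the HOMA2 computer model), and the example values are illustrative, not the BRAMS cut-offs.

```python
# Standard HOMA1-IR formula (example values are illustrative, not BRAMS cut-offs).
def homa1_ir(fasting_insulin_uU_ml: float, fasting_glucose_mmol_l: float) -> float:
    """HOMA1-IR = fasting insulin (uU/mL) x fasting glucose (mmol/L) / 22.5."""
    return fasting_insulin_uU_ml * fasting_glucose_mmol_l / 22.5

print(round(homa1_ir(12.0, 5.4), 2))   # 2.88 for a hypothetical subject
```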

  6. Extending laser plasma accelerators into the mid-IR spectral domain with a next-generation ultra-fast CO2 laser

    Science.gov (United States)

    Pogorelsky, I. V.; Babzien, M.; Ben-Zvi, I.; Polyanskiy, M. N.; Skaritka, J.; Tresca, O.; Dover, N. P.; Najmudin, Z.; Lu, W.; Cook, N.; Ting, A.; Chen, Y.-H.

    2016-03-01

    Expanding the scope of relativistic plasma research to wavelengths longer than the λ ≈ 0.8-1.1 μm range covered by conventional mode-locked solid-state lasers would offer attractive opportunities due to the quadratic scaling of the ponderomotive electron energy and critical plasma density with λ. Answering this quest, a next-generation mid-IR laser project is being advanced at the BNL ATF as a part of the user facility upgrade. We discuss the technical approach to this conceptually new 100 TW, 100 fs, λ = 9-11 μm CO2 laser BESTIA (Brookhaven Experimental Supra-Terawatt Infrared at ATF) that encompasses several innovations applied for the first time to molecular gas lasers. BESTIA will enable new regimes of laser plasma accelerators. One example is shock-wave ion acceleration (SWA) from gas jets. We review ongoing efforts to achieve stable, monoenergetic proton acceleration by dynamically shaping the plasma density profile from a hydrogen gas target with laser-produced blast waves. At its full power, 100 TW BESTIA promises to achieve proton beams at an energy exceeding 200 MeV. In addition to ion acceleration in over-critical plasma, the ultra-intense mid-IR BESTIA will open up new opportunities in driving wakefields in tenuous plasmas, expanding the landscape of laser wakefield accelerator (LWFA) studies into the unexplored long-wavelength spectral domain. Simple wavelength scaling suggests that a 100 TW CO2 laser beam will be capable of efficiently generating plasma ‘bubbles’ a thousand times greater in volume compared with a near-IR solid state laser of an equivalent power. Combined with a femtosecond electron linac available at the ATF, this wavelength scaling will facilitate the study of external seeding and staging of LWFAs.
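
    The quadratic wavelength scaling of the ponderomotive energy invoked above can be illustrated with the standard strong-field estimate U_p[eV] ≈ 9.33e-14 · I[W/cm²] · λ[μm]²; the intensity used below is an assumed value, not a BESTIA specification.

```python
# Quadratic wavelength scaling of the ponderomotive energy, U_p ~ I * lambda^2.
def ponderomotive_energy_eV(intensity_W_cm2: float, wavelength_um: float) -> float:
    return 9.33e-14 * intensity_W_cm2 * wavelength_um ** 2

I0 = 1e18                              # W/cm^2, an assumed focused intensity
for lam in (0.8, 10.0):                # near-IR solid-state vs CO2-like driver
    print(f"lambda = {lam:4.1f} um -> U_p ~ {ponderomotive_energy_eV(I0, lam):.2e} eV")
```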

  7. Political conservatism predicts asymmetries in emotional scene memory.

    Science.gov (United States)

    Mills, Mark; Gonzalez, Frank J; Giuseffi, Karl; Sievert, Benjamin; Smith, Kevin B; Hibbing, John R; Dodd, Michael D

    2016-06-01

    Variation in political ideology has been linked to differences in attention to and processing of emotional stimuli, with stronger responses to negative versus positive stimuli (negativity bias) the more politically conservative one is. As memory is enhanced by attention, such findings predict that memory for negative versus positive stimuli should similarly be enhanced the more conservative one is. The present study tests this prediction by having participants study 120 positive, negative, and neutral scenes in preparation for a subsequent memory test. On the memory test, the same 120 scenes were presented along with 120 new scenes and participants were to respond whether a scene was old or new. Results on the memory test showed that negative scenes were more likely to be remembered than positive scenes, though, this was true only for political conservatives. That is, a larger negativity bias was found the more conservative one was. The effect was sizeable, explaining 45% of the variance across subjects in the effect of emotion. These findings demonstrate that the relationship between political ideology and asymmetries in emotion processing extend to memory and, furthermore, suggest that exploring the extent to which subject variation in interactions among emotion, attention, and memory is predicted by conservatism may provide new insights into theories of political ideology. Published by Elsevier B.V.

  8. Being There: (Re)Making the Assessment Scene

    Science.gov (United States)

    Gallagher, Chris W.

    2011-01-01

    I use Burkean analysis to show how neoliberalism undermines faculty assessment expertise and underwrites testing industry expertise in the current assessment scene. Contending that we cannot extricate ourselves from our limited agency in this scene until we abandon the familiar "stakeholder" theory of power, I propose a rewriting of the…

  9. Combined IR-Raman vs vibrational sum-frequency heterospectral correlation spectroscopy

    Science.gov (United States)

    Roy, Sandra; Beutier, Clémentine; Hore, Dennis K.

    2018-06-01

    Vibrational sum-frequency generation spectroscopy is a valuable probe of surface structure, particularly when the same molecules are present in one of the adjacent bulk solid or solution phases. As a result of the non-centrosymmetric requirement of SFG, the signal generated is a marker of the extent to which the molecules are ordered in an arrangement that breaks the up-down symmetry at the surface. In cases where the accompanying changes in the bulk are of interest in understanding and interpreting the surface structure, simultaneous analysis of the bulk IR absorption or bulk Raman scattering is helpful, and may be used in heterospectral surface-bulk two-dimensional correlation. We demonstrate that, in such cases, generating a new type of bulk spectrum that combines the IR and Raman amplitudes is a better candidate than the individual IR and Raman spectra for the purpose of correlation with the SFG signal.
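
    One plausible way to build the combined IR-Raman bulk spectrum mentioned above is an element-wise product of the two amplitudes (motivated by SFG's joint dependence on IR and Raman activity); this combination rule is an assumption here, and the paper's exact construction may differ.

```python
# Sketch: combine bulk IR and Raman amplitudes sampled on a common axis.
import numpy as np

def combined_ir_raman(ir_amplitude, raman_amplitude):
    ir = np.asarray(ir_amplitude, dtype=float)
    raman = np.asarray(raman_amplitude, dtype=float)
    # Geometric-mean style combination; the exact rule is an assumption here.
    return np.sqrt(np.clip(ir * raman, 0.0, None))
```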

  10. [Perception of objects and scenes in age-related macular degeneration].

    Science.gov (United States)

    Tran, T H C; Boucart, M

    2012-01-01

    Vision related quality of life questionnaires suggest that patients with AMD exhibit difficulties in finding objects and in mobility. In the natural environment, objects seldom appear in isolation. They appear in a spatial context which may obscure them in part or place obstacles in the patient's path. Furthermore, the luminance of a natural scene varies as a function of the hour of the day and the light source, which can alter perception. This study aims to evaluate recognition of objects and natural scenes by patients with AMD, by using photographs of such scenes. Studies demonstrate that AMD patients are able to categorize scenes as nature scenes or urban scenes and to discriminate indoor from outdoor scenes with a high degree of precision. They detect objects better in isolation, in color, or against a white background than in their natural contexts. These patients encounter more difficulties than normally sighted individuals in detecting objects in a low-contrast, black-and-white scene. These results may have implications for rehabilitation, for layout of texts and magazines for the reading-impaired and for the rearrangement of the spatial environment of older AMD patients in order to facilitate mobility, finding objects and reducing the risk of falls. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  11. CCD and IR array controllers

    Science.gov (United States)

    Leach, Robert W.; Low, Frank J.

    2000-08-01

    A family of controllers has been developed that is powerful and flexible enough to operate a wide range of CCD and IR focal plane arrays in a variety of ground-based applications. These include fast readout of small CCD and IR arrays for adaptive optics applications, slow readout of large CCD and IR mosaics, and single CCD and IR array operation at low background/low noise regimes as well as high background/high speed regimes. The CCD and IR controllers have a common digital core based on user-programmable digital signal processors that are used to generate the array clocking and signal processing signals customized for each application. A fiber optic link passes image data and commands between the controller and VME or PCI interface boards resident in a host computer. CCD signal processing is done with a dual slope integrator operating at speeds of up to one Megapixel per second per channel. Signal processing of IR arrays is done either with a dual channel video processor or a four channel video processor that has built-in image memory and a coadder to 32-bit precision for operating high background arrays. Recent developments underway include the implementation of a fast fiber optic data link operating at a speed of 12.5 Megapixels per second for fast image transfer from the controller to the host computer, and supporting image acquisition software and device drivers for the PCI interface board for the Sun Solaris, Linux and Windows 2000 operating systems.

  12. A next generation Ultra-Fast Flash Observatory (UFFO-100) for IR/optical observations of the rise phase of gamma-ray bursts

    DEFF Research Database (Denmark)

    Grossan, B.; Park, I.H.; Ahmad, S.

    2012-01-01

    generation of rapid-response space observatory instruments. We list science topics motivating our instruments, those that require rapid optical-IR GRB response, including: a survey of GRB rise shapes/times, measurements of optical bulk Lorentz factors, investigation of magnetic dominated (vs. non-magnetic) jet... for a next generation space observatory as a second instrument on a low-earth orbit spacecraft, with a 120 kg instrument mass budget. Restricted to relatively modest mass, power, and launch resources, we find that a coded mask X-ray camera with 1024 cm2 of detector area could rapidly locate about 64...

  13. Text Detection in Natural Scene Images by Stroke Gabor Words.

    Science.gov (United States)

    Yi, Chucai; Tian, Yingli

    2011-01-01

    In this paper, we propose a novel algorithm, based on stroke components and descriptive Gabor filters, to detect text regions in natural scene images. Text characters and strings are constructed by stroke components as basic units. Gabor filters are used to describe and analyze the stroke components in text characters or strings. We define a suitability measurement to analyze the confidence of Gabor filters in describing stroke components and the suitability of Gabor filters on an image window. From the training set, we compute a set of Gabor filters that can describe principal stroke components of text by their parameters. Then a K-means algorithm is applied to cluster the descriptive Gabor filters. The clustering centers are defined as Stroke Gabor Words (SGWs) to provide a universal description of stroke components. By suitability evaluation on positive and negative training samples respectively, each SGW generates a pair of characteristic distributions of suitability measurements. On a testing natural scene image, heuristic layout analysis is applied first to extract candidate image windows. Then we compute the principal SGWs for each image window to describe its principal stroke components. Characteristic distributions generated by principal SGWs are used to classify text or non-text windows. Experimental results on benchmark datasets demonstrate that our algorithm can handle complex backgrounds and variant text patterns (font, color, scale, etc.).
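
    Forming Stroke Gabor Words by clustering descriptive Gabor-filter parameters with K-means can be sketched as follows; the (frequency, orientation) parameter grid and the number of clusters are illustrative, and the suitability-based filter selection from training stroke components is omitted.

```python
# Sketch of clustering Gabor-filter parameters into Stroke Gabor Words.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical (frequency, orientation) parameters of filters judged suitable
# for describing stroke components in a training set.
params = np.array([(f, t) for f in (0.1, 0.2, 0.3, 0.4)
                          for t in np.linspace(0, np.pi, 8, endpoint=False)])

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(params)
sgw_centers = kmeans.cluster_centers_   # each row parameterizes one SGW filter
print(sgw_centers)
```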

  14. Construction and Optimization of Three-Dimensional Disaster Scenes within Mobile Virtual Reality

    Directory of Open Access Journals (Sweden)

    Ya Hu

    2018-06-01

    Full Text Available Because mobile virtual reality (VR) is both mobile and immersive, three-dimensional (3D) visualizations of disaster scenes based in mobile VR enable users to perceive and recognize disaster environments faster and better than is possible with other methods. To achieve immersion and prevent users from feeling dizzy, such visualizations require a high scene-rendering frame rate. However, the existing related visualization work cannot provide a sufficient solution for this purpose. This study focuses on the construction and optimization of a 3D disaster scene in order to satisfy the high frame-rate requirements for the rendering of 3D disaster scenes in mobile VR. First, the design of a plugin-free browser/server (B/S) architecture for 3D disaster scene construction and visualization based in mobile VR is presented. Second, certain key technologies for scene optimization are discussed, including diverse modes of scene data representation, representation optimization of mobile scenes, and adaptive scheduling of mobile scenes. By means of these technologies, smartphones with various performance levels can achieve higher scene-rendering frame rates and improved visual quality. Finally, using a flood disaster as an example, a plugin-free prototype system was developed, and experiments were conducted. The experimental results demonstrate that a 3D disaster scene constructed via the methods addressed in this study has a sufficiently high scene-rendering frame rate to satisfy the requirements for rendering a 3D disaster scene in mobile VR.
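
    The adaptive scheduling idea above can be sketched as a simple per-tile level-of-detail (LOD) choice driven by viewer distance and the measured frame rate; the thresholds and three-level LOD scheme are illustrative assumptions, not the system's actual policy.

```python
# Sketch of adaptive LOD scheduling from viewer distance and measured frame rate.
def choose_lod(distance_m: float, current_fps: float, target_fps: float = 60.0) -> int:
    """Return 0 (full detail), 1 (medium) or 2 (coarse) for a scene tile."""
    lod = 0 if distance_m < 100 else 1 if distance_m < 500 else 2
    if current_fps < 0.8 * target_fps:   # device struggling: coarsen one level
        lod = min(lod + 1, 2)
    return lod

print(choose_lod(distance_m=250, current_fps=42))   # -> 2 on a slow device
```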

  15. Monitoring combat wound healing by IR hyperspectral imaging

    Science.gov (United States)

    Howle, Chris R.; Spear, Abigail M.; Gazi, Ehsan; Crane, Nicole J.

    2016-03-01

    In recent conflicts, battlefield injuries consist largely of extensive soft-tissue injuries from blasts and high energy projectiles, including gunshot wounds. Repair of these large, traumatic wounds requires aggressive surgical treatment, including multiple surgical debridements to remove devitalised tissue and to reduce bacterial load. Identifying those patients with wound complications, such as infection and impaired healing, could greatly assist health care teams in providing the most appropriate and personalised care for combat casualties. Candidate technologies to enable this benefit include the fusion of imaging and optical spectroscopy to enable rapid identification of key markers. Hence, a novel system based on IR negative contrast imaging (NCI) is presented that employs an optical parametric oscillator (OPO) source comprising a periodically-poled LiNbO3 (PPLN) crystal. The crystal operates in the shortwave and midwave IR spectral regions (ca. 1.5 - 1.9 μm and 2.4 - 3.8 μm, respectively). Wavelength tuning is achieved by translating the crystal within the pump beam. System size and complexity are minimised by the use of single element detectors and the intracavity OPO design. Images are composed by raster scanning the monochromatic beam over the scene of interest; the reflection and/or absorption of the incident radiation by target materials and their surrounding environment provide a method for spatial location. Initial results using the NCI system to characterise wound biopsies are presented here.

  16. High-Energy, Multi-Octave-Spanning Mid-IR Sources via Adiabatic Difference Frequency Generation

    Science.gov (United States)

    2016-10-17

    ... adiabatic difference frequency generation (ADFG) stage, illustrated in Fig. 2. This system represents a very simple extension of a near-IR OPCPA system to octave-spanning mid-IR, requiring ... retrieved, as shown in Fig. 10. For illustration, 3 pulse shapes were selected. First, a simple linear chirp was applied to show that the pulse can be ...

  17. Generation and mid-IR measurement of a gas-phase to predict security parameters of aviation jet fuel.

    Science.gov (United States)

    Gómez-Carracedo, M P; Andrade, J M; Calviño, M A; Prada, D; Fernández, E; Muniategui, S

    2003-07-27

    The worldwide use of kerosene as aviation jet fuel makes its safety considerations of the utmost importance, not only for aircraft security but also for workers' health (chronic and/or acute exposure). As most kerosene risks come from its vapours, this work focuses on predicting seven characteristics (flash point, freezing point, % of aromatics and four distillation points) which assess its potential hazards. Two experimental devices were implemented in order to, first, generate a kerosene vapour phase and, then, to measure its mid-IR spectrum. All the working conditions required to generate the gas phase were optimised using either a univariate or a multivariate (SIMPLEX) approach. Next, multivariate prediction models were deployed using partial least squares regression, and it was found that both the average prediction errors and precision parameters were satisfactory, almost always well below the reference figures.
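
    A minimal sketch of the calibration step described above, using partial least squares regression from scikit-learn on stand-in spectra; the synthetic data and the number of latent variables are placeholders rather than the paper's optimised model.

```python
# Sketch: PLS regression mapping mid-IR vapour-phase spectra to a fuel
# property (e.g. flash point), evaluated by cross-validation. Data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(1)
n_samples, n_wavenumbers = 60, 300
X = rng.normal(size=(n_samples, n_wavenumbers))               # absorbance spectra
y = X[:, :5].sum(axis=1) + 0.1 * rng.normal(size=n_samples)   # stand-in property

pls = PLSRegression(n_components=5)
y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSECV: {rmsecv:.3f}")
```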

  18. Fixations on objects in natural scenes: dissociating importance from salience

    Directory of Open Access Journals (Sweden)

    Bernard Marius 't Hart

    2013-07-01

    The relation of selective attention to understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region to be fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object’s importance for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify luminance contrast of either a frequently named (common/important) or a rarely named (rare/unimportant) object, track the observers’ eye movements during scene viewing and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases of contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object’s importance suggests an analogy to the effects of word frequency on landing positions in reading.

  19. Alkoholio ir tabako pasiūlos ir paklausos teisinio reguliavimo raida Lietuvos Respublikoje: problemos ir sprendimai

    OpenAIRE

    Mockevičius, Arminas

    2014-01-01

    This master's thesis, "Development of the Legal Regulation of Alcohol and Tobacco Supply and Demand in the Republic of Lithuania: Problems and Solutions", was written by Arminas Mockevičius, a student of the Public Law master's programme. The thesis was written in Vilnius in 2014 at the Institute of Constitutional and Administrative Law, Faculty of Law, Mykolas Romeris University, under the supervision of Dr. Gintautas Vilkelis; it comprises 98 pages. The aim of the thesis is to reveal the development of the legal regulation of alcohol and tobacco supply and dem...

  20. Affective salience can reverse the effects of stimulus-driven salience on eye movements in complex scenes

    Directory of Open Access Journals (Sweden)

    Yaqing Niu

    2012-09-01

    In natural vision both stimulus features and cognitive/affective factors influence an observer's attention. However, the relationship between stimulus-driven (bottom-up) and cognitive/affective (top-down) factors remains controversial: Can affective salience counteract strong visual stimulus signals and shift attention allocation irrespective of bottom-up features? Is there any difference between negative and positive scenes in terms of their influence on attention deployment? Here we examined the impact of affective factors on eye movement behavior, to understand the competition between visual stimulus-driven salience and affective salience and how they affect gaze allocation in complex scene viewing. Building on our previous research, we compared predictions generated by a visual salience model with measures indexing participant-identified emotionally meaningful regions of each image. To examine how eye movement behaviour differs for negative, positive, and neutral scenes, we examined the influence of affective salience in capturing attention according to emotional valence. Taken together, our results show that affective salience can override stimulus-driven salience and overall emotional valence can determine attention allocation in complex scenes. These findings are consistent with the hypothesis that cognitive/affective factors play a dominant role in active gaze control.

  1. A statistical model for radar images of agricultural scenes

    Science.gov (United States)

    Frost, V. S.; Shanmugan, K. S.; Holtzman, J. C.; Stiles, J. A.

    1982-01-01

    The presently derived and validated statistical model for radar images containing many different homogeneous fields predicts the probability density functions of radar images of entire agricultural scenes, thereby allowing histograms of large scenes composed of a variety of crops to be described. Seasat-A SAR images of agricultural scenes are accurately predicted by the model on the basis of three assumptions: each field has the same SNR, all target classes cover approximately the same area, and the true reflectivity characterizing each individual target class is a uniformly distributed random variable. The model is expected to be useful in the design of data processing algorithms and for scene analysis using radar images.
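
    The model's assumptions lend themselves to a small simulation: each homogeneous field gets a uniformly drawn true reflectivity, multiplicative speckle follows a gamma distribution for an N-look intensity image, and the scene histogram is the mixture over fields. The field count, look number, and reflectivity range below are illustrative choices, not values from the paper.

```python
# Toy SAR-scene simulation: gamma-distributed N-look speckle per field,
# uniformly distributed true reflectivity across fields, mixed histogram.
import numpy as np

rng = np.random.default_rng(0)
n_fields, pixels_per_field, looks = 12, 5000, 4

reflectivity = rng.uniform(0.2, 1.0, size=n_fields)   # true sigma0 per field
scene = np.concatenate([
    sigma * rng.gamma(shape=looks, scale=1.0 / looks, size=pixels_per_field)
    for sigma in reflectivity
])

hist, edges = np.histogram(scene, bins=50, density=True)
print("scene mean intensity:", scene.mean())
```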

  2. The neural bases of spatial frequency processing during scene perception

    Science.gov (United States)

    Kauffmann, Louise; Ramanoël, Stephen; Peyrin, Carole

    2014-01-01

    Theories on visual perception agree that scenes are processed in terms of spatial frequencies. Low spatial frequencies (LSF) carry coarse information whereas high spatial frequencies (HSF) carry fine details of the scene. However, how and where spatial frequencies are processed within the brain remain unresolved questions. The present review addresses these issues and aims to identify the cerebral regions differentially involved in low and high spatial frequency processing, and to clarify their attributes during scene perception. Results from a number of behavioral and neuroimaging studies suggest that spatial frequency processing is lateralized in both hemispheres, with the right and left hemispheres predominantly involved in the categorization of LSF and HSF scenes, respectively. There is also evidence that spatial frequency processing is retinotopically mapped in the visual cortex. HSF scenes (as opposed to LSF) activate occipital areas in relation to foveal representations, while categorization of LSF scenes (as opposed to HSF) activates occipital areas in relation to more peripheral representations. Concomitantly, a number of studies have demonstrated that LSF information may reach high-order areas rapidly, allowing an initial coarse parsing of the visual scene, which could then be sent back through feedback into the occipito-temporal cortex to guide finer HSF-based analysis. Finally, the review addresses spatial frequency processing within scene-selective regions of the occipito-temporal cortex. PMID:24847226
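
    The LSF/HSF decomposition referred to throughout the review can be sketched with a Gaussian low-pass filter and its residual; the cutoff used here is an arbitrary pixel-based choice, whereas studies normally specify it in cycles per degree.

```python
# Sketch: split a grayscale scene image into low and high spatial frequency layers.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=8.0):
    """image: 2-D grayscale array. Returns (LSF, HSF) versions."""
    lsf = gaussian_filter(image.astype(float), sigma=sigma)
    hsf = image.astype(float) - lsf
    return lsf, hsf

if __name__ == "__main__":
    scene = np.random.default_rng(0).random((256, 256))  # stand-in for a scene photo
    lsf, hsf = split_spatial_frequencies(scene)
    print(lsf.shape, hsf.std())
```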

  3. Detection of chromatic and luminance distortions in natural scenes.

    Science.gov (United States)

    Jennings, Ben J; Wang, Karen; Menzies, Samantha; Kingdom, Frederick A A

    2015-09-01

    A number of studies have measured visual thresholds for detecting spatial distortions applied to images of natural scenes. In one study, Bex [J. Vis. 10(2), 1 (2010), doi:10.1167/10.2.23] measured sensitivity to sinusoidal spatial modulations of image scale. Here, we measure sensitivity to sinusoidal scale distortions applied to the chromatic, luminance, or both layers of natural scene images. We first established that sensitivity does not depend on whether the undistorted comparison image was of the same or of a different scene. Next, we found that, when the luminance but not chromatic layer was distorted, performance was the same regardless of whether the chromatic layer was present, absent, or phase-scrambled; in other words, the chromatic layer, in whatever form, did not affect sensitivity to the luminance layer distortion. However, when the chromatic layer was distorted, sensitivity was higher when the luminance layer was intact compared to when absent or phase-scrambled. These detection threshold results complement the appearance of periodic distortions of the image scale: when the luminance layer is distorted visibly, the scene appears distorted, but when the chromatic layer is distorted visibly, there is little apparent scene distortion. We conclude that (a) observers have a built-in sense of how a normal image of a natural scene should appear, and (b) the detection of distortion in, as well as the apparent distortion of, natural scene images is mediated predominantly by the luminance layer and not chromatic layer.

  4. Saliency-Guided Detection of Unknown Objects in RGB-D Indoor Scenes.

    Science.gov (United States)

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Xi, Ning

    2015-08-27

    This paper studies the problem of detecting unknown objects within indoor environments in an active and natural manner. The visual saliency scheme utilizing both color and depth cues is proposed to arouse the interests of the machine system for detecting unknown objects at salient positions in a 3D scene. The 3D points at the salient positions are selected as seed points for generating object hypotheses using the 3D shape. We perform multi-class labeling on a Markov random field (MRF) over the voxels of the 3D scene, combining cues from object hypotheses and 3D shape. The results from MRF are further refined by merging the labeled objects, which are spatially connected and have high correlation between color histograms. Quantitative and qualitative evaluations on two benchmark RGB-D datasets illustrate the advantages of the proposed method. The experiments of object detection and manipulation performed on a mobile manipulator validate its effectiveness and practicability in robotic applications.
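
    The following toy center-surround map gives a flavour of colour-plus-depth saliency for selecting seed points; it is not the authors' scheme, and the scales, fusion weight, and function names are assumptions.

```python
# Toy RGB-D saliency: center-surround differences on colour channels and depth,
# fused with a fixed weight; the maximum of the map would seed object hypotheses.
import numpy as np
from scipy.ndimage import gaussian_filter

def center_surround(channel, s_center=2, s_surround=16):
    c = gaussian_filter(channel, s_center)
    s = gaussian_filter(channel, s_surround)
    return np.abs(c - s)

def rgbd_saliency(rgb, depth, w_depth=0.5):
    """rgb: (H, W, 3) floats in [0, 1]; depth: (H, W) floats in metres."""
    color_sal = sum(center_surround(rgb[..., k]) for k in range(3)) / 3.0
    depth_sal = center_surround(depth / (depth.max() + 1e-6))
    sal = (1 - w_depth) * color_sal + w_depth * depth_sal
    return sal / (sal.max() + 1e-6)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sal = rgbd_saliency(rng.random((120, 160, 3)), rng.random((120, 160)) * 4)
    seed_y, seed_x = np.unravel_index(np.argmax(sal), sal.shape)
    print("most salient pixel:", seed_y, seed_x)
```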

  5. Visual search for changes in scenes creates long-term, incidental memory traces.

    Science.gov (United States)

    Utochkin, Igor S; Wolfe, Jeremy M

    2018-05-01

    Humans are very good at remembering large numbers of scenes over substantial periods of time. But how good are they at remembering changes to scenes? In this study, we tested scene memory and change detection two weeks after initial scene learning. In Experiments 1-3, scenes were learned incidentally during visual search for change. In Experiment 4, observers explicitly memorized scenes. At test, after two weeks observers were asked to discriminate old from new scenes, to recall a change that they had detected in the study phase, or to detect a newly introduced change in the memorization experiment. Next, they performed a change detection task, usually looking for the same change as in the study period. Scene recognition memory was found to be similar in all experiments, regardless of the study task. In Experiment 1, more difficult change detection produced better scene memory. Experiments 2 and 3 supported a "depth-of-processing" account for the effects of initial search and change detection on incidental memory for scenes. Of most interest, change detection was faster during the test phase than during the study phase, even when the observer had no explicit memory of having found that change previously. This result was replicated in two of our three change detection experiments. We conclude that scenes can be encoded incidentally as well as explicitly and that changes in those scenes can leave measurable traces even if they are not explicitly recalled.

  6. Scene reassembly after multimodal digitization and pipeline evaluation using photorealistic rendering

    DEFF Research Database (Denmark)

    Stets, Jonathan Dyssel; Dal Corso, Alessandro; Nielsen, Jannik Boll

    2017-01-01

    Transparent objects require acquisition modalities that are very different from the ones used for objects with more diffuse reflectance properties. Digitizing a scene where objects must be acquired with different modalities requires scene reassembly after reconstruction of the object surfaces. This reassembly of a scene that was picked apart for scanning seems unexplored. We contribute with a multimodal digitization pipeline for scenes that require this step of reassembly. Our pipeline includes measurement of bidirectional reflectance distribution functions and high dynamic range imaging of the lighting environment. This enables pixelwise comparison of photographs of the real scene with renderings of the digital version of the scene. Such quantitative evaluation is useful for verifying acquired material appearance and reconstructed surface geometry, which is an important aspect of digital content...

  7. Visual search in scenes involves selective and non-selective pathways

    Science.gov (United States)

    Wolfe, Jeremy M; Vo, Melissa L-H; Evans, Karla K; Greene, Michelle R

    2010-01-01

    How do we find objects in scenes? For decades, visual search models have been built on experiments in which observers search for targets, presented among distractor items, isolated and randomly arranged on blank backgrounds. Are these models relevant to search in continuous scenes? This paper argues that the mechanisms that govern artificial, laboratory search tasks do play a role in visual search in scenes. However, scene-based information is used to guide search in ways that had no place in earlier models. Search in scenes may be best explained by a dual-path model: A “selective” path in which candidate objects must be individually selected for recognition and a “non-selective” path in which information can be extracted from global / statistical information. PMID:21227734

  8. Radiative transfer model for heterogeneous 3-D scenes

    Science.gov (United States)

    Kimes, D. S.; Kirchner, J. A.

    1982-01-01

    A general mathematical framework for simulating processes in heterogeneous 3-D scenes is presented. Specifically, a model was designed and coded for application to radiative transfers in vegetative scenes. The model is unique in that it predicts (1) the directional spectral reflectance factors as a function of the sensor's azimuth and zenith angles and the sensor's position above the canopy, (2) the spectral absorption as a function of location within the scene, and (3) the directional spectral radiance as a function of the sensor's location within the scene. The model was shown to follow known physical principles of radiative transfer. Initial verification of the model as applied to a soybean row crop showed that the simulated directional reflectance data corresponded relatively well in gross trends to the measured data. However, the model can be greatly improved by incorporating more sophisticated and realistic anisotropic scattering algorithms

  9. Two Distinct Scene-Processing Networks Connecting Vision and Memory.

    Science.gov (United States)

    Baldassano, Christopher; Esteva, Andre; Fei-Fei, Li; Beck, Diane M

    2016-01-01

    A number of regions in the human brain are known to be involved in processing natural scenes, but the field has lacked a unifying framework for understanding how these different regions are organized and interact. We provide evidence from functional connectivity and meta-analyses for a new organizational principle, in which scene processing relies upon two distinct networks that split the classically defined parahippocampal place area (PPA). The first network of strongly connected regions consists of the occipital place area/transverse occipital sulcus and posterior PPA, which contain retinotopic maps and are not strongly coupled to the hippocampus at rest. The second network consists of the caudal inferior parietal lobule, retrosplenial complex, and anterior PPA, which connect to the hippocampus (especially anterior hippocampus), and are implicated in both visual and nonvisual tasks, including episodic memory and navigation. We propose that these two distinct networks capture the primary functional division among scene-processing regions, between those that process visual features from the current view of a scene and those that connect information from a current scene view with a much broader temporal and spatial context. This new framework for understanding the neural substrates of scene-processing bridges results from many lines of research, and makes specific functional predictions.

  10. Image registration of naval IR images

    Science.gov (United States)

    Rodland, Arne J.

    1996-06-01

    In a real world application an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between sensor and reference Inertial Navigation Unit, servo inaccuracies, etc. For a high resolution imaging sensor this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points with diverging movement from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the image such that the output from the algorithm could be compared with the artificially added stabilization errors.
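
    A sketch of the frame-to-frame estimation described above, built from generic OpenCV primitives rather than the scanner-specific implementation: high-contrast points are tracked with pyramidal Lucas-Kanade optical flow, the median displacement is taken as the stabilization error, and strongly deviating points are flagged as candidate moving objects. The thresholds and the function name are assumptions.

```python
# Sketch: estimate global image shift (stabilization error) between two frames
# from tracked high-contrast points; outliers are treated as moving objects.
import cv2
import numpy as np

def estimate_shift(prev_gray, curr_gray, max_points=200, outlier_px=3.0):
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_points,
                                  qualityLevel=0.01, minDistance=7)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray, pts, None)
    good = status.ravel() == 1
    flow = (nxt - pts).reshape(-1, 2)[good]
    shift = np.median(flow, axis=0)                 # global stabilization error
    residual = np.linalg.norm(flow - shift, axis=1)
    movers = flow[residual > outlier_px]            # points on moving objects
    return shift, movers

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = (rng.random((240, 320)) * 255).astype(np.uint8)
    b = np.roll(a, shift=(2, -3), axis=(0, 1))      # simulate a small jitter
    print(estimate_shift(a, b))
```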

  11. Cognitive organization of roadway scenes : an empirical study.

    NARCIS (Netherlands)

    Gundy, C.M.

    1995-01-01

    This report describes six studies investigating the cognitive organization of roadway scenes. These scenes were represented by still photographs taken on a number of roads outside of built-up areas. Seventy-eight drivers, stratified by age and sex to simulate the Dutch driving population,

  12. A view not to be missed: Salient scene content interferes with cognitive restoration

    Science.gov (United States)

    Van der Jagt, Alexander P. N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration. PMID:28723975

  13. A view not to be missed: Salient scene content interferes with cognitive restoration.

    Directory of Open Access Journals (Sweden)

    Alexander P N Van der Jagt

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that perceptual saliency of scene content can function as an empirically derived indicator of fascination. Saliency levels were established by measuring speed of scene category detection using a Go/No-Go detection paradigm. Experiment 1 shows that built scenes are more salient than natural scenes. Experiment 2 replicates these findings using greyscale images, ruling out a colour-based response strategy, and additionally shows that built objects in natural scenes affect saliency to a greater extent than the reverse. Experiment 3 demonstrates that the saliency of scene content is directly linked to cognitive restoration using an established restoration paradigm. Overall, these findings demonstrate an important link between the saliency of scene content and related cognitive restoration.

  14. Performance Benefits with Scene-Linked HUD Symbology: An Attentional Phenomenon?

    Science.gov (United States)

    Levy, Jonathan L.; Foyle, David C.; McCann, Robert S.; Null, Cynthia H. (Technical Monitor)

    1999-01-01

    Previous research has shown that in a simulated flight task, navigating a path defined by ground markers while maintaining a target altitude is more accurate when an altitude indicator appears in a virtual "scene-linked" format (projected symbology moving as if it were part of the out-the-window environment) compared to the fixed-location, superimposed format found on present-day HUDs (Foyle, McCann & Shelden, 1995). One explanation of the scene-linked performance advantage is that attention can be divided between scene-linked symbology and the outside world more efficiently than between standard (fixed-position) HUD symbology and the outside world. The present study tested two alternative explanations by manipulating the location of the scene-linked HUD symbology relative to the ground path markers. Scene-linked symbology yielded better ground path-following performance than standard fixed-location superimposed symbology regardless of whether the scene-linked symbology appeared directly along the ground path or at various distances off the path. The results support the explanation that the performance benefits found with scene-linked symbology are attentional.

  15. Scene complexity: influence on perception, memory, and development in the medial temporal lobe

    Directory of Open Access Journals (Sweden)

    Xiaoqian J Chai

    2010-03-01

    Regions in the medial temporal lobe (MTL) and prefrontal cortex (PFC) are involved in memory formation for scenes in both children and adults. The development in children and adolescents of successful memory encoding for scenes has been associated with increased activation in PFC, but not MTL, regions. However, evidence suggests that a functional subregion of the MTL that supports scene perception, located in the parahippocampal gyrus (PHG), goes through a prolonged maturation process. Here we tested the hypothesis that maturation of scene perception supports the development of memory for complex scenes. Scenes were characterized by their levels of complexity defined by the number of unique object categories depicted in the scene. Recognition memory improved with age, in participants ages 8-24, for high, but not low, complexity scenes. High-complexity compared to low-complexity scenes activated a network of regions including the posterior PHG. The difference in activations for high- versus low-complexity scenes increased with age in the right posterior PHG. Finally, activations in right posterior PHG were associated with age-related increases in successful memory formation for high-, but not low-, complexity scenes. These results suggest that functional maturation of the right posterior PHG plays a critical role in the development of enduring long-term recollection for high-complexity scenes.

  16. Crime Scene Investigation.

    Science.gov (United States)

    Harris, Barbara; Kohlmeier, Kris; Kiel, Robert D.

    Casting students in grades 5 through 12 in the roles of reporters, lawyers, and detectives at the scene of a crime, this interdisciplinary activity involves participants in the intrigue and drama of crime investigation. Using a hands-on, step-by-step approach, students work in teams to investigate a crime and solve a mystery. Through role-playing…

  17. Changing scenes: memory for naturalistic events following change blindness.

    Science.gov (United States)

    Mäntylä, Timo; Sundström, Anna

    2004-11-01

    Research on scene perception indicates that viewers often fail to detect large changes to scene regions when these changes occur during a visual disruption such as a saccade or a movie cut. In two experiments, we examined whether this relative inability to detect changes would produce systematic biases in event memory. In Experiment 1, participants decided whether two successively presented images were the same or different, followed by a memory task, in which they recalled the content of the viewed scene. In Experiment 2, participants viewed a short video, in which an actor carried out a series of daily activities, and central scenes' attributes were changed during a movie cut. A high degree of change blindness was observed in both experiments, and these effects were related to scene complexity (Experiment 1) and level of retrieval support (Experiment 2). Most important, participants reported the changed, rather than the initial, event attributes following a failure in change detection. These findings suggest that attentional limitations during encoding contribute to biases in episodic memory.

  18. Sensory substitution: the spatial updating of auditory scenes ‘mimics’ the spatial updating of visual scenes

    Directory of Open Access Journals (Sweden)

    Achille Pasqualotto

    2016-04-01

    Visual-to-auditory sensory substitution is used to convey visual information through audition, and it was initially created to compensate for blindness; it consists of software converting the visual images captured by a video-camera into the equivalent auditory images, or ‘soundscapes’. Here, it was used by blindfolded sighted participants to learn the spatial position of simple shapes depicted in images arranged on the floor. Very few studies have used sensory substitution to investigate spatial representation, while it has been widely used to investigate object recognition. Additionally, with sensory substitution we could study the performance of participants actively exploring the environment through audition, rather than passively localising sound sources. Blindfolded participants egocentrically learnt the position of six images by using sensory substitution and then a judgement of relative direction task (JRD) was used to determine how this scene was represented. This task consists of imagining being in a given location, oriented in a given direction, and pointing towards the required image. Before performing the JRD task, participants explored a map that provided allocentric information about the scene. Although spatial exploration was egocentric, surprisingly we found that performance in the JRD task was better for allocentric perspectives. This suggests that the egocentric representation of the scene was updated. This result is in line with previous studies using visual and somatosensory scenes, thus supporting the notion that different sensory modalities produce equivalent spatial representation(s). Moreover, our results have practical implications to improve training methods with sensory substitution devices.

  19. Mental Layout Extrapolations Prime Spatial Processing of Scenes

    Science.gov (United States)

    Gottesman, Carmela V.

    2011-01-01

    Four experiments examined whether scene processing is facilitated by layout representation, including layout that was not perceived but could be predicted based on a previous partial view (boundary extension). In a priming paradigm (after Sanocki, 2003), participants judged objects' distances in photographs. In Experiment 1, full scenes (target),…

  20. Colour agnosia impairs the recognition of natural but not of non-natural scenes.

    Science.gov (United States)

    Nijboer, Tanja C W; Van Der Smagt, Maarten J; Van Zandvoort, Martine J E; De Haan, Edward H F

    2007-03-01

    Scene recognition can be enhanced by appropriate colour information, yet the level of visual processing at which colour exerts its effects is still unclear. It has been suggested that colour supports low-level sensory processing, while others have claimed that colour information aids semantic categorization and recognition of objects and scenes. We investigated the effect of colour on scene recognition in a case of colour agnosia, M.A.H. In a scene identification task, participants had to name images of natural or non-natural scenes in six different formats. Irrespective of scene format, M.A.H. was much slower on the natural than on the non-natural scenes. As expected, neither M.A.H. nor control participants showed any difference in performance for the non-natural scenes. However, for the natural scenes, appropriate colour facilitated scene recognition in control participants (i.e., shorter reaction times), whereas M.A.H.'s performance did not differ across formats. Our data thus support the hypothesis that the effect of colour occurs at the level of learned associations.

  1. Semantic guidance of eye movements in real-world scenes

    OpenAIRE

    Hwang, Alex D.; Wang, Hsueh-Cheng; Pomplun, Marc

    2011-01-01

    The perception of objects in our visual world is influenced by not only their low-level visual features such as shape and color, but also their high-level features such as meaning and semantic relations among them. While it has been shown that low-level features in real-world scenes guide eye movements during scene inspection and search, the influence of semantic similarity among scene objects on eye movements in such situations has not been investigated. Here we study guidance of eye movemen...

  2. Picture models for 2-scene comics creating system

    Directory of Open Access Journals (Sweden)

    Miki UENO

    2015-03-01

    Recently, computer understanding of pictures and stories has become one of the most important research topics in computer science. However, there is little research on human-like understanding by computers, because pictures have no fixed format and contain a more lyrical aspect than natural language. For picture understanding, comics are a suitable target because they consist of clear, simple story plots and separated scenes. In this paper, we propose two different types of picture models for a 2-scene comics creating system. We also show how the 2-scene comics creating system can be applied by means of the proposed picture models.

  3. IR-360 nuclear power plant safety functions and component classification

    International Nuclear Information System (INIS)

    Yousefpour, F.; Shokri, F.; Soltani, H.

    2010-01-01

    The IR-360 nuclear power plant as a 2-loop PWR of 360 MWe power generation capacity is under design in MASNA Company. For design of the IR-360 structures, systems and components (SSCs), the codes and standards and their design requirements must be determined. It is a prerequisite to classify the IR-360 safety functions and safety grade of structures, systems and components correctly for selecting and adopting the suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as USNRC standard system to determine the IR-360 safety functions and to formulate the principles of the IR-360 component classification in accordance with the safety philosophy and feature of the IR-360. By implementation of defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of specific codes and standards are used in design process of IR-360 SSCs by design engineers of MASNA Company. In this paper, individual determination of the IR-360 safety functions and definition of the classification procedures and roles are presented. Implementation of this work which is described with example ensures the safety and reliability of the IR-360 nuclear power plant.

  4. IR-360 nuclear power plant safety functions and component classification

    Energy Technology Data Exchange (ETDEWEB)

    Yousefpour, F., E-mail: fyousefpour@snira.co [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of); Shokri, F.; Soltani, H. [Management of Nuclear Power Plant Construction Company (MASNA) (Iran, Islamic Republic of)

    2010-10-15

    The IR-360 nuclear power plant as a 2-loop PWR of 360 MWe power generation capacity is under design in MASNA Company. For design of the IR-360 structures, systems and components (SSCs), the codes and standards and their design requirements must be determined. It is a prerequisite to classify the IR-360 safety functions and safety grade of structures, systems and components correctly for selecting and adopting the suitable design codes and standards. This paper refers to the IAEA nuclear safety codes and standards as well as USNRC standard system to determine the IR-360 safety functions and to formulate the principles of the IR-360 component classification in accordance with the safety philosophy and feature of the IR-360. By implementation of defined classification procedures for the IR-360 SSCs, the appropriate design codes and standards are specified. The requirements of specific codes and standards are used in design process of IR-360 SSCs by design engineers of MASNA Company. In this paper, individual determination of the IR-360 safety functions and definition of the classification procedures and roles are presented. Implementation of this work which is described with example ensures the safety and reliability of the IR-360 nuclear power plant.

  5. Scene Categorization in Alzheimer's Disease: A Saccadic Choice Task

    Directory of Open Access Journals (Sweden)

    Quentin Lenoble

    2015-01-01

    Aims: We investigated the performance in scene categorization of patients with Alzheimer's disease (AD) using a saccadic choice task. Method: 24 patients with mild AD, 28 age-matched controls and 26 young people participated in the study. The participants were presented pairs of coloured photographs and were asked to make a saccadic eye movement to the picture corresponding to the target scene (natural vs. urban, indoor vs. outdoor). Results: The patients' performance did not differ from chance for natural scenes. Differences between young and older controls and patients with AD were found in accuracy but not saccadic latency. Conclusions: The results are interpreted in terms of cerebral reorganization in the prefrontal and temporo-occipital cortex of patients with AD, but also in terms of impaired processing of visual global properties of scenes.

  6. Strong-Field Physics with Mid-IR Fields

    Directory of Open Access Journals (Sweden)

    Benjamin Wolter

    2015-06-01

    Strong-field physics is currently experiencing a shift towards the use of mid-IR driving wavelengths. This is because they permit conducting experiments unambiguously in the quasistatic regime and enable exploiting the effects related to ponderomotive scaling of electron recollisions. Initial measurements taken in the mid-IR immediately led to a deeper understanding of photoionization and allowed a discrimination among different theoretical models. Ponderomotive scaling of rescattering has enabled new avenues towards time-resolved probing of molecular structure. Essential for this paradigm shift was the convergence of two experimental tools: (1) intense mid-IR sources that can create high-energy photons and electrons while operating within the quasistatic regime and (2) detection systems that can detect the generated high-energy particles and image the entire momentum space of the interaction in full coincidence. Here, we present a unique combination of these two essential ingredients, namely, a 160-kHz mid-IR source and a reaction microscope detection system, to present an experimental methodology that provides an unprecedented three-dimensional view of strong-field interactions. The system is capable of generating and detecting electron energies that span a six-order-of-magnitude dynamic range. We demonstrate the versatility of the system by investigating electron recollisions, the core process that drives strong-field phenomena, at both low (meV) and high (hundreds of eV) energies. The low-energy region is used to investigate recently discovered low-energy structures, while the high-energy electrons are used to probe atomic structure via laser-induced electron diffraction. Moreover, we present, for the first time, the correlated momentum distribution of electrons from nonsequential double ionization driven by mid-IR pulses.
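
    For reference, the standard expression behind the "ponderomotive scaling" mentioned above (not taken from this paper) is:

```latex
% Ponderomotive energy of an electron in a laser field of amplitude E_0,
% frequency omega, intensity I and wavelength lambda:
\[
  U_p \;=\; \frac{e^{2} E_0^{2}}{4 m_e \omega^{2}}
  \;\approx\; 9.33\times10^{-14}\,
  I\,[\mathrm{W\,cm^{-2}}]\;\bigl(\lambda\,[\mu\mathrm{m}]\bigr)^{2}\ \mathrm{eV}
\]
% so moving the driving wavelength from 0.8 um to 3.1 um at fixed intensity
% raises the recollision energy scale by roughly a factor of 15.
```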

  7. A hierarchical probabilistic model for rapid object categorization in natural scenes.

    Directory of Open Access Journals (Sweden)

    Xiaofu He

    Humans can categorize objects in complex natural scenes within 100-150 ms. This amazing ability of rapid categorization has motivated many computational models. Most of these models require extensive training to obtain a decision boundary in a very high dimensional (e.g., ∼6,000 in a leading model) feature space and often categorize objects in natural scenes by categorizing the context that co-occurs with objects when objects do not occupy large portions of the scenes. It is thus unclear how humans achieve rapid scene categorization. To address this issue, we developed a hierarchical probabilistic model for rapid object categorization in natural scenes. In this model, a natural object category is represented by a coarse hierarchical probability distribution (PD), which includes PDs of object geometry and spatial configuration of object parts. Object parts are encoded by PDs of a set of natural object structures, each of which is a concatenation of local object features. Rapid categorization is performed as statistical inference. Since the model uses a very small number (∼100) of structures for even complex object categories such as animals and cars, it requires little training and is robust in the presence of large variations within object categories and in their occurrences in natural scenes. Remarkably, we found that the model categorized animals in natural scenes and cars in street scenes with a near human-level performance. We also found that the model located animals and cars in natural scenes, thus overcoming a flaw in many other models which is to categorize objects in natural context by categorizing contextual features. These results suggest that coarse PDs of object categories based on natural object structures and statistical operations on these PDs may underlie the human ability to rapidly categorize scenes.

  8. System and method for extracting dominant orientations from a scene

    Science.gov (United States)

    Straub, Julian; Rosman, Guy; Freifeld, Oren; Leonard, John J.; Fisher, III; , John W.

    2017-05-30

    In one embodiment, a method of identifying the dominant orientations of a scene comprises representing a scene as a plurality of directional vectors. The scene may comprise a three-dimensional representation of a scene, and the plurality of directional vectors may comprise a plurality of surface normals. The method further comprises determining, based on the plurality of directional vectors, a plurality of orientations describing the scene. The determined plurality of orientations explains the directionality of the plurality of directional vectors. In certain embodiments, the plurality of orientations may have independent axes of rotation. The plurality of orientations may be determined by representing the plurality of directional vectors as lying on a mathematical representation of a sphere, and inferring the parameters of a statistical model to adapt the plurality of orientations to explain the positioning of the plurality of directional vectors lying on the mathematical representation of the sphere.
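
    Under a simplifying assumption, the idea can be sketched as spherical k-means on surface normals (k-means on unit vectors with re-normalised centroids), with the cluster centres standing in for the dominant orientations; the patent itself describes a richer statistical inference, so this is only an illustration and the function name is hypothetical.

```python
# Sketch: group unit surface normals on the sphere; cluster centres approximate
# the scene's dominant orientations. A normal and its flip are treated alike.
import numpy as np

def spherical_kmeans(normals, k=3, iters=50, seed=0):
    """normals: (n, 3) array of unit surface normals."""
    rng = np.random.default_rng(seed)
    centers = normals[rng.choice(len(normals), k, replace=False)]
    for _ in range(iters):
        # assign each normal to the closest centre by absolute cosine similarity
        sim = np.abs(normals @ centers.T)
        labels = sim.argmax(axis=1)
        for j in range(k):
            members = normals[labels == j]
            if len(members):
                # flip members to a consistent hemisphere before averaging
                signs = np.sign(members @ centers[j])
                m = (members * signs[:, None]).mean(axis=0)
                centers[j] = m / np.linalg.norm(m)
    return centers, labels

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    walls = np.repeat(np.eye(3), 200, axis=0) + 0.05 * rng.normal(size=(600, 3))
    walls /= np.linalg.norm(walls, axis=1, keepdims=True)
    centers, _ = spherical_kmeans(walls, k=3)
    print(np.round(centers, 2))
```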

  9. AR goggles make crime scene investigation a desk job

    OpenAIRE

    Aron, Jacob; NORTHFIELD, Dean

    2012-01-01

    CRIME scene investigators could one day help solve murders without leaving the office. A pair of augmented reality glasses could allow local police to virtually tag objects in a crime scene, and build a clean record of the scene in 3D video before evidence is removed for processing. The system, being developed by Oytun Akman and colleagues at the Delft University of Technology in the Netherlands, consists of a head-mounted display receiving 3D video from a pair of attached cameras controll...

  10. Oculomotor capture during real-world scene viewing depends on cognitive load.

    Science.gov (United States)

    Matsukura, Michi; Brockmole, James R; Boot, Walter R; Henderson, John M

    2011-03-25

    It has been claimed that gaze control during scene viewing is largely governed by stimulus-driven, bottom-up selection mechanisms. Recent research, however, has strongly suggested that observers' top-down control plays a dominant role in attentional prioritization in scenes. A notable exception to this strong top-down control is oculomotor capture, where visual transients in a scene draw the eyes. One way to test whether oculomotor capture during scene viewing is independent of an observer's top-down goal setting is to reduce observers' cognitive resource availability. In the present study, we examined whether increasing observers' cognitive load influences the frequency and speed of oculomotor capture during scene viewing. In Experiment 1, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by a new object suddenly appeared in a scene. Similarly, in Experiment 2, we tested whether increasing observers' cognitive load modulates the degree of oculomotor capture by an object's color change. In both experiments, the degree of oculomotor capture decreased as observers' cognitive resources were reduced. These results suggest that oculomotor capture during scene viewing is dependent on observers' top-down selection mechanisms. Copyright © 2011 Elsevier Ltd. All rights reserved.

  11. Gay and Lesbian Scene in Metelkova

    Directory of Open Access Journals (Sweden)

    Nataša Velikonja

    2013-09-01

    The article deals with the development of the gay and lesbian scene in ACC Metelkova, while specifying the preliminary aspects of establishing and building gay and lesbian activism associated with spatial issues. The struggle for space or occupying public space is vital for the gay and lesbian scene, as it provides not only the necessary socializing opportunities for gays and lesbians, but also does away with the historical hiding of homosexuality in the closet, in seclusion and silence. Because of their autonomy and long-term, continuous existence, homo-clubs at Metelkova contributed to the consolidation of the gay and lesbian scene in Slovenia and significantly improved the opportunities for cultural, social and political expression of gays and lesbians. Such a synthesis of the cultural, social and political, further intensified in Metelkova, and characterizes the gay and lesbian community in Slovenia from the very outset of gay and lesbian activism in 1984. It is this long-term synthesis that keeps this community in Slovenia so vital and politically resilient.

  12. Small-size pedestrian detection in large scene based on fast R-CNN

    Science.gov (United States)

    Wang, Shengke; Yang, Na; Duan, Lianghua; Liu, Lu; Dong, Junyu

    2018-04-01

    Pedestrian detection is a canonical sub-problem of object detection that has been in high demand in recent years. Although recent deep learning object detectors such as Fast/Faster R-CNN have shown excellent performance for general object detection, they have limited success for small-size pedestrian detection in large-view scenes. We find that the insufficient resolution of feature maps leads to unsatisfactory accuracy when handling small instances. In this paper, we investigate issues involving Fast R-CNN for pedestrian detection. Driven by these observations, we propose a very simple but effective baseline for pedestrian detection based on Fast R-CNN: the DPM detector is employed to generate proposals for accuracy, and a Fast R-CNN style network is trained to jointly optimize small-size pedestrian detection, with skip connections concatenating features from different layers to counteract the coarseness of the feature maps. Accuracy for small-size pedestrian detection in large real-world scenes is thereby improved in our research.
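
    A minimal PyTorch sketch of the skip-connection idea mentioned above: a deeper, coarser feature map is upsampled and concatenated with a shallower, higher-resolution one before the detection head. The module name and channel sizes are illustrative assumptions, not the authors' architecture.

```python
# Sketch: fuse a shallow high-resolution feature map with an upsampled deep one.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SkipConcat(nn.Module):
    def __init__(self, shallow_ch=256, deep_ch=512, out_ch=256):
        super().__init__()
        self.reduce = nn.Conv2d(shallow_ch + deep_ch, out_ch, kernel_size=1)

    def forward(self, shallow, deep):
        # upsample the coarse map to the shallow map's spatial size
        deep_up = F.interpolate(deep, size=shallow.shape[-2:],
                                mode="bilinear", align_corners=False)
        fused = torch.cat([shallow, deep_up], dim=1)
        return self.reduce(fused)

if __name__ == "__main__":
    shallow = torch.randn(1, 256, 80, 120)    # e.g. conv3-level features
    deep = torch.randn(1, 512, 20, 30)        # e.g. conv5-level features
    print(SkipConcat()(shallow, deep).shape)  # -> torch.Size([1, 256, 80, 120])
```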

  13. Decontamination and decommissioning of the enriched uranium solutions laboratory (IR-01 b)

    International Nuclear Information System (INIS)

    Diaz Arocas, P. P.; Sama Colao, J.; Garcia Diaz, A.; Torre Rodriguez, J.; Martinez, A.; Argiles, E.; Garrido Delgado, C.

    2010-01-01

    Once the decontamination and decommissioning actions for the Enriched Uranium Solutions Laboratory, attached to the CIEMAT radioactive installation IR-01, were completed, a final radiological survey of the laboratory was carried out. Based on the documentation generated, a modification of the IR-01 installation was requested in order to close its laboratory IR-01 b.

  14. Integration of heterogeneous features for remote sensing scene classification

    Science.gov (United States)

    Wang, Xin; Xiong, Xingnan; Ning, Chen; Shi, Aiye; Lv, Guofang

    2018-01-01

    Scene classification is one of the most important issues in remote sensing (RS) image processing. We find that features from different channels (shape, spectral, texture, etc.), levels (low-level and middle-level), or perspectives (local and global) could provide various properties for RS images, and then propose a heterogeneous feature framework to extract and integrate heterogeneous features of different types for RS scene classification. The proposed method is composed of three modules: (1) heterogeneous feature extraction, where three heterogeneous feature types, called DS-SURF-LLC, mean-Std-LLC, and MS-CLBP, are calculated; (2) heterogeneous feature fusion, where multiple kernel learning (MKL) is utilized to integrate the heterogeneous features; and (3) an MKL support vector machine classifier for RS scene classification. The proposed method is extensively evaluated on three challenging benchmark datasets (a 6-class dataset, a 12-class dataset, and a 21-class dataset), and the experimental results show that the proposed method leads to good classification performance. It produces good informative features to describe the RS image scenes. Moreover, the integration of heterogeneous features outperforms some state-of-the-art features on RS scene classification tasks.
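
    A simplified stand-in for the MKL fusion step: several precomputed kernels, one per feature type, are combined by a weighted sum and fed to an SVM with a precomputed kernel. Real MKL learns the weights jointly with the classifier; here they are fixed for brevity, and the features are random placeholders rather than DS-SURF-LLC or MS-CLBP descriptors.

```python
# Sketch: weighted combination of per-feature kernels + SVM with precomputed kernel.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel, chi2_kernel

rng = np.random.default_rng(0)
n = 120
feat_a = np.abs(rng.normal(size=(n, 64)))     # stand-in texture histograms
feat_b = np.abs(rng.normal(size=(n, 32)))     # stand-in colour histograms
signal = feat_a[:, 0] + feat_b[:, 0]
y = (signal > np.median(signal)).astype(int)  # two toy scene classes

kernels = [chi2_kernel(feat_a), rbf_kernel(feat_b, gamma=0.05)]
weights = [0.6, 0.4]                          # fixed instead of learned
K = sum(w * k for w, k in zip(weights, kernels))

train, test = np.arange(0, 90), np.arange(90, n)
clf = SVC(kernel="precomputed").fit(K[np.ix_(train, train)], y[train])
pred = clf.predict(K[np.ix_(test, train)])
print("accuracy:", (pred == y[test]).mean())
```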

  15. Audio scene segmentation for video with generic content

    Science.gov (United States)

    Niu, Feng; Goela, Naveen; Divakaran, Ajay; Abdel-Mottaleb, Mohamed

    2008-01-01

    In this paper, we present a content-adaptive audio texture based method to segment video into audio scenes. The audio scene is modeled as a semantically consistent chunk of audio data. Our algorithm is based on "semantic audio texture analysis." At first, we train GMM models for basic audio classes such as speech, music, etc. Then we define the semantic audio texture based on those classes. We study and present two types of scene changes, those corresponding to an overall audio texture change and those corresponding to a special "transition marker" used by the content creator, such as a short stretch of music in a sitcom or silence in dramatic content. Unlike prior work using genre specific heuristics, such as some methods presented for detecting commercials, we adaptively find out if such special transition markers are being used and if so, which of the base classes are being used as markers without any prior knowledge about the content. Our experimental results show that our proposed audio scene segmentation works well across a wide variety of broadcast content genres.
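
    The first step described above can be sketched as one Gaussian mixture model per basic audio class, trained on frame-level features and used to label new frames by maximum log-likelihood; random vectors stand in for real MFCC features here, and the class list is illustrative.

```python
# Sketch: per-class GMMs for basic audio classes, frame labelling by log-likelihood.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
classes = ["speech", "music", "silence"]
train = {c: rng.normal(loc=i, size=(400, 13)) for i, c in enumerate(classes)}

models = {c: GaussianMixture(n_components=4, random_state=0).fit(X)
          for c, X in train.items()}

def label_frames(frames):
    scores = np.stack([models[c].score_samples(frames) for c in classes])
    return [classes[i] for i in scores.argmax(axis=0)]

test = rng.normal(loc=1, size=(5, 13))        # frames resembling "music"
print(label_frames(test))
```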

  16. Simulator scene display evaluation device

    Science.gov (United States)

    Haines, R. F. (Inventor)

    1986-01-01

    An apparatus for aligning and calibrating scene displays in an aircraft simulator has a base on which all of the instruments for the aligning and calibrating are mounted. Laser directs beam at double right prism which is attached to pivoting support on base. The pivot point of the prism is located at the design eye point (DEP) of simulator during the aligning and calibrating. The objective lens in the base is movable on a track to follow the laser beam at different angles within the field of vision at the DEP. An eyepiece and a precision diopter are movable into a position behind the prism during the scene evaluation. A photometer or illuminometer is pivotable about the pivot into and out of position behind the eyepiece.

  17. Parallel programming of saccades during natural scene viewing: evidence from eye movement positions.

    Science.gov (United States)

    Wu, Esther X W; Gilani, Syed Omer; van Boxtel, Jeroen J A; Amihai, Ido; Chua, Fook Kee; Yen, Shih-Cheng

    2013-10-24

    Previous studies have shown that saccade plans during natural scene viewing can be programmed in parallel. This evidence comes mainly from temporal indicators, i.e., fixation durations and latencies. In the current study, we asked whether eye movement positions recorded during scene viewing also reflect parallel programming of saccades. As participants viewed scenes in preparation for a memory task, their inspection of the scene was suddenly disrupted by a transition to another scene. We examined whether saccades after the transition were invariably directed immediately toward the center or were contingent on saccade onset times relative to the transition. The results, which showed a dissociation in eye movement behavior between two groups of saccades after the scene transition, supported the parallel programming account. Saccades with relatively long onset times (>100 ms) after the transition were directed immediately toward the center of the scene, probably to restart scene exploration. Saccades with short onset times (<100 ms) ... programming of saccades during scene viewing. Additionally, results from the analyses of intersaccadic intervals were also consistent with the parallel programming hypothesis.

  18. PENDIDIKAN AKHLAK MUSLIMAT MELALUI SYA’IR : ANALISIS GENDER ATAS AJARAN SYI’IR MUSLIMAT KARYA NYAI WANIFAH KUDUS

    Directory of Open Access Journals (Sweden)

    Nur Said

    2016-03-01

    The results of this study are as follows: First, Syi’ir Muslimat was written by Nyai Wanifah, a woman who lived during the Dutch colonial era in the Islamic boarding school (pesantren) tradition in Kudus, Central Java. Second, the moral education values in Syi’ir Muslimat include, among others: (1) the importance of moral education; (2) the danger of women remaining uneducated; (3) the importance of learning for women at an early age; (4) the ethics of adorning oneself; (5) the danger of materialism; (6) the ethics of family relations; (7) from the house to reach heaven; (8) beware of the devil's trickery; (9) avoid adultery; (10) the importance of covering the aurat; (11) devotion to parents. Third, although there are some gender-biased passages in Syi’ir Muslimat (for example: (a) an explanation suggesting that women are lower than men in degree, (b) the claim that women are more talkative than men, and (c) the view that women only fit in the domestic sphere), in general the advice in the syi’ir is still very relevant in the present context, particularly in offering an alternative solution in responding to the nation's moral crisis, especially among the young female generation. Keywords: Syi’ir Muslimat, Character Education, Gender Analysis.

  19. Synchronous contextual irregularities affect early scene processing: replication and extension.

    Science.gov (United States)

    Mudrik, Liad; Shalgi, Shani; Lamy, Dominique; Deouell, Leon Y

    2014-04-01

    Whether contextual regularities facilitate perceptual stages of scene processing is widely debated, and empirical evidence is still inconclusive. Specifically, it was recently suggested that contextual violations affect early processing of a scene only when the incongruent object and the scene are presented asynchronously, creating expectations. We compared event-related potentials (ERPs) evoked by scenes that depicted a person performing an action using either a congruent or an incongruent object (e.g., a man shaving with a razor or with a fork) when scene and object were presented simultaneously. We also explored the role of attention in contextual processing by using a pre-cue to direct subjects' attention towards or away from the congruent/incongruent object. Subjects' task was to determine how many hands the person in the picture used in order to perform the action. We replicated our previous findings of frontocentral negativity for incongruent scenes that started ~210 ms post stimulus presentation, even earlier than previously found. Surprisingly, this incongruency ERP effect was negatively correlated with the reaction time cost on incongruent scenes. The results did not allow us to draw conclusions about the role of attention in detecting the regularity, due to a weak attention manipulation. By replicating the 200-300 ms incongruity effect with a new group of subjects at even earlier latencies than previously reported, the results strengthen the evidence for contextual processing during this time window, even when simultaneous presentation of the scene and object prevents the formation of prior expectations. We discuss possible methodological limitations that may account for previous failures to find this effect, and conclude that contextual information affects object model selection processes prior to full object identification, with semantic knowledge activation stages unfolding only later on. Copyright © 2014 Elsevier Ltd. All rights reserved.

  20. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects

    Directory of Open Access Journals (Sweden)

    Judith Schomaker

    2017-09-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object (“critical object”) being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) desire to own the object; (2) approach/avoid; (3) desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  1. Motivational Objects in Natural Scenes (MONS): A Database of >800 Objects.

    Science.gov (United States)

    Schomaker, Judith; Rau, Elias M; Einhäuser, Wolfgang; Wittmann, Bianca C

    2017-01-01

    In daily life, we are surrounded by objects with pre-existing motivational associations. However, these are rarely controlled for in experiments with natural stimuli. Research on natural stimuli would therefore benefit from stimuli with well-defined motivational properties; in turn, such stimuli also open new paths in research on motivation. Here we introduce a database of Motivational Objects in Natural Scenes (MONS). The database consists of 107 scenes. Each scene contains 2 to 7 objects placed at approximately equal distance from the scene center. Each scene was photographed creating 3 versions, with one object ("critical object") being replaced to vary the overall motivational value of the scene (appetitive, aversive, and neutral), while maintaining high visual similarity between the three versions. Ratings on motivation, valence, arousal and recognizability were obtained using internet-based questionnaires. Since the main objective was to provide stimuli of well-defined motivational value, three motivation scales were used: (1) Desire to own the object; (2) Approach/Avoid; (3) Desire to interact with the object. Three sets of ratings were obtained in independent sets of observers: for all 805 objects presented on a neutral background, for 321 critical objects presented in their scene context, and for the entire scenes. On the basis of the motivational ratings, objects were subdivided into aversive, neutral, and appetitive categories. The MONS database will provide a standardized basis for future studies on motivational value under realistic conditions.

  2. Super-Segments Based Classification of 3D Urban Street Scenes

    Directory of Open Access Journals (Sweden)

    Yu Zhou

    2012-12-01

    We address the problem of classifying 3D point clouds: given 3D urban street scenes gathered by a lidar sensor, we wish to assign a class label to every point. This work is a key step toward realizing applications in robots and cars, for example. In this paper, we present a novel approach to the classification of 3D urban scenes based on super-segments, which are generated from point clouds by two stages of segmentation: a clustering stage and a grouping stage. Then, six effective normal and dimension features that vary with object class are extracted at the super-segment level for training some general classifiers. We evaluate our method both quantitatively and qualitatively using the challenging Velodyne lidar data set. The results show that by only using normal and dimension features we can achieve better recognition than can be achieved with high-dimensional shape descriptors. We also evaluate the adoption of the MRF framework in our approach, but the experimental results indicate that this barely improved the accuracy of the classified results due to the sparse property of the super-segments.
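
    The super-segment pipeline sketched in the abstract (cluster the cloud, extract per-segment normal/dimension features, train a general classifier) can be illustrated in a few lines. The sketch below is not the authors' code: it assumes a NumPy point cloud and substitutes DBSCAN and a random forest from scikit-learn for the paper's clustering and classification stages.

        # Minimal sketch of super-segment-style classification of a 3D point cloud.
        # Assumptions: points is an (N, 3) array; training labels come from an
        # annotated subset. DBSCAN and RandomForest stand in for the paper's stages.
        import numpy as np
        from sklearn.cluster import DBSCAN
        from sklearn.ensemble import RandomForestClassifier

        def segment_features(pts):
            """Normal- and dimension-style features for one segment."""
            centered = pts - pts.mean(axis=0)
            evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
            l1, l2, l3 = np.maximum(evals, 1e-12)        # l1 >= l2 >= l3
            linearity = (l1 - l2) / l1
            planarity = (l2 - l3) / l1
            scatter = l3 / l1
            extent = pts.max(axis=0) - pts.min(axis=0)   # bounding-box dimensions
            return np.array([linearity, planarity, scatter, *extent])

        def build_segments(points, eps=0.5, min_samples=10):
            """Cluster the cloud into segments; returns a list of point subsets."""
            ids = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
            return [points[ids == k] for k in set(ids) if k != -1]

        # toy usage: a few synthetic blobs with placeholder class labels
        rng = np.random.default_rng(0)
        cloud = np.concatenate([rng.normal(loc=c, scale=0.2, size=(300, 3))
                                for c in rng.uniform(-5, 5, size=(6, 3))])
        segments = build_segments(cloud)
        X = np.array([segment_features(s) for s in segments])
        y = rng.integers(0, 3, size=len(X))              # placeholder class labels
        clf = RandomForestClassifier(n_estimators=50).fit(X, y)
        print(clf.predict(X))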

  3. Cortical Representations of Speech in a Multitalker Auditory Scene.

    Science.gov (United States)

    Puvvada, Krishna C; Simon, Jonathan Z

    2017-09-20

    The ability to parse a complex auditory scene into perceptual objects is facilitated by a hierarchical auditory system. Successive stages in the hierarchy transform an auditory scene of multiple overlapping sources, from peripheral tonotopically based representations in the auditory nerve, into perceptually distinct auditory-object-based representations in the auditory cortex. Here, using magnetoencephalography recordings from men and women, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in distinct hierarchical stages of the auditory cortex. Using systems-theoretic methods of stimulus reconstruction, we show that the primary-like areas in the auditory cortex contain dominantly spectrotemporal-based representations of the entire auditory scene. Here, both attended and ignored speech streams are represented with almost equal fidelity, and a global representation of the full auditory scene with all its streams is a better candidate neural representation than that of individual streams being represented separately. We also show that higher-order auditory cortical areas, by contrast, represent the attended stream separately and with significantly higher fidelity than unattended streams. Furthermore, the unattended background streams are more faithfully represented as a single unsegregated background object rather than as separated objects. Together, these findings demonstrate the progression of the representations and processing of a complex acoustic scene up through the hierarchy of the human auditory cortex. SIGNIFICANCE STATEMENT Using magnetoencephalography recordings from human listeners in a simulated cocktail party environment, we investigate how a complex acoustic scene consisting of multiple speech sources is represented in separate hierarchical stages of the auditory cortex. We show that the primary-like areas in the auditory cortex use a dominantly spectrotemporal-based representation of the entire auditory

  4. Separate and simultaneous adjustment of light qualities in a real scene

    NARCIS (Netherlands)

    Xia, L.; Pont, S.C.; Heynderickx, I.E.J.R.

    2017-01-01

    Humans are able to estimate light field properties in a scene in that they have expectations of the objects' appearance inside it. Previously, we probed such expectations in a real scene by asking whether a "probe object" fitted a real scene with regard to its lighting. But how well are observers

  5. Recognizing the Stranger: Recognition Scenes in the Gospel of John

    DEFF Research Database (Denmark)

    Larsen, Kasper Bro

    Recognizing the Stranger is the first monographic study of recognition scenes and motifs in the Gospel of John. The recognition type-scene (anagnōrisis) was a common feature in ancient drama and narrative, highly valued by Aristotle as a touching moment of truth, e.g., in Oedipus’ tragic self-recognition. ... The book examines the structures of the type-scene in order to show how Jesus’ true identity can be recognized behind the half-mask of his human appearance.

  6. On the Use of ROMOT—A RObotized 3D-MOvie Theatre—To Enhance Romantic Movie Scenes

    Directory of Open Access Journals (Sweden)

    Cristina Portalés

    2017-04-01

    In this paper, we introduce the use of ROMOT—a RObotic 3D-MOvie Theatre—to enhance love and sex movie scenes. ROMOT represents the next generation of movie theatres, where scenes are enhanced with multimodal content, also allowing audience interaction. ROMOT is highly versatile as it can support different setups, integrated hardware and content and, thus, it can be easily adapted to different groups and purposes. Regarding the setups, ROMOT currently supports a traditional movie setup (including first-person movies), a mixed reality environment, a virtual reality interactive environment, and an augmented reality mirror-based scene. Regarding the integrated hardware, the system currently integrates a variety of devices and displays that allow audiences to see, hear, smell, touch, and feel the movement, all synchronized with the film experience. Finally, regarding content, here we theorize about the use of ROMOT for romance-related interactive movies. Though the work presented in this sense is rather speculative, it might open new avenues of research for the film and other creative industries.

  7. Effects of aging on neural connectivity underlying selective memory for emotional scenes.

    Science.gov (United States)

    Waring, Jill D; Addis, Donna Rose; Kensinger, Elizabeth A

    2013-02-01

    Older adults show age-related reductions in memory for neutral items within complex visual scenes, but just like young adults, older adults exhibit a memory advantage for emotional items within scenes compared with the background scene information. The present study examined young and older adults' encoding-stage effective connectivity for selective memory of emotional items versus memory for both the emotional item and its background. In a functional magnetic resonance imaging (fMRI) study, participants viewed scenes containing either positive or negative items within neutral backgrounds. Outside the scanner, participants completed a memory test for items and backgrounds. Irrespective of scene content being emotionally positive or negative, older adults had stronger positive connections among frontal regions and from frontal regions to medial temporal lobe structures than did young adults, especially when items and backgrounds were subsequently remembered. These results suggest there are differences between young and older adults' connectivity accompanying the encoding of emotional scenes. Older adults may require more frontal connectivity to encode all elements of a scene rather than just encoding the emotional item. Published by Elsevier Inc.

  8. Scene text recognition in mobile applications by character descriptor and structure configuration.

    Science.gov (United States)

    Yi, Chucai; Tian, Yingli

    2014-07-01

    Text characters and strings in natural scenes can provide valuable information for many applications. Extracting text directly from natural scene images or videos is a challenging task because of diverse text patterns and variant background interferences. This paper proposes a method of scene text recognition from detected text regions. In text detection, our previously proposed algorithms are applied to obtain text regions from scene images. First, we design a discriminative character descriptor by combining several state-of-the-art feature detectors and descriptors. Second, we model character structure for each character class by designing stroke configuration maps. Our algorithm design is compatible with the application of scene text extraction in smart mobile devices. An Android-based demo system is developed to show the effectiveness of our proposed method on scene text information extraction from nearby objects. The demo system also provides some insight into algorithm design and performance improvement of scene text extraction. The evaluation results on benchmark data sets demonstrate that our proposed scheme of text recognition is comparable with the best existing methods.

  9. Radio Wave Propagation Scene Partitioning for High-Speed Rails

    Directory of Open Access Journals (Sweden)

    Bo Ai

    2012-01-01

    Radio wave propagation scene partitioning is necessary for wireless channel modeling. As far as we know, there are no standards of scene partitioning for high-speed rail (HSR) scenarios, and therefore we propose the radio wave propagation scene partitioning scheme for HSR scenarios in this paper. Based on our measurements along the Wuhan-Guangzhou HSR, Zhengzhou-Xian passenger-dedicated line, Shijiazhuang-Taiyuan passenger-dedicated line, and Beijing-Tianjin intercity line in China, whose operation speeds are above 300 km/h, and based on the investigations on Beijing South Railway Station, Zhengzhou Railway Station, Wuhan Railway Station, Changsha Railway Station, Xian North Railway Station, Shijiazhuang North Railway Station, Taiyuan Railway Station, and Tianjin Railway Station, we obtain an overview of HSR propagation channels and record many valuable measurement data for HSR scenarios. On the basis of these measurements and investigations, we partitioned the HSR scene into twelve scenarios. Further work on theoretical analysis based on radio wave propagation mechanisms, such as reflection and diffraction, may lead us to develop the standard of radio wave propagation scene partitioning for HSR. Our work can also be used as a basis for the wireless channel modeling and the selection of some key techniques for HSR systems.

  10. Unconscious analyses of visual scenes based on feature conjunctions.

    Science.gov (United States)

    Tachibana, Ryosuke; Noguchi, Yasuki

    2015-06-01

    To efficiently process a cluttered scene, the visual system analyzes statistical properties or regularities of visual elements embedded in the scene. It is controversial, however, whether those scene analyses could also work for stimuli unconsciously perceived. Here we show that our brain performs the unconscious scene analyses not only using a single featural cue (e.g., orientation) but also based on conjunctions of multiple visual features (e.g., combinations of color and orientation information). Subjects foveally viewed a stimulus array (duration: 50 ms) where 4 types of bars (red-horizontal, red-vertical, green-horizontal, and green-vertical) were intermixed. Although a conscious perception of those bars was inhibited by a subsequent mask stimulus, the brain correctly analyzed the information about color, orientation, and color-orientation conjunctions of those invisible bars. The information of those features was then used for the unconscious configuration analysis (statistical processing) of the central bars, which induced a perceptual bias and illusory feature binding in visible stimuli at peripheral locations. While statistical analyses and feature binding are normally 2 key functions of the visual system to construct coherent percepts of visual scenes, our results show that a high-level analysis combining those 2 functions is correctly performed by unconscious computations in the brain. (c) 2015 APA, all rights reserved.

  11. SAMPEG: a scene-adaptive parallel MPEG-2 software encoder

    NARCIS (Netherlands)

    Farin, D.S.; Mache, N.; With, de P.H.N.; Girod, B.; Bouman, C.A.; Steinbach, E.G.

    2001-01-01

    This paper presents a fully software-based MPEG-2 encoder architecture, which uses scene-change detection to optimize the Group-of-Pictures (GOP) structure for the actual video sequence. This feature enables easy, lossless edit cuts at scene-change positions and it also improves overall picture
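
    As a rough illustration of the scene-adaptive idea (not the SAMPEG implementation itself), the sketch below flags scene changes from frame-to-frame histogram differences and starts a new GOP, i.e. forces an I-frame, at each detected cut; the threshold, GOP length, and synthetic frames are assumptions.

        # Sketch: histogram-based scene-change detection used to start a new GOP.
        # Not the SAMPEG algorithm; threshold and frame source are assumptions.
        import numpy as np

        def histogram(frame, bins=32):
            h, _ = np.histogram(frame, bins=bins, range=(0, 255))
            return h / h.sum()

        def assign_frame_types(frames, cut_threshold=0.4, gop_length=12):
            """Return 'I' at scene changes (and at GOP boundaries), else 'P'."""
            types, since_i, prev = [], 0, None
            for frame in frames:
                h = histogram(frame)
                is_cut = prev is not None and np.abs(h - prev).sum() > cut_threshold
                if prev is None or is_cut or since_i >= gop_length:
                    types.append("I")
                    since_i = 0
                else:
                    types.append("P")
                    since_i += 1
                prev = h
            return types

        # toy usage: two synthetic "shots" with different brightness statistics
        rng = np.random.default_rng(1)
        shot_a = [rng.integers(0, 100, (64, 64)) for _ in range(10)]
        shot_b = [rng.integers(150, 255, (64, 64)) for _ in range(10)]
        print(assign_frame_types(shot_a + shot_b))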

  12. Viewing nature scenes positively affects recovery of autonomic function following acute-mental stress.

    Science.gov (United States)

    Brown, Daniel K; Barton, Jo L; Gladwell, Valerie F

    2013-06-04

    A randomized crossover study explored whether viewing different scenes prior to a stressor altered autonomic function during the recovery from the stressor. The two scenes were (a) nature (composed of trees, grass, fields) or (b) built (composed of man-made, urban scenes lacking natural characteristics) environments. Autonomic function was assessed using noninvasive techniques of heart rate variability; in particular, time domain analyses evaluated parasympathetic activity, using root-mean-square of successive differences (RMSSD). During stress, secondary cardiovascular markers (heart rate, systolic and diastolic blood pressure) showed significant increases from baseline which did not differ between the two viewing conditions. Parasympathetic activity, however, was significantly higher in recovery following the stressor in the viewing scenes of nature condition compared to viewing scenes depicting built environments (RMSSD; 50.0 ± 31.3 vs 34.8 ± 14.8 ms). Thus, viewing nature scenes prior to a stressor alters autonomic activity in the recovery period. The secondary aim was to examine autonomic function during viewing of the two scenes. Standard deviation of R-R intervals (SDRR), as change from baseline, during the first 5 min of viewing nature scenes was greater than during built scenes. Overall, this suggests that nature can elicit improvements in the recovery process following a stressor.
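
    RMSSD, the time-domain marker of parasympathetic activity reported above, is simply the root mean square of successive differences between adjacent R-R intervals; a minimal computation (with made-up interval values, not study data) looks like this:

        # RMSSD from a series of R-R intervals (in milliseconds).
        # The interval values below are illustrative, not study data.
        import numpy as np

        def rmssd(rr_intervals_ms):
            diffs = np.diff(np.asarray(rr_intervals_ms, dtype=float))
            return float(np.sqrt(np.mean(diffs ** 2)))

        rr = [812, 845, 790, 860, 880, 795, 805]   # hypothetical beat-to-beat intervals
        print(f"RMSSD = {rmssd(rr):.1f} ms")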

  13. Third-harmonic generation in silicon and photonic crystals of macroporous silicon in the spectral intermediate-IR range; Erzeugung der Dritten Harmonischen in Silizium und Photonischen Kristallen aus makroporoesem Silizium im spektralen mittleren IR-Bereich

    Energy Technology Data Exchange (ETDEWEB)

    Mitzschke, Kerstin

    2007-11-01

    Nonlinear optical spectroscopy is a powerful method to study surface or bulk properties of condensed matter. In centrosymmetric materials like silicon, even-order nonlinear optical processes are forbidden. Besides self-focusing and self-phase modulation, third-harmonic generation (THG) is the simplest process that can be studied. This work demonstrates that THG is a well-adapted non-contact and non-invasive optical method to obtain information about bulk structures of silicon and photonic crystals (PCs) consisting of silicon. Until now, most studies have been done in the visible spectral range and are limited by linear absorption losses, so the extension of THG to the IR spectral range is extremely useful. This allows the investigation of photonic crystals, where frequencies near a photonic band gap are of special interest. The 2D photonic structures under investigation were fabricated via photoelectrochemical etching of Si (100) wafers (thickness 500 µm), yielding square and hexagonally arranged pores. The typical periodicity of the structures used is 2 µm and the length of the pores reaches 400 µm. For stability, the photonic structures remain supported on the silicon substrate. The experimental setup used for the THG experiments generates tunable picosecond IR pulses (tuning range 1500-4000 cm⁻¹). The IR pulse hits the sample either perpendicular to the sample surface or at an angle θ, and the sample can be rotated (φ) around the surface normal. The generated third harmonic is analysed by a polarizer, spectrally filtered by a polychromator and registered by a CCD camera. The setup can be used either in transmission or in reflection mode. Optical transmission and reflection spectra of the Si bulk correspond well with the theoretical description, showing 4-fold and 8-fold dependences on the azimuth angle that result from the structure of the χ(3) tensor of (100)-Si. The situation changes dramatically if the PC with hexagonal structure is investigated

  14. The influence of color on emotional perception of natural scenes.

    Science.gov (United States)

    Codispoti, Maurizio; De Cesarei, Andrea; Ferrari, Vera

    2012-01-01

    Is color a critical factor when processing the emotional content of natural scenes? Under challenging perceptual conditions, such as when pictures are briefly presented, color might facilitate scene segmentation and/or function as a semantic cue via association with scene-relevant concepts (e.g., red and blood/injury). To clarify the influence of color on affective picture perception, we compared the late positive potentials (LPP) to color versus grayscale pictures, presented for very brief (24 ms) and longer (6 s) exposure durations. Results indicated that removing color information had no effect on the affective modulation of the LPP, regardless of exposure duration. These findings imply that the recognition of the emotional content of scenes, even when presented very briefly, does not critically rely on color information. Copyright © 2011 Society for Psychophysiological Research.

  15. The TApIR experiment. IR absorption spectra of liquid hydrogen isotopologues; Das TApIR Experiment IR-Absorptionsspektren fluessiger Wasserstoffisotopologe

    Energy Technology Data Exchange (ETDEWEB)

    Groessle, Robin

    2015-11-27

    The scope of this thesis is the infrared absorption spectroscopy of liquid hydrogen isotopologues with the tritium absorption infrared spectroscopy (TApIR) experiment at the Tritium Laboratory Karlsruhe (TLK). The calibration process, from sample preparation to the reference measurements, is described. A further issue is the classical evaluation of FTIR absorption spectra and its extension using the rolling circle filter (RCF), including the effects on statistical and systematic errors. The impact of thermal and nuclear spin temperature on the IR absorption spectra is discussed. An empirically based model for the IR absorption spectra of liquid hydrogen isotopologues is developed.

  16. Cybersickness in the presence of scene rotational movements along different axes.

    Science.gov (United States)

    Lo, W T; So, R H

    2001-02-01

    Compelling scene movements in a virtual reality (VR) system can cause symptoms of motion sickness (i.e., cybersickness). A within-subject experiment has been conducted to investigate the effects of scene oscillations along different axes on the level of cybersickness. Sixteen male participants were exposed to four 20-min VR simulation sessions. The four sessions used the same virtual environment but with scene oscillations along different axes, i.e., pitch, yaw, roll, or no oscillation (speed: 30 degrees/s, range: +/- 60 degrees). Verbal ratings of the level of nausea were taken at 5-min intervals during the sessions and sickness symptoms were also measured before and after the sessions using the Simulator Sickness Questionnaire (SSQ). In the presence of scene oscillation, both nausea ratings and SSQ scores increased at significantly higher rates than with no oscillation. While individual participants exhibited different susceptibilities to nausea associated with VR simulation containing scene oscillations along different rotational axes, the overall effects of axis among our group of 16 randomly selected participants were not significant. The main effects of, and interactions among, scene oscillation, duration, and participants are discussed in the paper.

  17. Monitoring the long term stability of the IRS-P6 AWiFS sensor using the Sonoran and RVPN sites

    Science.gov (United States)

    Chander, Gyanesh; Sampath, Aparajithan; Angal, Amit; Choi, Taeyoung; Xiong, Xiaoxiong

    2010-10-01

    This paper focuses on the radiometric and geometric assessment of the Indian Remote Sensing (IRS-P6) Advanced Wide Field Sensor (AWiFS) using the Sonoran Desert and Railroad Valley Playa, Nevada (RVPN) ground sites. Image-to-Image (I2I) accuracy and relative band-to-band (B2B) accuracy were measured. I2I accuracy of the AWiFS imagery was assessed by measuring the imagery against the Landsat Global Land Survey (GLS) 2000. The AWiFS images were typically registered to within one pixel of the GLS 2000 mosaic images. The B2B process used the same concepts as the I2I, except that, instead of a reference image and a search image, the individual bands of a multispectral image are tested against each other. The B2B results showed that all the AWiFS multispectral bands are registered to sub-pixel accuracy. Using the limited number of scenes available over these ground sites, the reflective bands of the AWiFS sensor indicate a long-term drift in the top-of-atmosphere (TOA) reflectance. Because of the limited availability of AWiFS scenes over these ground sites, a comprehensive evaluation of the radiometric stability using these sites is not possible. In order to overcome this limitation, a cross-comparison between AWiFS and the Landsat 7 (L7) Enhanced Thematic Mapper Plus (ETM+) was performed using image statistics based on large common areas observed by the two sensors within 30 minutes of each other. Regression curves and coefficients of determination for the TOA trends from these sensors were generated to quantify the uncertainty in these relationships and to provide an assessment of the calibration differences between the sensors.
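
    The cross-sensor comparison described above amounts to regressing near-simultaneous TOA reflectance statistics of one sensor against the other and reporting the coefficient of determination. The generic sketch below uses placeholder reflectance values, not the study's data.

        # Sketch of a cross-sensor TOA reflectance comparison: linear regression of
        # AWiFS band means against ETM+ band means over common areas.
        # The reflectance values are placeholders, not measurements.
        import numpy as np

        def regression_stats(x, y):
            """Slope, intercept and R^2 of a least-squares fit y ~ a*x + b."""
            a, b = np.polyfit(x, y, 1)
            y_hat = a * x + b
            ss_res = np.sum((y - y_hat) ** 2)
            ss_tot = np.sum((y - np.mean(y)) ** 2)
            return a, b, 1.0 - ss_res / ss_tot

        etm_toa   = np.array([0.081, 0.105, 0.143, 0.212, 0.268, 0.310])
        awifs_toa = np.array([0.079, 0.108, 0.139, 0.220, 0.261, 0.305])
        slope, intercept, r2 = regression_stats(etm_toa, awifs_toa)
        print(f"gain={slope:.3f}, bias={intercept:.3f}, R^2={r2:.4f}")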

  18. Development of international tourism and development forecasts in Lithuania and Poland; Tarptautinio turizmo raida ir vystymo prognozės Lietuvoje ir Lenkijoje

    OpenAIRE

    Veličkaitė, Dalia

    2009-01-01

    The development of inbound tourism in Lithuania and Poland over 2000-2007 is analysed and evaluated: flows of foreign tourists, demand for accommodation services, tourists' purposes and choice of travel transport, tourist expenditure and the countries' tourism revenues; problems of inbound tourism are raised and proposals for their solution are presented. In the final part of the work, forecasts of Lithuanian and Polish tourism development for 2008-2015 were made.

  19. Electronic structure, local magnetism, and spin-orbit effects of Ir(IV)-, Ir(V)-, and Ir(VI)-based compounds

    Energy Technology Data Exchange (ETDEWEB)

    Laguna-Marco, M. A.; Kayser, P.; Alonso, J. A.; Martínez-Lope, M. J.; van Veenendaal, M.; Choi, Y.; Haskel, D.

    2015-06-01

    Element- and orbital-selective x-ray absorption and magnetic circular dichroism measurements are carried out to probe the electronic structure and magnetism of Ir 5d electronic states in double perovskite Sr2MIrO6 (M = Mg, Ca, Sc, Ti, Ni, Fe, Zn, In) and La2NiIrO6 compounds. All the studied systems present a significant influence of spin-orbit interactions in the electronic ground state. In addition, we find that the Ir 5d local magnetic moment shows a different character depending on the oxidation state, despite the net magnetization being similar for all the compounds. Ir carries an orbital contribution comparable to the spin contribution for Ir4+ (5d⁵) and Ir5+ (5d⁴) oxides, whereas the orbital contribution is quenched for Ir6+ (5d³) samples. Incorporation of a magnetic 3d atom allows insight to be gained into the magnetic coupling between 5d and 3d transition metals. Together with previous susceptibility and neutron diffraction measurements, the results indicate that Ir carries a significant local magnetic moment even in samples without a 3d metal. The size of the (small) net magnetization of these compounds is a result of predominant antiferromagnetic interactions between local moments coupled with structural details of each perovskite structure.

  20. IR-IR Conformation Specific Spectroscopy of Na+(Glucose) Adducts

    Science.gov (United States)

    Voss, Jonathan M.; Kregel, Steven J.; Fischer, Kaitlyn C.; Garand, Etienne

    2018-01-01

    We report an IR-IR double resonance study of the structural landscape present in the Na+(glucose) complex. Our experimental approach involves minimal modifications to a typical IR predissociation setup, and can be carried out via ion-dip or isomer-burning methods, providing additional flexibility to suit different experimental needs. In the current study, the single-laser IR predissociation spectrum of Na+(glucose), which clearly indicates contributions from multiple structures, was experimentally disentangled to reveal the presence of three α-conformers and five β-conformers. Comparisons with calculations show that these eight conformations correspond to the lowest energy gas-phase structures with distinctive Na+ coordination.

  1. Hydrological AnthropoScenes

    Science.gov (United States)

    Cudennec, Christophe

    2016-04-01

    The Anthropocene concept encapsulates the planetary-scale changes resulting from accelerating socio-ecological transformations, beyond the stratigraphic definition actually in debate. The emergence of multi-scale and proteiform complexity requires inter-discipline and system approaches. Yet, to reduce the cognitive challenge of tackling this complexity, the global Anthropocene syndrome must now be studied from various topical points of view, and grounded at regional and local levels. A system approach should allow to identify AnthropoScenes, i.e. settings where a socio-ecological transformation subsystem is clearly coherent within boundaries and displays explicit relationships with neighbouring/remote scenes and within a nesting architecture. Hydrology is a key topical point of view to be explored, as it is important in many aspects of the Anthropocene, either with water itself being a resource, hazard or transport force; or through the network, connectivity, interface, teleconnection, emergence and scaling issues it determines. We will schematically exemplify these aspects with three contrasted hydrological AnthropoScenes in Tunisia, France and Iceland; and reframe therein concepts of the hydrological change debate. Bai X., van der Leeuw S., O'Brien K., Berkhout F., Biermann F., Brondizio E., Cudennec C., Dearing J., Duraiappah A., Glaser M., Revkin A., Steffen W., Syvitski J., 2016. Plausible and desirable futures in the Anthropocene: A new research agenda. Global Environmental Change, in press, http://dx.doi.org/10.1016/j.gloenvcha.2015.09.017 Brondizio E., O'Brien K., Bai X., Biermann F., Steffen W., Berkhout F., Cudennec C., Lemos M.C., Wolfe A., Palma-Oliveira J., Chen A. C-T. Re-conceptualizing the Anthropocene: A call for collaboration. Global Environmental Change, in review. Montanari A., Young G., Savenije H., Hughes D., Wagener T., Ren L., Koutsoyiannis D., Cudennec C., Grimaldi S., Blöschl G., Sivapalan M., Beven K., Gupta H., Arheimer B., Huang Y

  2. PKCδ-mediated IRS-1 Ser24 phosphorylation negatively regulates IRS-1 function

    International Nuclear Information System (INIS)

    Greene, Michael W.; Ruhoff, Mary S.; Roth, Richard A.; Kim, Jeong-a; Quon, Michael J.; Krause, Jean A.

    2006-01-01

    The IRS-1 PH and PTB domains are essential for insulin-stimulated IRS-1 Tyr phosphorylation and insulin signaling, while Ser/Thr phosphorylation of IRS-1 disrupts these signaling events. To investigate consensus PKC phosphorylation sites in the PH-PTB domains of human IRS-1, we changed Ser24, Ser58, and Thr191 to Ala (3A) or Glu (3E), to block or mimic phosphorylation, respectively. The 3A mutant abrogated the inhibitory effect of PKCδ on insulin-stimulated IRS-1 Tyr phosphorylation, while reductions in insulin-stimulated IRS-1 Tyr phosphorylation, cellular proliferation, and Akt activation were observed with the 3E mutant. When single Glu mutants were tested, the Ser24 to Glu mutant had the greatest inhibitory effect on insulin-stimulated IRS-1 Tyr phosphorylation. PKCδ-mediated IRS-1 Ser24 phosphorylation was confirmed in cells with PKCδ catalytic domain mutants and by an RNAi method. Mechanistic studies revealed that IRS-1 with Ala and Glu point mutations at Ser24 impaired phosphatidylinositol-4,5-bisphosphate binding. In summary, our data are consistent with the hypothesis that Ser24 is a negative regulatory phosphorylation site in IRS-1.

  3. Hierarchical Model for the Similarity Measurement of a Complex Holed-Region Entity Scene

    Directory of Open Access Journals (Sweden)

    Zhanlong Chen

    2017-11-01

    Complex multi-holed-region entity scenes (i.e., sets of random regions with holes) are common in spatial database systems, spatial query languages, and Geographic Information Systems (GIS). A multi-holed-region (a region with an arbitrary number of holes) is an abstraction of the real world that primarily represents geographic objects that have more than one interior boundary, such as areas that contain several lakes or lakes that contain islands. When the similarity of two complex holed-region entity scenes is measured, the number of regions in the scenes and the number of holes in the regions usually differ between the two scenes, which complicates the matching relationships of holed-regions and holes. The aim of this research is to develop several holed-region similarity metrics and propose a hierarchical model to comprehensively measure the similarity between two complex holed-region entity scenes. The procedure first divides a complex entity scene into three layers: a complex scene, a micro-spatial-scene, and a simple entity (hole). The relationships between the adjacent layers are considered to be sets of relationships, and each level of similarity measurement is nested with the adjacent one. Next, entity matching is performed from top to bottom, while the similarity results are calculated from local to global. In addition, we utilize position graphs to describe the distribution of the holed-regions and subsequently describe the directions between the holes using a feature matrix. A case study that uses the Great Lakes in North America in 1986 and 2015 as experimental data illustrates the entire similarity measurement process between two complex holed-region entity scenes. The experimental results show that the hierarchical model accounts for the relationships of the different layers in the entire complex holed-region entity scene. The model can effectively calculate the similarity of complex holed-region entity scenes, even if the

  4. Learning object-to-class kernels for scene classification.

    Science.gov (United States)

    Zhang, Lei; Zhen, Xiantong; Shao, Ling

    2014-08-01

    High-level image representations have drawn increasing attention in visual recognition, e.g., scene classification, since the invention of the object bank. The object bank represents an image as a response map of a large number of pretrained object detectors and has achieved superior performance for visual recognition. In this paper, based on the object bank representation, we propose the object-to-class (O2C) distances to model scene images. In particular, four variants of O2C distances are presented, and with the O2C distances, we can represent the images using the object bank by lower-dimensional but more discriminative spaces, called distance spaces, which are spanned by the O2C distances. Due to the explicit computation of O2C distances based on the object bank, the obtained representations can possess more semantic meanings. To combine the discriminant ability of the O2C distances to all scene classes, we further propose to kernelize the distance representation for the final classification. We have conducted extensive experiments on four benchmark data sets, UIUC-Sports, Scene-15, MIT Indoor, and Caltech-101, which demonstrate that the proposed approaches can significantly improve the original object bank approach and achieve the state-of-the-art performance.
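
    One plausible reading of the object-to-class distance idea is: given object-bank response vectors for the training images of each class, represent a new image by its distance to each class set (here the minimum Euclidean distance) and feed that low-dimensional vector to a kernel classifier. The sketch below follows that reading with synthetic response vectors; it is not the authors' exact formulation.

        # Sketch of an object-to-class (O2C) distance representation.
        # Synthetic "object bank" responses; min-distance variant only.
        import numpy as np
        from sklearn.svm import SVC

        def o2c_representation(x, class_banks):
            """Distance from response vector x to each class's set of responses."""
            return np.array([np.min(np.linalg.norm(bank - x, axis=1))
                             for bank in class_banks])

        rng = np.random.default_rng(2)
        n_classes, n_train, dim = 4, 30, 200          # dim ~ object-bank response length
        class_banks = [rng.normal(loc=c, scale=1.0, size=(n_train, dim))
                       for c in range(n_classes)]

        # Build the O2C distance space for all training images, then kernelize via SVM.
        X = np.vstack([np.stack([o2c_representation(x, class_banks) for x in bank])
                       for bank in class_banks])
        y = np.repeat(np.arange(n_classes), n_train)
        clf = SVC(kernel="rbf").fit(X, y)
        print(clf.score(X, y))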

  5. Hybrid nanomaterial and its applications: IR sensing and energy harvesting

    Science.gov (United States)

    Tseng, Yi-Hsuan

    In this dissertation, a hybrid nanomaterial, single-wall carbon nanotubes-copper sulfide nanoparticles (SWNTs-CuS NPs), was synthesized and its properties were analyzed. Due to its unique optical and thermal properties, the hybrid nanomaterial exhibited great potential for infrared (IR) sensing and energy harvesting. The hybrid nanomaterial was synthesized with the non-covalent bond technique to functionalize the surface of the SWNTs and bind the CuS nanoparticles on the surface of the SWNTs. For testing and analyzing the hybrid nanomaterial, SWNTs-CuS nanoparticles were formed as a thin film structure using the vacuum filtration method. Two conductive wires were bonded to the ends of the thin film to build a thin film device for measurements and analyses. Measurements found that the hybrid nanomaterial had a significantly increased light absorption (up to 80%) compared to the pure SWNTs. Moreover, the hybrid nanomaterial thin film devices exhibited a clear optical and thermal switching effect, which could be further enhanced up to ten times with asymmetric illumination of light and thermal radiation on the thin film devices instead of symmetric illumination. A simple prototype thermoelectric generator enabled by the hybrid nanomaterials was demonstrated, indicating a new route for achieving thermoelectricity. In addition, CuS nanoparticles have great optical absorption especially in the near-infrared region. Therefore, the hybrid nanomaterial thin films also have the potential for IR sensing applications. The first application to be covered in this dissertation is the IR sensing application. IR thin film sensors based on the SWNTs-CuS nanoparticles hybrid nanomaterials were fabricated. The IR response in the photocurrent of the hybrid thin film sensor was significantly enhanced, increasing the photocurrent by 300% when the IR light illuminates the thin film device asymmetrically. The detection limit could be as low as 48 mW mm-2. The dramatically enhanced

  6. The role of memory for visual search in scenes.

    Science.gov (United States)

    Le-Hoa Võ, Melissa; Wolfe, Jeremy M

    2015-03-01

    Many daily activities involve looking for something. The ease with which these searches are performed often allows one to forget that searching represents complex interactions between visual attention and memory. Although a clear understanding exists of how search efficiency will be influenced by visual features of targets and their surrounding distractors or by the number of items in the display, the role of memory in search is less well understood. Contextual cueing studies have shown that implicit memory for repeated item configurations can facilitate search in artificial displays. When searching more naturalistic environments, other forms of memory come into play. For instance, semantic memory provides useful information about which objects are typically found where within a scene, and episodic scene memory provides information about where a particular object was seen the last time a particular scene was viewed. In this paper, we will review work on these topics, with special emphasis on the role of memory in guiding search in organized, real-world scenes. © 2015 New York Academy of Sciences.

  7. Ammonia IR Absorbance Measurements with an Equilibrium Vapor Cell

    National Research Council Canada - National Science Library

    Field, Paul

    2004-01-01

    Infrared (IR) absorbance spectra were acquired for 18 ammonia vapor pressures. The vapor pressures were generated with 15 gravimetrically prepared aqueous solutions and three commercial aqueous solutions using a dynamic method I.E...

  8. “Getting out of downtown”: a longitudinal study of how street-entrenched youth attempt to exit an inner city drug scene

    Directory of Open Access Journals (Sweden)

    Rod Knight

    2017-05-01

    Abstract Background Urban drug “scenes” have been identified as important risk environments that shape the health of street-entrenched youth. New knowledge is needed to inform policy and programming interventions to help reduce youths’ drug scene involvement and related health risks. The aim of this study was to identify how young people envisioned exiting a local, inner-city drug scene in Vancouver, Canada, as well as the individual, social and structural factors that shaped their experiences. Methods Between 2008 and 2016, we draw on 150 semi-structured interviews with 75 street-entrenched youth. We also draw on data generated through ethnographic fieldwork conducted with a subgroup of 25 of these youth. Results Youth described that, in order to successfully exit Vancouver’s inner-city drug scene, they would need to: (a) secure legitimate employment and/or obtain education or occupational training; (b) distance themselves – both physically and socially – from the urban drug scene; and (c) reduce their drug consumption. As youth attempted to leave the scene, most experienced substantial social and structural barriers (e.g., cycling in and out of jail, the need to access services that are centralized within a place that they are trying to avoid), in addition to managing complex individual health issues (e.g., substance dependence). Factors that increased youth’s capacity to successfully exit the drug scene included access to various forms of social and cultural capital operating outside of the scene, including supportive networks of friends and/or family, as well as engagement with addiction treatment services (e.g., low-threshold access to methadone) to support cessation or reduction of harmful forms of drug consumption. Conclusions Policies and programming interventions that can facilitate young people’s efforts to reduce engagement with Vancouver’s inner-city drug scene are critically needed, including meaningful

  9. The time course of natural scene perception with reduced attention

    NARCIS (Netherlands)

    Groen, I.I.A.; Ghebreab, S.; Lamme, V.A.F.; Scholte, H.S.

    Attention is thought to impose an informational bottleneck on vision by selecting particular information from visual scenes for enhanced processing. Behavioral evidence suggests, however, that some scene information is extracted even when attention is directed elsewhere. Here, we investigated the

  10. Hebbian learning in a model with dynamic rate-coded neurons: an alternative to the generative model approach for learning receptive fields from natural scenes.

    Science.gov (United States)

    Hamker, Fred H; Wiltschut, Jan

    2007-09-01

    Most computational models of coding are based on a generative model according to which the feedback signal aims to reconstruct the visual scene as closely as possible. We here explore an alternative model of feedback. It is derived from studies of attention and is thus probably more flexible with respect to attentive processing in higher brain areas. According to this model, feedback implements a gain increase of the feedforward signal. We use a dynamic model with presynaptic inhibition and Hebbian learning to simultaneously learn feedforward and feedback weights. The weights converge to localized, oriented, and bandpass filters similar to those found in V1. Due to presynaptic inhibition the model predicts the organization of receptive fields within the feedforward pathway, whereas feedback primarily serves to tune early visual processing according to the needs of the task.
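
    A toy version of the scheme described above (feedforward drive multiplied by a feedback gain, Hebbian weight updates, weight normalization) might look like the following; the dynamics and constants are simplified assumptions, not the published model.

        # Toy sketch: rate-coded units whose feedforward input is gain-modulated by
        # feedback, with Hebbian learning and weight normalization.
        # Constants and dynamics are simplified assumptions.
        import numpy as np

        rng = np.random.default_rng(3)
        n_in, n_out, lr = 64, 16, 0.01
        W = np.abs(rng.normal(scale=0.1, size=(n_out, n_in)))   # feedforward weights

        def step(x, feedback_gain=1.0, iters=20, dt=0.1):
            """Settle output rates for one input patch with gain-modulated drive."""
            r = np.zeros(n_out)
            for _ in range(iters):
                drive = feedback_gain * (W @ x)
                r += dt * (-r + np.maximum(drive - r.sum() * 0.05, 0.0))  # crude inhibition
            return r

        for _ in range(500):                      # train on random "patches"
            x = np.maximum(rng.normal(size=n_in), 0.0)
            r = step(x)
            W += lr * np.outer(r, x)              # Hebbian update
            W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-12   # keep weights bounded
        print("weight norms:", np.round(np.linalg.norm(W, axis=1), 3)[:4])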

  11. A Two-Stream Deep Fusion Framework for High-Resolution Aerial Scene Classification

    Directory of Open Access Journals (Sweden)

    Yunlong Yu

    2018-01-01

    One of the challenging problems in understanding high-resolution remote sensing images is aerial scene classification. A well-designed feature representation method and classifier can improve classification accuracy. In this paper, we construct a new two-stream deep architecture for aerial scene classification. First, we use two pretrained convolutional neural networks (CNNs) as feature extractors to learn deep features from the original aerial image and the aerial image processed through saliency detection, respectively. Second, two feature fusion strategies are adopted to fuse the two different types of deep convolutional features extracted by the original RGB stream and the saliency stream. Finally, we use the extreme learning machine (ELM) classifier for final classification with the fused features. The effectiveness of the proposed architecture is tested on four challenging datasets: the UC-Merced dataset with 21 scene categories, the WHU-RS dataset with 19 scene categories, the AID dataset with 30 scene categories, and the NWPU-RESISC45 dataset with 45 challenging scene categories. The experimental results demonstrate that our architecture achieves a significant classification accuracy improvement over all state-of-the-art references.
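
    To make the fusion and ELM steps concrete, the sketch below concatenates two precomputed feature vectors per image (random stand-ins for the RGB-stream and saliency-stream CNN activations, which are omitted here) and trains a minimal extreme learning machine, i.e. a fixed random hidden layer with a least-squares readout.

        # Sketch: fuse two feature streams by concatenation and classify with a
        # minimal extreme learning machine (random hidden layer + least-squares readout).
        # The deep features are random placeholders; real use would take CNN activations.
        import numpy as np

        rng = np.random.default_rng(4)
        n, d_rgb, d_sal, n_classes, n_hidden = 300, 512, 512, 5, 1000

        rgb_feats = rng.normal(size=(n, d_rgb))       # stand-in for RGB-stream CNN features
        sal_feats = rng.normal(size=(n, d_sal))       # stand-in for saliency-stream features
        X = np.hstack([rgb_feats, sal_feats])         # early fusion by concatenation
        y = rng.integers(0, n_classes, size=n)
        Y = np.eye(n_classes)[y]                      # one-hot targets

        # ELM: fixed random hidden layer, output weights by regularized least squares.
        W_in = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        H = np.tanh(X @ W_in + b)
        reg = 1e-2 * np.eye(n_hidden)
        beta = np.linalg.solve(H.T @ H + reg, H.T @ Y)

        pred = np.argmax(H @ beta, axis=1)
        print("training accuracy:", np.mean(pred == y))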

  12. Dynamics of scene representations in the human brain revealed by magnetoencephalography and deep neural networks

    Science.gov (United States)

    Cichy, Radoslaw Martin; Khosla, Aditya; Pantazis, Dimitrios; Oliva, Aude

    2017-01-01

    Human scene recognition is a rapid multistep process evolving over time from single scene image to spatial layout processing. We used multivariate pattern analyses on magnetoencephalography (MEG) data to unravel the time course of this cortical process. Following an early signal for lower-level visual analysis of single scenes at ~100 ms, we found a marker of real-world scene size, i.e. spatial layout processing, at ~250 ms indexing neural representations robust to changes in unrelated scene properties and viewing conditions. For a quantitative model of how scene size representations may arise in the brain, we compared MEG data to a deep neural network model trained on scene classification. Representations of scene size emerged intrinsically in the model, and resolved emerging neural scene size representation. Together our data provide a first description of an electrophysiological signal for layout processing in humans, and suggest that deep neural networks are a promising framework to investigate how spatial layout representations emerge in the human brain. PMID:27039703

  13. Recognition and attention guidance during contextual cueing in real-world scenes: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Henderson, John M

    2006-07-01

    When confronted with a previously encountered scene, what information is used to guide search to a known target? We contrasted the role of a scene's basic-level category membership with its specific arrangement of visual properties. Observers were repeatedly shown photographs of scenes that contained consistently but arbitrarily located targets, allowing target positions to be associated with scene content. Learned scenes were then unexpectedly mirror reversed, spatially translating visual features as well as the target across the display while preserving the scene's identity and concept. Mirror reversals produced a cost as the eyes initially moved toward the position in the display in which the target had previously appeared. The cost was not complete, however; when initial search failed, the eyes were quickly directed to the target's new position. These results suggest that in real-world scenes, shifts of attention are initially based on scene identity, and subsequent shifts are guided by more detailed information regarding scene and object layout.

  14. Use of AFIS for linking scenes of crime.

    Science.gov (United States)

    Hefetz, Ido; Liptz, Yakir; Vaturi, Shaul; Attias, David

    2016-05-01

    Forensic intelligence can provide critical information in criminal investigations - the linkage of crime scenes. The Automatic Fingerprint Identification System (AFIS) is an example of a technological improvement that has advanced the entire forensic identification field to strive for new goals and achievements. In one example using AFIS, a series of burglaries into private apartments enabled a fingerprint examiner to search latent prints from different burglary scenes against an unsolved latent print database. Latent finger and palm prints coming from the same source were associated with more than 20 cases. Then, by forensic intelligence and profile analysis, the offender's behavior could be anticipated. He was caught, identified, and arrested. It is recommended to perform an AFIS search of LT/UL prints against current crimes automatically as part of laboratory protocol and not at an examiner's discretion. This approach may link different crime scenes. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  15. Real-time generation of kd-trees for ray tracing using DirectX 11

    OpenAIRE

    Säll, Martin; Cronqvist, Fredrik

    2017-01-01

    Context. Ray tracing has always been a simple but effective way to create a photorealistic scene, but at a greater cost when expanding the scene. Recent improvements in GPU and CPU hardware have made ray tracing faster, making more complex scenes possible in the same amount of time needed to process the scene. Despite the improvements in hardware, ray tracing is still rarely run at an interactive speed. Objectives. The aim of this experiment was to implement a new kd-tree generation algorithm us...
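
    For readers unfamiliar with the data structure, a median-split kd-tree over primitive centroids can be built in a few lines. This CPU sketch only illustrates the splitting logic and is unrelated to the GPU/DirectX 11 algorithm studied in the thesis.

        # Minimal median-split kd-tree over primitive centroids (CPU illustration only).
        import numpy as np

        class KDNode:
            def __init__(self, axis=None, split=None, left=None, right=None, indices=None):
                self.axis, self.split = axis, split
                self.left, self.right = left, right
                self.indices = indices            # leaf: indices of contained primitives

        def build_kdtree(centroids, indices=None, depth=0, leaf_size=4):
            if indices is None:
                indices = np.arange(len(centroids))
            if len(indices) <= leaf_size:
                return KDNode(indices=indices)
            axis = depth % centroids.shape[1]     # cycle through x, y, z
            order = indices[np.argsort(centroids[indices, axis])]
            mid = len(order) // 2
            split_value = centroids[order[mid], axis]
            return KDNode(axis=axis, split=split_value,
                          left=build_kdtree(centroids, order[:mid], depth + 1, leaf_size),
                          right=build_kdtree(centroids, order[mid:], depth + 1, leaf_size))

        rng = np.random.default_rng(5)
        tree = build_kdtree(rng.uniform(size=(1000, 3)))
        print("root split axis:", tree.axis, "at", round(float(tree.split), 3))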

  16. Gordon Craig's Scene Project: a history open to revision

    Directory of Open Access Journals (Sweden)

    Luiz Fernando

    2014-09-01

    The article proposes a review of Gordon Craig’s Scene project, an invention patented in 1910 and developed until 1922. Craig himself kept an ambiguous position on whether it was an unfulfilled project or not. His son and biographer Edward Craig maintained that Craig’s original aims were never achieved because of technical limitations, and most of the scholars who have examined the matter followed this position. Drawing on the actual screen models preserved in the Bibliothèque Nationale de France, Craig’s original notebooks, and a short film from 1963, I argue that the patented project and the essay published in 1923 represent, indeed, the materialisation of the dreamed-of device of the thousand scenes in one scene.

  17. A view not to be missed: Salient scene content interferes with cognitive restoration

    NARCIS (Netherlands)

    van der Jagt, A.P.N.; Craig, Tony; Brewer, Mark J.; Pearson, David G.

    2017-01-01

    Attention Restoration Theory (ART) states that built scenes place greater load on attentional resources than natural scenes. This is explained in terms of "hard" and "soft" fascination of built and natural scenes. Given a lack of direct empirical evidence for this assumption we propose that

  18. Places in the Brain: Bridging Layout and Object Geometry in Scene-Selective Cortex.

    Science.gov (United States)

    Dillon, Moira R; Persichetti, Andrew S; Spelke, Elizabeth S; Dilks, Daniel D

    2017-06-13

    Diverse animal species primarily rely on sense (left-right) and egocentric distance (proximal-distal) when navigating the environment. Recent neuroimaging studies with human adults show that this information is represented in 2 scene-selective cortical regions-the occipital place area (OPA) and retrosplenial complex (RSC)-but not in a third scene-selective region-the parahippocampal place area (PPA). What geometric properties, then, does the PPA represent, and what is its role in scene processing? Here we hypothesize that the PPA represents relative length and angle, the geometric properties classically associated with object recognition, but only in the context of large extended surfaces that compose the layout of a scene. Using functional magnetic resonance imaging adaptation, we found that the PPA is indeed sensitive to relative length and angle changes in pictures of scenes, but not pictures of objects that reliably elicited responses to the same geometric changes in object-selective cortical regions. Moreover, we found that the OPA is also sensitive to such changes, while the RSC is tolerant to such changes. Thus, the geometric information typically associated with object recognition is also used during some aspects of scene processing. These findings provide evidence that scene-selective cortex differentially represents the geometric properties guiding navigation versus scene categorization. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  19. Estimating cotton canopy ground cover from remotely sensed scene reflectance

    International Nuclear Information System (INIS)

    Maas, S.J.

    1998-01-01

    Many agricultural applications require spatially distributed information on growth-related crop characteristics that could be supplied through aircraft or satellite remote sensing. A study was conducted to develop and test a methodology for estimating plant canopy ground cover for cotton (Gossypium hirsutum L.) from scene reflectance. Previous studies indicated that a relatively simple relationship between ground cover and scene reflectance could be developed based on linear mixture modeling. Theoretical analysis indicated that the effects of shadows in the scene could be compensated for by averaging the results obtained using scene reflectance in the red and near-infrared wavelengths. The methodology was tested using field data collected over several years from cotton test plots in Texas and California. Results of the study appear to verify the utility of this approach. Since the methodology relies on information that can be obtained solely through remote sensing, it would be particularly useful in applications where other field information, such as plant size, row spacing, and row orientation, is unavailable
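
    In a two-endmember linear mixture model, the scene reflectance in each band is R_scene = f*R_canopy + (1-f)*R_soil, so the cover fraction f can be solved per band and, as suggested above, the red and near-infrared estimates averaged to compensate for shadows. The endmember reflectances below are hypothetical, not values from the study.

        # Two-endmember linear unmixing of scene reflectance into canopy ground cover,
        # averaged over red and NIR bands. Endmember values are hypothetical.
        def cover_fraction(r_scene, r_soil, r_canopy):
            f = (r_scene - r_soil) / (r_canopy - r_soil)
            return min(max(f, 0.0), 1.0)          # clamp to physically valid range

        # hypothetical endmember and scene reflectances (red, NIR)
        soil   = {"red": 0.20, "nir": 0.25}
        canopy = {"red": 0.05, "nir": 0.50}
        scene  = {"red": 0.12, "nir": 0.39}

        f_red = cover_fraction(scene["red"], soil["red"], canopy["red"])
        f_nir = cover_fraction(scene["nir"], soil["nir"], canopy["nir"])
        print(f"red estimate={f_red:.2f}, NIR estimate={f_nir:.2f}, "
              f"average ground cover={(f_red + f_nir) / 2:.2f}")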

  20. Significance of perceptually relevant image decolorization for scene classification

    Science.gov (United States)

    Viswanathan, Sowmya; Divakaran, Govind; Soman, Kutti Padanyl

    2017-11-01

    Color images contain luminance and chrominance components representing the intensity and color information, respectively. The objective of this paper is to show the significance of incorporating chrominance information to the task of scene classification. An improved color-to-grayscale image conversion algorithm that effectively incorporates chrominance information is proposed using the color-to-gray structure similarity index and singular value decomposition to improve the perceptual quality of the converted grayscale images. The experimental results based on an image quality assessment for image decolorization and its success rate (using the Cadik and COLOR250 datasets) show that the proposed image decolorization technique performs better than eight existing benchmark algorithms for image decolorization. In the second part of the paper, the effectiveness of incorporating the chrominance component for scene classification tasks is demonstrated using a deep belief network-based image classification system developed using dense scale-invariant feature transforms. The amount of chrominance information incorporated into the proposed image decolorization technique is confirmed with the improvement to the overall scene classification accuracy. Moreover, the overall scene classification performance improved by combining the models obtained using the proposed method and conventional decolorization methods.
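
    A drastically simplified view of contrast-preserving decolorization (not the proposed C2G-SSIM/SVD method) is to search a small set of channel-weight triplets and keep the grayscale whose pixel differences best correlate with the color differences; the sketch below does exactly that on a random image.

        # Simplified contrast-preserving color-to-gray conversion: pick the RGB weight
        # triplet whose grayscale differences best match color differences.
        # This is an illustration only, not the paper's C2G-SSIM/SVD algorithm.
        import itertools
        import numpy as np

        def best_decolorization(img, n_pairs=2000, seed=0):
            rng = np.random.default_rng(seed)
            h, w, _ = img.shape
            flat = img.reshape(-1, 3).astype(float)
            i = rng.integers(0, h * w, n_pairs)
            j = rng.integers(0, h * w, n_pairs)
            color_contrast = np.linalg.norm(flat[i] - flat[j], axis=1)

            # candidate weights: nonnegative multiples of 0.1 that sum to (about) 1
            candidates = [(r, g, 1 - r - g)
                          for r, g in itertools.product(np.arange(0, 1.01, 0.1), repeat=2)
                          if r + g <= 1.0]
            best_w, best_corr = None, -np.inf
            for w3 in candidates:
                gray = flat @ np.array(w3)
                gray_contrast = np.abs(gray[i] - gray[j])
                corr = np.corrcoef(gray_contrast, color_contrast)[0, 1]
                if corr > best_corr:
                    best_w, best_corr = w3, corr
            return best_w, flat @ np.array(best_w)

        img = np.random.default_rng(6).integers(0, 256, size=(64, 64, 3))
        weights, gray = best_decolorization(img)
        print("chosen weights:", np.round(weights, 2))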

  1. Medial Temporal Lobe Contributions to Episodic Future Thinking: Scene Construction or Future Projection?

    Science.gov (United States)

    Palombo, D J; Hayes, S M; Peterson, K M; Keane, M M; Verfaellie, M

    2018-02-01

    Previous research has shown that the medial temporal lobes (MTL) are more strongly engaged when individuals think about the future than about the present, leading to the suggestion that future projection drives MTL engagement. However, future thinking tasks often involve scene processing, leaving open the alternative possibility that scene-construction demands, rather than future projection, are responsible for the MTL differences observed in prior work. This study explores this alternative account. Using functional magnetic resonance imaging, we directly contrasted MTL activity in 1) high scene-construction and low scene-construction imagination conditions matched in future thinking demands and 2) future-oriented and present-oriented imagination conditions matched in scene-construction demands. Consistent with the alternative account, the MTL was more active for the high versus low scene-construction condition. By contrast, MTL differences were not observed when comparing the future versus present conditions. Moreover, the magnitude of MTL activation was associated with the extent to which participants imagined a scene but was not associated with the extent to which participants thought about the future. These findings help disambiguate which component processes of imagination specifically involve the MTL. Published by Oxford University Press 2016.

  2. Analysis of effect of cable degradation on SPND IR calculation

    International Nuclear Information System (INIS)

    Tamboli, P.K.; Sharma, A.; Prasad, A.D.; Singh, Nita; Antony, J.; Kelkar, M.G.; Kaurav, Reetesh; Pramanik, M.

    2013-01-01

    Neutron flux is the most vital parameter for nuclear reactor safety against neutronic overpower. Modern Indian PHWRs with a large core size are loosely coupled reactors, and hence in-core Self-Powered Neutron Detectors (SPNDs) are most suitable for monitoring the local neutron power used to generate the Regional Overpower Trip. However, the SPNDs and their mineral-insulated cables are prone to IR (insulation resistance) loss due to the use of ceramic insulation, which is highly hygroscopic. The present paper covers the online analysis of the IR of degraded cable, as per the surveillance requirement of monitoring the IR to assess the health of SPNDs, which are part of the SSC/SSE for Reactor Protection Systems. The paper also proposes an alternative method for monitoring IR in the startup/low-power range, when SPND signals have yet to pick up and reactor control and protection are based on out-of-core ionization chambers. (author)

  3. Effects of rust in the crack face on crack detection based on Sonic-IR method

    International Nuclear Information System (INIS)

    Harai, Y.; Izumi, Y.; Tanabe, H.; Takamatsu, T.; Sakagami, T.

    2015-01-01

    Sonic-IR, which is based on the thermographic detection of the temperature rise due to frictional heating at the defect faces under ultrasonic excitation, has an advantage in the detection of closed and small defects. However, this method involves many unclear factors relating to heat generation. In this study, the effects of rust on the crack faces on crack detection by the sonic-IR method are experimentally investigated using cracked specimens. Heat generation under ultrasonic excitation was observed regularly during an accelerated rusting test using an original device. The distribution of the temperature change around the crack changed as the rust progressed. This change in heat generation is believed to be due to a change in the contact state of the crack surfaces caused by the rust. As a result, it was found that heat generation by ultrasonic excitation is affected by rust on the crack faces, and that crack detection can still be conducted by sonic-IR even if rust has formed on the crack faces. (author)

  4. Eye movements and attention in reading, scene perception, and visual search.

    Science.gov (United States)

    Rayner, Keith

    2009-08-01

    Eye movements are now widely used to investigate cognitive processes during reading, scene perception, and visual search. In this article, research on the following topics is reviewed with respect to reading: (a) the perceptual span (or span of effective vision), (b) preview benefit, (c) eye movement control, and (d) models of eye movements. Related issues with respect to eye movements during scene perception and visual search are also reviewed. It is argued that research on eye movements during reading has been somewhat advanced over research on eye movements in scene perception and visual search and that some of the paradigms developed to study reading should be more widely adopted in the study of scene perception and visual search. Research dealing with "real-world" tasks and research utilizing the visual-world paradigm are also briefly discussed.

  5. Ten-watt level picosecond parametric mid-IR source broadly tunable in wavelength

    Science.gov (United States)

    Vyvlečka, Michal; Novák, Ondřej; Roškot, Lukáš; Smrž, Martin; Mužík, Jiří; Endo, Akira; Mocek, Tomáš

    2018-02-01

    The mid-IR wavelength range (between 2 and 8 μm) offers promising applications, such as minimally invasive neurosurgery, gas sensing, and plastic and polymer processing. The maturity of high-average-power near-IR lasers is beneficial for powerful mid-IR generation by optical parametric conversion. We utilize an in-house developed Yb:YAG thin-disk laser of 100 W average power at 77 kHz repetition rate, a wavelength of 1030 nm, and about 2 ps pulse width for pumping a ten-watt-level picosecond mid-IR source. The seed beam is obtained by optical parametric generation in a double-pass 10 mm long PPLN crystal pumped by part of the fundamental near-IR beam. Tunability of the signal wavelength between 1.46 μm and 1.95 μm was achieved, with powers of several tens of milliwatts. The main part of the fundamental beam pumps an optical parametric amplification stage, which includes a walk-off-compensating pair of 10 mm long KTP crystals. We have already demonstrated OPA output signal and idler beam tunability between 1.70-1.95 μm and 2.18-2.62 μm, respectively. The signal and idler beams were amplified up to 8.5 W and 5 W, respectively, at 42 W pump power, without evidence of strong saturation. Thus, an increase in signal and idler output power is expected as the pump power is increased.
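
    As a sanity check on the quoted tuning ranges, the signal and idler wavelengths in parametric conversion are linked by energy conservation, 1/λ_pump = 1/λ_signal + 1/λ_idler. The short calculation below confirms that the stated signal range of 1.70-1.95 μm maps onto the stated idler range of roughly 2.18-2.62 μm for 1030 nm pumping.

    # Energy conservation in optical parametric amplification:
    #   1/lambda_pump = 1/lambda_signal + 1/lambda_idler
    # Quick check that the quoted signal range (1.70-1.95 um) maps onto the
    # quoted idler range (~2.18-2.62 um) for a 1030 nm pump.

    PUMP_UM = 1.030

    def idler_wavelength(signal_um, pump_um=PUMP_UM):
        return 1.0 / (1.0 / pump_um - 1.0 / signal_um)

    for signal in (1.70, 1.95):
        print(f"signal {signal:.2f} um -> idler {idler_wavelength(signal):.2f} um")
    # signal 1.70 um -> idler 2.61 um
    # signal 1.95 um -> idler 2.18 um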

  6. Robotic Discovery of the Auditory Scene

    National Research Council Canada - National Science Library

    Martinson, E; Schultz, A

    2007-01-01

    .... Motivated by the large negative effect of ambient noise sources on robot audition, the long-term goal is to provide awareness of the auditory scene to a robot, so that it may more effectively act...

  7. Developmental Changes in Attention to Faces and Bodies in Static and Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Brenda M Stoesz

    2014-03-01

    Typically developing individuals show a strong visual preference for faces and face-like stimuli; however, this may come at the expense of attending to bodies or to other aspects of a scene. The primary goal of the present study was to provide additional insight into the development of attentional mechanisms that underlie perception of real people in naturalistic scenes. We examined the looking behaviours of typical children, adolescents, and young adults as they viewed static and dynamic scenes depicting one or more people. Overall, participants showed a bias to attend to faces more than to other parts of the scenes. Adding motion cues led to a reduction in the number, but an increase in the average duration, of face fixations in single-character scenes. When multiple characters appeared in a scene, motion-related effects were attenuated and participants shifted their gaze from faces to bodies, or made off-screen glances. Children showed the largest effects related to the introduction of motion cues or additional characters, suggesting that they find dynamic faces difficult to process, and are especially prone to look away from faces when viewing complex social scenes – a strategy that could reduce the cognitive and affective load imposed by having to divide one’s attention between multiple faces. Our findings provide new insights into the typical development of social attention during natural scene viewing, and lay the foundation for future work examining gaze behaviours in typical and atypical development.

  8. Scene Recognition for Indoor Localization Using a Multi-Sensor Fusion Approach

    Directory of Open Access Journals (Sweden)

    Mengyun Liu

    2017-12-01

    After decades of research, there is still no solution for indoor localization like the GNSS (Global Navigation Satellite System) solution for outdoor environments. The major reasons for this are the complex spatial topology and RF transmission environment. To deal with these problems, an indoor scene constrained method for localization is proposed in this paper, inspired by the visual cognition ability of the human brain and by progress in the computer vision field on high-level image understanding. Furthermore, a multi-sensor fusion method is implemented on a commercial smartphone, combining cameras, WiFi and inertial sensors. Compared to former research, the camera on a smartphone is used to “see” which scene the user is in. With this information, a particle filter algorithm constrained by scene information is adopted to determine the final location. For indoor scene recognition, we take advantage of deep learning, which has been proven to be highly effective in the computer vision community. For the particle filter, both WiFi and magnetic field signals are used to update the weights of particles. Similar to other fingerprinting localization methods, there are two stages in the proposed system: offline training and online localization. In the offline stage, an indoor scene model is trained with Caffe (one of the most popular open-source frameworks for deep learning), and a fingerprint database is constructed from user trajectories in different scenes. To reduce the volume of training data required for deep learning, a fine-tuning approach is adopted for model training. In the online stage, a camera in a smartphone is used to recognize the initial scene. Then a particle filter algorithm is used to fuse the sensor data and determine the final location. To prove the effectiveness of the proposed method, an Android client and a web server are implemented. The Android client is used to collect data and locate a user. The web
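
    A minimal sketch of the kind of particle weight update such a fusion scheme implies, assuming independent Gaussian likelihoods for the WiFi and magnetic-field fingerprints and treating the recognized scene as a hard constraint on particle positions; the function names, noise parameters, and fingerprint interface are illustrative assumptions, not the paper's implementation.

    # Sketch of a scene-constrained particle-filter weight update (illustrative only;
    # the fingerprint model, noise levels, and scene constraint below are assumptions,
    # not the paper's implementation). Each particle carries a 2-D position hypothesis;
    # WiFi RSSI and magnetic-field measurements are fused as independent Gaussian
    # likelihoods, and particles outside the recognised scene are suppressed.
    import numpy as np

    def update_weights(particles, weights, z_wifi, z_mag, fingerprint, scene_mask,
                       sigma_wifi=4.0, sigma_mag=1.5):
        """particles: N x 2 positions; fingerprint(p) -> (wifi_pred, mag_pred)."""
        new_w = np.empty_like(weights)
        for i, p in enumerate(particles):
            wifi_pred, mag_pred = fingerprint(p)
            lw = np.exp(-0.5 * ((z_wifi - wifi_pred) / sigma_wifi) ** 2)
            lm = np.exp(-0.5 * ((z_mag - mag_pred) / sigma_mag) ** 2)
            scene_ok = 1.0 if scene_mask(p) else 1e-6   # scene recognition as a constraint
            new_w[i] = weights[i] * lw * lm * scene_ok
        return new_w / new_w.sum()

    # Tiny demo with two position hypotheses and a flat fingerprint map.
    pts = np.array([[1.0, 2.0], [5.0, 6.0]])
    w = np.array([0.5, 0.5])
    print(update_weights(pts, w, z_wifi=-60.0, z_mag=47.0,
                         fingerprint=lambda p: (-58.0 - p[0], 46.0),
                         scene_mask=lambda p: p[0] < 4.0))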

  9. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes.

    Science.gov (United States)

    Fernández-Martín, Andrés; Gutiérrez-García, Aída; Capafons, Juan; Calvo, Manuel G

    2017-05-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of the adaptive relevance of scene affective content for male and female observers. Pairs of emotional-neutral images appeared peripherally (with perceptual stimulus differences controlled) while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emotional scenes selectively captured covert attention even when they were task-irrelevant, thus revealing involuntary, automatic processing. Sex of observers and specific emotional scene content (e.g., male-to-female-aggression, families and babies, etc.) interactively modulated covert attention, depending on adaptive priorities and goals for each sex, both for pleasant and unpleasant content. The attentional system exhibits domain-specific and sex-specific biases and attunements, probably rooted in evolutionary pressures to enhance reproductive and protective success. Emotional cues selectively capture covert attention based on their bio-social significance. Copyright © 2017 Elsevier Inc. All rights reserved.

  10. Adaptive attunement of selective covert attention to evolutionary-relevant emotional visual scenes

    OpenAIRE

    Fernández-Martín, Andrés (UNIR); Gutiérrez-García, Aida; Capafons, Juan; Calvo, Manuel G

    2017-01-01

    We investigated selective attention to emotional scenes in peripheral vision, as a function of adaptive relevance of scene affective content for male and female observers. Pairs of emotional neutral images appeared peripherally with perceptual stimulus differences controlled while viewers were fixating on a different stimulus in central vision. Early selective orienting was assessed by the probability of directing the first fixation towards either scene, and the time until first fixation. Emo...

  11. Knowledge Guided Disambiguation for Large-Scale Scene Classification With Multi-Resolution CNNs

    Science.gov (United States)

    Wang, Limin; Guo, Sheng; Huang, Weilin; Xiong, Yuanjun; Qiao, Yu

    2017-04-01

    Convolutional Neural Networks (CNNs) have made remarkable progress on scene recognition, partially due to recent large-scale scene datasets such as Places and Places2. Scene categories are often defined by multi-level information, including local objects, global layout, and background environment, thus leading to large intra-class variations. In addition, with the increasing number of scene categories, label ambiguity has become another crucial issue in large-scale classification. This paper focuses on large-scale scene recognition and makes two major contributions to tackle these issues. First, we propose a multi-resolution CNN architecture that captures visual content and structure at multiple levels. The multi-resolution CNNs are composed of coarse-resolution CNNs and fine-resolution CNNs, which are complementary to each other. Second, we design two knowledge-guided disambiguation techniques to deal with the problem of label ambiguity. (i) We exploit the knowledge from the confusion matrix computed on validation data to merge ambiguous classes into a super category. (ii) We utilize the knowledge of extra networks to produce a soft label for each image. The super categories or soft labels are then employed to guide CNN training on Places2. We conduct extensive experiments on three large-scale image datasets (ImageNet, Places, and Places2), demonstrating the effectiveness of our approach. Furthermore, our method took part in two major scene recognition challenges, achieving second place in the Places2 challenge at ILSVRC 2015 and first place in the LSUN challenge at CVPR 2016. Finally, we directly test the learned representations on other scene benchmarks and obtain new state-of-the-art results on MIT Indoor67 (86.7%) and SUN397 (72.0%). We release the code and models at https://github.com/wanglimin/MRCNN-Scene-Recognition.
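
    A minimal sketch of the soft-label disambiguation idea described above, in which the training target blends the one-hot ground truth with a soft label produced by an extra network; the blending weight and the toy numbers are assumptions for illustration, not values from the paper or its released code.

    # Sketch of the soft-label disambiguation idea (illustrative, not the released code):
    # the training target for an image is a blend of its one-hot ground-truth label and
    # a soft label produced by an extra ("knowledge") network, which down-weights the
    # penalty for confusing closely related scene categories.
    import numpy as np

    def blended_target(one_hot, soft_label, alpha=0.7):
        """alpha weights the hard label; (1 - alpha) weights the knowledge network."""
        return alpha * one_hot + (1.0 - alpha) * soft_label

    def cross_entropy(pred_probs, target):
        return -np.sum(target * np.log(pred_probs + 1e-12))

    # Toy example with 3 scene classes.
    one_hot = np.array([1.0, 0.0, 0.0])
    soft = np.array([0.6, 0.3, 0.1])          # extra network finds class 1 plausible too
    pred = np.array([0.5, 0.4, 0.1])
    print(cross_entropy(pred, blended_target(one_hot, soft)))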

  12. Representations and Techniques for 3D Object Recognition and Scene Interpretation

    CERN Document Server

    Hoiem, Derek

    2011-01-01

    One of the grand challenges of artificial intelligence is to enable computers to interpret 3D scenes and objects from imagery. This book organizes and introduces major concepts in 3D scene and object representation and inference from still images, with a focus on recent efforts to fuse models of geometry and perspective with statistical machine learning. The book is organized into three sections: (1) Interpretation of Physical Space; (2) Recognition of 3D Objects; and (3) Integrated 3D Scene Interpretation. The first discusses representations of spatial layout and techniques to interpret physi

  13. Embedded and real-time vehicle detection system for challenging on-road scenes

    Science.gov (United States)

    Gu, Qin; Yang, Jianyu; Kong, Lingjiang; Yan, Wei Qi; Klette, Reinhard

    2017-06-01

    Vehicle detection is an important topic for advanced driver-assistance systems. This paper proposes an adaptive approach for an embedded system by focusing on monocular vehicle detection in real time, also aiming at being accurate under challenging conditions. Scene classification is accomplished by using a simplified convolutional neural network with hypothesis generation by SoftMax regression. The output is consequently taken into account to optimize detection parameters for hypothesis generation and testing. Thus, we offer a sample-reorganization mechanism to improve the performance of vehicle hypothesis verification. A hypothesis leap mechanism is used to improve the operating efficiency of the on-board system. A practical on-road test is employed to verify vehicle detection (i.e., accuracy) and also the performance of the designed on-board system regarding speed.

  14. Automating the construction of scene classifiers for content-based video retrieval

    NARCIS (Netherlands)

    Khan, L.; Israël, Menno; Petrushin, V.A.; van den Broek, Egon; van der Putten, Peter

    2004-01-01

    This paper introduces a real-time automatic scene classifier within content-based video retrieval. In our envisioned approach, end users like documentalists, not image processing experts, build classifiers interactively, by simply indicating positive examples of a scene. Classification consists of a

  15. Number of perceptually distinct surface colors in natural scenes.

    Science.gov (United States)

    Marín-Franch, Iván; Foster, David H

    2010-09-30

    The ability to perceptually identify distinct surfaces in natural scenes by virtue of their color depends not only on the relative frequency of surface colors but also on the probabilistic nature of observer judgments. Previous methods of estimating the number of discriminable surface colors, whether based on theoretical color gamuts or recorded from real scenes, have taken a deterministic approach. Thus, a three-dimensional representation of the gamut of colors is divided into elementary cells or points which are spaced at one discrimination-threshold unit intervals and which are then counted. In this study, information-theoretic methods were used to take into account both differing surface-color frequencies and observer response uncertainty. Spectral radiances were calculated from 50 hyperspectral images of natural scenes and were represented in a perceptually almost uniform color space. The average number of perceptually distinct surface colors was estimated as 7.3 × 10³, much smaller than that based on counting methods. This number is also much smaller than the number of distinct points in a scene that are, in principle, available for reliable identification under illuminant changes, suggesting that color constancy, or the lack of it, does not generally determine the limit on the use of color for surface identification.

  16. Characteristics of nontrauma scene flights for air medical transport.

    Science.gov (United States)

    Krebs, Margaret G; Fletcher, Erica N; Werman, Howard; McKenzie, Lara B

    2014-01-01

    Little is known about the use of air medical transport for patients with medical, rather than traumatic, emergencies. This study describes the practices of air transport programs, with respect to nontrauma scene responses, in several areas throughout the United States and Canada. A descriptive, retrospective study was conducted of all nontrauma scene flights from 2008 and 2009. Flight information and patient demographic data were collected from 5 air transport programs. Descriptive statistics were used to examine indications for transport, Glasgow Coma Scale scores, and loaded miles traveled. A total of 1,785 nontrauma scene flights were evaluated. The percentage of scene flights contributed by nontraumatic emergencies varied between programs, ranging from 0% to 44.3%. The most common indication for transport was cardiac: non-ST-segment elevation myocardial infarction (22.9%). Cardiac arrest was the indication for transport in 2.5% of flights. One air transport program reported a high percentage (49.4%) of neurologic (stroke) flights. The use of air transport for nontraumatic emergencies varied considerably between air transport programs and regions. More research is needed to evaluate which nontraumatic emergencies benefit from air transport. National guidelines regarding the use of air transport for nontraumatic emergencies are needed. Copyright © 2014 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.

  17. Virtual environments for scene of crime reconstruction and analysis

    Science.gov (United States)

    Howard, Toby L. J.; Murta, Alan D.; Gibson, Simon

    2000-02-01

    This paper describes research conducted in collaboration with Greater Manchester Police (UK) to evaluate the utility of Virtual Environments for scene-of-crime analysis, forensic investigation, and law enforcement briefing and training. We present an illustrated case study of the construction of a high-fidelity virtual environment, intended to match a particular real-life crime scene as closely as possible. We describe and evaluate the combination of several approaches, including: the use of the Manchester Scene Description Language for constructing complex geometrical models; the application of a radiosity rendering algorithm with several novel features based on human perceptual considerations; texture extraction from forensic photography; and experiments with interactive walkthroughs and large-screen stereoscopic display of the virtual environment implemented using the MAVERIK system. We also discuss the potential applications of Virtual Environment techniques in the law enforcement and forensic communities.

  18. HOMA1-IR and HOMA2-IR indexes for the identification of insulin resistance and metabolic syndrome: the Brazilian Metabolic Syndrome Study (BRAMS)

    OpenAIRE

    Geloneze, Bruno; Vasques, Ana Carolina Junqueira; Stabe, Christiane França Camargo; Pareja, José Carlos; Rosado, Lina Enriqueta Frandsen Paez de Lima; Queiroz, Elaine Cristina de; Tambascia, Marcos Antonio

    2009-01-01

    OBJECTIVE: To investigate cut-off values for HOMA1-IR and HOMA2-IR to identify insulin resistance (IR) and metabolic syndrome (MS), and to assess the association of the indexes with components of the MS. METHODS: Nondiabetic subjects from the Brazilian Metabolic Syndrome Study were studied (n = 1,203, 18 to 78 years). The cut-off values for IR were determined from the 90th percentile in the healthy group (n = 297) and, for MS, a ROC curve was generated for the total sample. RESULTS: In the he...

  19. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    Science.gov (United States)

    2018-02-15

    [Extraction-garbled report documentation page for "Stream Processing Algorithms for Dynamic 3D Scene Analysis" (contract FA8750-14-2-0072, program element 62788F). Recoverable list-of-figures entries: "The 3D processing pipeline flowchart showing key modules" and "Overall view (data flow) of the proposed pipeline"; the surviving text fragment mentions fusing depth masks of the scene obtained from structure-from-motion and bundle adjustment.]

  20. Children's Development of Analogical Reasoning: Insights from Scene Analogy Problems

    Science.gov (United States)

    Richland, Lindsey E.; Morrison, Robert G.; Holyoak, Keith J.

    2006-01-01

    We explored how relational complexity and featural distraction, as varied in scene analogy problems, affect children's analogical reasoning performance. Results with 3- and 4-year-olds, 6- and 7-year-olds, 9- to 11-year-olds, and 13- and 14-year-olds indicate that when children can identify the critical structural relations in a scene analogy…

  1. The Influence of Color on the Perception of Scene Gist

    Science.gov (United States)

    Castelhano, Monica S.; Henderson, John M.

    2008-01-01

    In 3 experiments the authors used a new contextual bias paradigm to explore how quickly information is extracted from a scene to activate gist, whether color contributes to this activation, and how color contributes, if it does. Participants were shown a brief presentation of a scene followed by the name of a target object. The target object could…

  2. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Directory of Open Access Journals (Sweden)

    Nordström Henrik

    2012-05-01

    Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes.

  3. Emotional event-related potentials are larger to figures than scenes but are similarly reduced by inattention

    Science.gov (United States)

    2012-01-01

    Background In research on event-related potentials (ERP) to emotional pictures, greater attention to emotional than neutral stimuli (i.e., motivated attention) is commonly indexed by two difference waves between emotional and neutral stimuli: the early posterior negativity (EPN) and the late positive potential (LPP). Evidence suggests that if attention is directed away from the pictures, then the emotional effects on EPN and LPP are eliminated. However, a few studies have found residual, emotional effects on EPN and LPP. In these studies, pictures were shown at fixation, and picture composition was that of simple figures rather than that of complex scenes. Because figures elicit larger LPP than do scenes, figures might capture and hold attention more strongly than do scenes. Here, we showed negative and neutral pictures of figures and scenes and tested first, whether emotional effects are larger to figures than scenes for both EPN and LPP, and second, whether emotional effects on EPN and LPP are reduced less for unattended figures than scenes. Results Emotional effects on EPN and LPP were larger for figures than scenes. When pictures were unattended, emotional effects on EPN increased for scenes but tended to decrease for figures, whereas emotional effects on LPP decreased similarly for figures and scenes. Conclusions Emotional effects on EPN and LPP were larger for figures than scenes, but these effects did not resist manipulations of attention more strongly for figures than scenes. These findings imply that the emotional content captures attention more strongly for figures than scenes, but that the emotional content does not hold attention more strongly for figures than scenes. PMID:22607397

  4. Short report: the effect of expertise in hiking on recognition memory for mountain scenes.

    Science.gov (United States)

    Kawamura, Satoru; Suzuki, Sae; Morikawa, Kazunori

    2007-10-01

    The nature of an expert memory advantage that does not depend on stimulus structure or chunking was examined, using more ecologically valid stimuli in the context of a more natural activity than previously studied domains. Do expert hikers and novice hikers see and remember mountain scenes differently? In the present experiment, 18 novice hikers and 17 expert hikers were presented with 60 photographs of scenes from hiking trails. These scenes differed in the degree of functional aspects that implied some action possibilities or dangers. The recognition test revealed that the memory performance of experts was significantly superior to that of novices for scenes with highly functional aspects. The memory performance for the scenes with few functional aspects did not differ between novices and experts. These results suggest that experts pay more attention to, and thus remember better, scenes with functional meanings than do novices.

  5. OpenSceneGraph 3 Cookbook

    CERN Document Server

    Wang, Rui

    2012-01-01

    This is a cookbook full of recipes with practical examples enriched with code and the required screenshots for easy and quick comprehension. You should be familiar with the basic concepts of the OpenSceneGraph API and should be able to write simple programs. Some OpenGL and math knowledge will help a lot, too.

  6. Dynamic modeling method for infrared smoke based on enhanced discrete phase model

    Science.gov (United States)

    Zhang, Zhendong; Yang, Chunling; Zhang, Yan; Zhu, Hongbo

    2018-03-01

    The dynamic modeling of infrared (IR) smoke plays an important role in IR scene simulation systems, and its accuracy directly influences the fidelity of the system. However, current IR smoke models cannot provide high fidelity, because certain physical characteristics are frequently ignored in the fluid simulation: the discrete phase is simplified as a continuous phase, and the spinning of the IR decoy missile body is ignored. To address this defect, this paper proposes a dynamic modeling method for IR smoke based on an enhanced discrete phase model (DPM). A mathematical simulation model based on the enhanced DPM is built and a dynamic computational fluid mesh is generated. The dynamic model of IR smoke is then established using an extended equivalent-blackbody-molecule model. Experiments demonstrate that this model realizes a dynamic method for modeling IR smoke with higher fidelity.

  7. AUTOMATIC GENERATION OF ROAD INFRASTRUCTURE IN 3D FOR VEHICLE SIMULATORS

    Directory of Open Access Journals (Sweden)

    Adam Orlický

    2017-12-01

    One of the modern methods of testing new systems and interfaces in vehicles is testing in a vehicle simulator. Providing quality models of virtual scenes is one of the tasks of driver-car interaction interface simulation. Nowadays, there exist many programs for creating 3D models of road infrastructure, but most of these programs are very expensive or cannot export models for further use. Therefore, a plug-in has been developed at the Faculty of Transportation Sciences in Prague. It can generate road infrastructure according to the Czech standard for road design (CSN 73 6101). The uniqueness of this plug-in is that it is the first tool for generating road infrastructure in NURBS representation. This type of representation yields more exact models and makes it possible to optimize the transfer needed to create quality models for vehicle simulators. The scenes created by this plug-in were tested on vehicle simulators. The results have shown that drivers had a much better feeling with the newly created scenes in comparison to the previous ones.

  8. The Anthropo-scene: A guide for the perplexed.

    Science.gov (United States)

    Lorimer, Jamie

    2017-02-01

    The scientific proposal that the Earth has entered a new epoch as a result of human activities - the Anthropocene - has catalysed a flurry of intellectual activity. I introduce and review the rich, inchoate and multi-disciplinary diversity of this Anthropo-scene. I identify five ways in which the concept of the Anthropocene has been mobilized: scientific question, intellectual zeitgeist, ideological provocation, new ontologies and science fiction. This typology offers an analytical framework for parsing this diversity, for understanding the interactions between different ways of thinking in the Anthropo-scene, and thus for comprehending elements of its particular and peculiar sociabilities. Here I deploy this framework to situate Earth Systems Science within the Anthropo-scene, exploring both the status afforded science in discussions of this new epoch, and the various ways in which the other means of engaging with the concept come to shape the conduct, content and politics of this scientific enquiry. In conclusion the paper reflects on the potential of the Anthropocene for new modes of academic praxis.

  9. A Virtual Environments Editor for Driving Scenes

    Directory of Open Access Journals (Sweden)

    Ronald R. Mourant

    2003-12-01

    The goal of this project was to enable the rapid creation of three-dimensional virtual driving environments. We designed and implemented a high-level scene editor that allows a user to construct a driving environment by pasting icons that represent (1) road segments, (2) road signs, (3) trees, and (4) buildings. These icons represent two- and three-dimensional objects that have been predesigned. Icons can be placed in the scene at specific locations (x, y, and z coordinates). The editor includes the capability for a user to "drive" a vehicle using a computer mouse for steering, accelerating, and braking. At any time during the process of building a virtual environment, a user may switch to "Run Mode" and inspect the three-dimensional scene by "driving" through it using the mouse. Adjustments and additions can be made to the virtual environment by going back to "Build Mode". Once a user is satisfied with the three-dimensional virtual environment, it can be saved in a file. The file can be used with Java3D software that enables the traversal of three-dimensional environments. The process of building virtual environments from predesigned icons can be applied to many other application areas. It will enable novice computer users to rapidly construct and use three-dimensional virtual environments.

  10. Where and when Do Objects Become Scenes?

    Directory of Open Access Journals (Sweden)

    Jiye G. Kim

    2011-05-01

    Scenes can be understood with extraordinary speed and facility, not merely as an inventory of individual objects but in the coding of the relations among them. These relations, which can be readily described by prepositions or gerunds (e.g., a hand holding a pen), allow the explicit representation of complex structures. Where in the brain are inter-object relations specified? In a series of fMRI experiments, we show that pairs of objects shown as interacting elicit greater activity in LOC than when the objects are depicted side-by-side (e.g., a hand beside a pen). Other visual areas, PPA, IPS, and DLPFC, did not show this sensitivity to scene relations, rendering it unlikely that the relations were computed in these regions. Using EEG and TMS, we further show that LOC's sensitivity to object interactions arises around 170 ms post stimulus onset and that disruption of normal LOC activity (but not IPS activity) is detrimental to behavioral sensitivity to inter-object relations. Insofar as LOC is the earliest cortical region where shape is distinguished from texture, our results provide strong evidence that scene-like relations are achieved simultaneously with the perception of object shape and not inferred at some stage following object identification.

  11. Single-View 3D Scene Reconstruction and Parsing by Attribute Grammar.

    Science.gov (United States)

    Liu, Xiaobai; Zhao, Yibiao; Zhu, Song-Chun

    2018-03-01

    In this paper, we present an attribute grammar for solving two coupled tasks: i) parsing a 2D image into semantic regions; and ii) recovering the 3D scene structures of all regions. The proposed grammar consists of a set of production rules, each describing a kind of spatial relation between planar surfaces in 3D scenes. These production rules are used to decompose an input image into a hierarchical parse graph representation where each graph node indicates a planar surface or a composite surface. Different from other stochastic image grammars, the proposed grammar augments each graph node with a set of attribute variables to depict scene-level global geometry, e.g., camera focal length, or local geometry, e.g., surface normals and contact lines between surfaces. These geometric attributes impose constraints between a node and its offspring in the parse graph. Under a probabilistic framework, we develop a Markov Chain Monte Carlo method to construct a parse graph that optimizes 2D image recognition and 3D scene reconstruction simultaneously. We evaluated our method on both public benchmarks and newly collected datasets. Experiments demonstrate that the proposed method is capable of achieving state-of-the-art scene reconstruction from a single image.

  12. CGI delay compensation. [Computer Generated Image

    Science.gov (United States)

    Mcfarland, R. E.

    1986-01-01

    Computer-generated graphics in real-time helicopter simulation produce objectionable scene-presentation time delays. In the flight simulation laboratory at Ames Research Center, it has been determined that these delays have an adverse influence on pilot performance during aggressive tasks such as nap-of-the-earth (NOE) maneuvers. With contemporary equipment, computer-generated image (CGI) time delays are an unavoidable consequence of the operations required for scene generation. However, provided that magnitude distortions at higher frequencies are tolerable, delay compensation is possible over a restricted frequency range. This range, assumed to have an upper limit of perhaps 10 or 15 rad/sec, conforms approximately to the bandwidth associated with helicopter handling-qualities research. A compensation algorithm is introduced here and evaluated in terms of tradeoffs in frequency responses. The algorithm has a discrete basis and accommodates both a large, constant transport delay interval and a periodic delay interval, as associated with asynchronous operations.
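
    As a rough illustration of the kind of tradeoff described, a simple discrete-time delay compensator can be built by linear extrapolation (a first-order predictor): it restores phase over a limited bandwidth while amplifying magnitude at higher frequencies. This is a generic sketch, not the specific algorithm introduced in the paper; the sample rate and delay below are assumed values.

    # Minimal sketch of transport-delay compensation by linear extrapolation
    # (a first-order predictor), not the specific algorithm introduced in the paper.
    # Predicting d samples ahead restores phase lag over a limited bandwidth at the
    # cost of magnitude distortion at higher frequencies.
    import numpy as np

    def compensate(signal, delay_samples):
        """Predict each sample delay_samples ahead from its backward difference."""
        x = np.asarray(signal, dtype=float)
        out = np.copy(x)                     # first sample has no backward difference
        out[1:] = x[1:] + delay_samples * (x[1:] - x[:-1])
        return out

    # Example: a 2 rad/s sine sampled at 60 Hz, compensating a 50 ms (3-sample) delay.
    fs, delay_s = 60.0, 0.050
    t = np.arange(0, 2, 1 / fs)
    cmd = np.sin(2.0 * t)
    print(compensate(cmd, delay_s * fs)[:5])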

  13. Effects of scene content and layout on the perceived light direction in 3D spaces.

    Science.gov (United States)

    Xia, Ling; Pont, Sylvia C; Heynderickx, Ingrid

    2016-08-01

    The lighting and furnishing of an interior space (i.e., the reflectance of its materials, the geometries of the furnishings, and their arrangement) determine the appearance of this space. Conversely, human observers infer lighting properties from the space's appearance. We conducted two psychophysical experiments to investigate how the perception of the light direction is influenced by a scene's objects and their layout using real scenes. In the first experiment, we confirmed that the shape of the objects in the scene and the scene layout influence the perceived light direction. In the second experiment, we systematically investigated how specific shape properties influenced the estimation of the light direction. The results showed that increasing the number of visible faces of an object, ultimately using globally spherical shapes in the scene, supported the veridicality of the estimated light direction. Furthermore, symmetric arrangements in the scene improved the estimation of the tilt direction. Thus, human perception of light should integrally consider materials, scene content, and layout.

  14. Overt attention in natural scenes: objects dominate features.

    Science.gov (United States)

    Stoll, Josef; Thrun, Michael; Nuthmann, Antje; Einhäuser, Wolfgang

    2015-02-01

    Whether overt attention in natural scenes is guided by object content or by low-level stimulus features has become a matter of intense debate. Experimental evidence seemed to indicate that once object locations in a scene are known, salience models provide little extra explanatory power. This approach has recently been criticized for using inadequate models of early salience; and indeed, state-of-the-art salience models outperform trivial object-based models that assume a uniform distribution of fixations on objects. Here we propose to use object-based models that take a preferred viewing location (PVL) close to the centre of objects into account. In experiment 1, we demonstrate that, when including this comparably subtle modification, object-based models again are at par with state-of-the-art salience models in predicting fixations in natural scenes. One possible interpretation of these results is that objects rather than early salience dominate attentional guidance. In this view, early-salience models predict fixations through the correlation of their features with object locations. To test this hypothesis directly, in two additional experiments we reduced low-level salience in image areas of high object content. For these modified stimuli, the object-based model predicted fixations significantly better than early salience. This finding held in an object-naming task (experiment 2) and a free-viewing task (experiment 3). These results provide further evidence for object-based fixation selection--and by inference object-based attentional guidance--in natural scenes. Copyright © 2014 The Authors. Published by Elsevier Ltd.. All rights reserved.

  15. Neural Correlates of Divided Attention in Natural Scenes.

    Science.gov (United States)

    Fagioli, Sabrina; Macaluso, Emiliano

    2016-09-01

    Individuals are able to split attention between separate locations, but divided spatial attention incurs the additional requirement of monitoring multiple streams of information. Here, we investigated divided attention using photos of natural scenes, where the rapid categorization of familiar objects and prior knowledge about the likely positions of objects in the real world might affect the interplay between these spatial and nonspatial factors. Sixteen participants underwent fMRI during an object detection task. They were presented with scenes containing either a person or a car, located on the left or right side of the photo. Participants monitored either one or both object categories, in one or both visual hemifields. First, we investigated the interplay between spatial and nonspatial attention by comparing conditions of divided attention between categories and/or locations. We then assessed the contribution of top-down processes versus stimulus-driven signals by separately testing the effects of divided attention in target and nontarget trials. The results revealed activation of a bilateral frontoparietal network when dividing attention between the two object categories versus attending to a single category but no main effect of dividing attention between spatial locations. Within this network, the left dorsal premotor cortex and the left intraparietal sulcus were found to combine task- and stimulus-related signals. These regions showed maximal activation when participants monitored two categories at spatially separate locations and the scene included a nontarget object. We conclude that the dorsal frontoparietal cortex integrates top-down and bottom-up signals in the presence of distractors during divided attention in real-world scenes.

  16. Notes on Women Who Rock: Making Scenes, Building Communities: Participatory Research, Community Engagement, and Archival Practice

    OpenAIRE

    Michelle Habell-Pallán; Sonnet Retman; Angelica Macklin

    2014-01-01

    Since 2011, Women Who Rock (WWR) has brought together scholars, archivists, musicians, media-makers, performers, artists, and activists to explore the role of women and popular music in the creation of cultural scenes and social justice movements in the Americas and beyond. The project promotes generative dialogue and documentation by “encompassing several interwoven components: project-based coursework at the graduate and undergraduate levels; an annual participant-driven conference and film...

  17. Gist in time: Scene semantics and structure enhance recall of searched objects.

    Science.gov (United States)

    Josephs, Emilie L; Draschkow, Dejan; Wolfe, Jeremy M; Võ, Melissa L-H

    2016-09-01

    Previous work has shown that recall of objects that are incidentally encountered as targets in visual search is better than recall of objects that have been intentionally memorized (Draschkow, Wolfe, & Võ, 2014). However, this counter-intuitive result is not seen when these tasks are performed with non-scene stimuli. The goal of the current paper is to determine what features of search in a scene contribute to higher recall rates when compared to a memorization task. In each of four experiments, we compare the free recall rate for target objects following a search to the rate following a memorization task. Across the experiments, the stimuli include progressively more scene-related information. Experiment 1 provides the spatial relations between objects. Experiment 2 adds relative size and depth of objects. Experiments 3 and 4 include scene layout and semantic information. We find that search leads to better recall than explicit memorization in cases where scene layout and semantic information are present, as long as the participant has ample time (2500ms) to integrate this information with knowledge about the target object (Exp. 4). These results suggest that the integration of scene and target information not only leads to more efficient search, but can also contribute to stronger memory representations than intentional memorization. Copyright © 2016 Elsevier B.V. All rights reserved.

  18. The development of brain systems associated with successful memory retrieval of scenes.

    Science.gov (United States)

    Ofen, Noa; Chai, Xiaoqian J; Schuil, Karen D I; Whitfield-Gabrieli, Susan; Gabrieli, John D E

    2012-07-18

    Neuroanatomical and psychological evidence suggests prolonged maturation of declarative memory systems in the human brain from childhood into young adulthood. Here, we examine functional brain development during successful memory retrieval of scenes in children, adolescents, and young adults ages 8-21 via functional magnetic resonance imaging. Recognition memory improved with age, specifically for accurate identification of studied scenes (hits). Successful retrieval (correct old-new decisions for studied vs unstudied scenes) was associated with activations in frontal, parietal, and medial temporal lobe (MTL) regions. Activations associated with successful retrieval increased with age in left parietal cortex (BA7), bilateral prefrontal, and bilateral caudate regions. In contrast, activations associated with successful retrieval did not change with age in the MTL. Psychophysiological interaction analysis revealed that there were, however, age-related changes in differential connectivity for successful retrieval between MTL and prefrontal regions. These results suggest that neocortical regions related to attentional or strategic control show the greatest developmental changes for memory retrieval of scenes. Furthermore, these results suggest that functional interactions between MTL and prefrontal regions during memory retrieval also develop into young adulthood. The developmental increase of memory-related activations in frontal and parietal regions for retrieval of scenes and the absence of such an increase in MTL regions parallels what has been observed for memory encoding of scenes.

  19. Image Chunking: Defining Spatial Building Blocks for Scene Analysis.

    Science.gov (United States)

    1987-04-01

    [OCR-garbled report documentation page. Recoverable information: "Image Chunking: Defining Spatial Building Blocks for Scene Analysis", James V. Mahoney, MIT Artificial Intelligence Laboratory, Technical Report 980.]

  20. Global Transsaccadic Change Blindness During Scene Perception

    National Research Council Canada - National Science Library

    Henderson, John

    2003-01-01

    .... The results from two experiments demonstrated a global transsaccadic change-blindness effect, suggesting that point-by-point visual representations are not functional across saccades during complex scene perception.

  1. Spin orientations of the spin-half Ir(4+) ions in Sr3NiIrO6, Sr2IrO4, and Na2IrO3: Density functional, perturbation theory, and Madelung potential analyses.

    Science.gov (United States)

    Gordon, Elijah E; Xiang, Hongjun; Köhler, Jürgen; Whangbo, Myung-Hwan

    2016-03-21

    The spins of the low-spin Ir(4+) (S = 1/2, d(5)) ions at the octahedral sites of the oxides Sr3NiIrO6, Sr2IrO4, and Na2IrO3 exhibit preferred orientations with respect to their IrO6 octahedra. We evaluated the magnetic anisotropies of these S = 1/2 ions on the basis of density functional theory (DFT) calculations including spin-orbit coupling (SOC), and probed their origin by performing perturbation theory analyses with SOC as perturbation within the LS coupling scheme. The observed spin orientations of Sr3NiIrO6 and Sr2IrO4 are correctly predicted by DFT calculations, and are accounted for by the perturbation theory analysis. As for the spin orientation of Na2IrO3, both experimental studies and DFT calculations have not been unequivocal. Our analysis reveals that the Ir(4+) spin orientation of Na2IrO3 should have nonzero components along the c- and a-axis directions. The spin orientations determined by DFT calculations are sensitive to the accuracy of the crystal structures employed, which is explained by perturbation theory analyses when interactions between adjacent Ir(4+) ions are taken into consideration. There are indications implying that the 5d electrons of Na2IrO3 are less strongly localized compared with those of Sr3NiIrO6 and Sr2IrO4. This implication was confirmed by showing that the Madelung potentials of the Ir(4+) ions are less negative in Na2IrO3 than in Sr3NiIrO6 and Sr2IrO4. Most transition-metal S = 1/2 ions do have magnetic anisotropies because the SOC induces interactions among their crystal-field split d-states, and the associated mixing of the states modifies only the orbital parts of the states. This finding cannot be mimicked by a spin Hamiltonian because this model Hamiltonian lacks the orbital degree of freedom, thereby leading to the spin-half syndrome. The spin-orbital entanglement for the 5d spin-half ions Ir(4+) is not as strong as has been assumed.

  2. High-speed three-dimensional shape measurement for dynamic scenes using bi-frequency tripolar pulse-width-modulation fringe projection

    Science.gov (United States)

    Zuo, Chao; Chen, Qian; Gu, Guohua; Feng, Shijie; Feng, Fangxiaoyu; Li, Rubin; Shen, Guochen

    2013-08-01

    This paper introduces a high-speed three-dimensional (3-D) shape measurement technique for dynamic scenes using bi-frequency tripolar pulse-width-modulation (TPWM) fringe projection. Two wrapped phase maps with different wavelengths can be obtained simultaneously by our bi-frequency phase-shifting algorithm. The two phase maps are then unwrapped using a simple look-up-table-based number-theoretical approach. To guarantee the robustness of phase unwrapping as well as the high sinusoidality of the projected patterns, the TPWM technique is employed to generate ideal fringe patterns with slight defocus. We detail our technique, including its principle, pattern design, and system setup. Several experiments on dynamic scenes were performed, verifying that our method can achieve a speed of 1250 frames per second for fast, dense, and accurate 3-D measurements.
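
    A minimal sketch of dual-wavelength phase unwrapping using the synthetic (beat) wavelength, which illustrates the principle behind a look-up-table, number-theoretic approach without reproducing the authors' implementation; the fringe wavelengths and the ramp test signal are assumptions for the demonstration.

    # Sketch of dual-wavelength phase unwrapping by the synthetic (beat) wavelength
    # (illustrative; not the authors' exact look-up-table implementation). Wrapped
    # phases phi1, phi2 are measured with fringe wavelengths lam1 < lam2 (in pixels);
    # the beat wavelength lam_eq = lam1*lam2/(lam2-lam1) must cover the full range.
    import numpy as np

    def unwrap_dual(phi1, phi2, lam1, lam2):
        """phi1, phi2: wrapped phases. Returns the unwrapped phi1."""
        lam_eq = lam1 * lam2 / (lam2 - lam1)
        phi_eq = np.mod(phi1 - phi2, 2 * np.pi)       # beat phase, unambiguous over lam_eq
        k = np.round((phi_eq * lam_eq / lam1 - phi1) / (2 * np.pi))  # fringe order of phi1
        return phi1 + 2 * np.pi * k

    # Toy check: a linear ramp that stays within one beat wavelength (126 px here).
    x = np.linspace(0.0, 120.0, 241)
    lam1, lam2 = 18.0, 21.0
    true1 = 2 * np.pi * x / lam1
    phi1 = np.angle(np.exp(1j * true1))
    phi2 = np.angle(np.exp(1j * 2 * np.pi * x / lam2))
    print(np.allclose(unwrap_dual(phi1, phi2, lam1, lam2), true1))  # True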

  3. Ontology of a scene based on Java 3D architecture.

    Directory of Open Access Journals (Sweden)

    Rubén González Crespo

    2009-12-01

    The present article approaches the class hierarchy of a scene built with the Java 3D architecture in order to develop an ontology of a scene from the essential semantic components needed for the semantic structuring of the Web3D. Java was selected because the language recommended by the W3C Consortium for the development of Web3D-oriented applications based on the X3D standard is Xj3D, whose schemas are based on the Java3D architecture. First, the domain and scope of the ontology are identified, defining the classes and subclasses that make up the Java3D architecture and the essential elements of a scene: its point of origin, the rotation and translation fields, the limits of the scene, and the definition of shaders. Next, the slots are defined and declared in RDF as a framework for describing the properties of the established classes, identifying the domain and range of each class. The composition of the OWL ontology is then developed in SWOOP. Finally, instantiations of the ontology are performed for an Iconosphere object from the defined class expressions.
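
    A minimal sketch of how a few scene classes, a subclass hierarchy, and a slot could be declared in OWL using the rdflib Python library; the class and property names below are illustrative stand-ins, not the ontology actually developed in SWOOP in the article.

    # Minimal sketch of declaring scene classes and a slot in OWL with rdflib
    # (class and property names are illustrative assumptions).
    from rdflib import Graph, Namespace
    from rdflib.namespace import OWL, RDF, RDFS

    SCENE = Namespace("http://example.org/scene-ontology#")
    g = Graph()
    g.bind("scene", SCENE)

    for cls in ("Scene", "TransformGroup", "Shape3D", "Shader"):
        g.add((SCENE[cls], RDF.type, OWL.Class))

    # Subclass relations mirroring a scene-graph hierarchy.
    g.add((SCENE.TransformGroup, RDFS.subClassOf, SCENE.Scene))
    g.add((SCENE.Shape3D, RDFS.subClassOf, SCENE.Scene))

    # A slot (object property) relating a scene node to its shader.
    g.add((SCENE.hasShader, RDF.type, OWL.ObjectProperty))
    g.add((SCENE.hasShader, RDFS.domain, SCENE.Shape3D))
    g.add((SCENE.hasShader, RDFS.range, SCENE.Shader))

    print(g.serialize(format="turtle"))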

  4. Modelling Technology for Building Fire Scene with Virtual Geographic Environment

    Science.gov (United States)

    Song, Y.; Zhao, L.; Wei, M.; Zhang, H.; Liu, W.

    2017-09-01

    Building fires are hazardous events that can lead to disaster and massive destruction. The management and disposal of building fires has always attracted much interest from researchers. An integrated Virtual Geographic Environment (VGE) is a good choice for building fire safety management and emergency decisions, in which a more realistic and richer fire process can be computed and obtained dynamically, and the results of fire simulations and analyses can be much more accurate as well. To model a building fire scene with VGE, the application requirements and modelling objectives of the building fire scene were analysed in this paper. Then, the four core elements of modelling a building fire scene (the building space environment, the fire event, the indoor Fire Extinguishing System (FES) and the indoor crowd) were implemented, and the relationships between the elements were discussed. Finally, with the theory and framework of VGE, the technology of a building fire scene system with VGE was designed in terms of the data environment, the model environment, the expression environment, and the collaborative environment. The functions and key techniques in each environment are also analysed, which may provide a reference for further development and other research on VGE.

  5. Guidance of Attention to Objects and Locations by Long-Term Memory of Natural Scenes

    Science.gov (United States)

    Becker, Mark W.; Rasmussen, Ian P.

    2008-01-01

    Four flicker change-detection experiments demonstrate that scene-specific long-term memory guides attention to both behaviorally relevant locations and objects within a familiar scene. Participants performed an initial block of change-detection trials, detecting the addition of an object to a natural scene. After a 30-min delay, participants…

  6. Semantic memory for contextual regularities within and across scene categories: evidence from eye movements.

    Science.gov (United States)

    Brockmole, James R; Le-Hoa Võ, Melissa

    2010-10-01

    When encountering familiar scenes, observers can use item-specific memory to facilitate the guidance of attention to objects appearing in known locations or configurations. Here, we investigated how memory for relational contingencies that emerge across different scenes can be exploited to guide attention. Participants searched for letter targets embedded in pictures of bedrooms. In a between-subjects manipulation, targets were either always on a bed pillow or randomly positioned. When targets were systematically located within scenes, search for targets became more efficient. Importantly, this learning transferred to bedrooms without pillows, ruling out learning that is based on perceptual contingencies. Learning also transferred to living room scenes, but it did not transfer to kitchen scenes, even though both scene types contained pillows. These results suggest that statistical regularities abstracted across a range of stimuli are governed by semantic expectations regarding the presence of target-predicting local landmarks. Moreover, explicit awareness of these contingencies led to a central tendency bias in recall memory for precise target positions that is similar to the spatial category effects observed in landmark memory. These results broaden the scope of conditions under which contextual cuing operates and demonstrate how semantic memory plays a causal and independent role in the learning of associations between objects in real-world scenes.

  7. Generation and application of soft X-rays by means of inverse Compton scattering between a high-quality electron beam and an IR laser

    International Nuclear Information System (INIS)

    Washio, M.; Sakaue, K.; Hama, Y.; Kamiya, Y.; Moriyama, R.; Hezume, K.; Saito, T.; Kuroda, R.; Kashiwagi, S.; Ushida, K.; Hayano, H.; Urakawa, J.

    2006-01-01

    A high-quality beam generation project, based on the High-Tech Research Center Project approved by the Ministry of Education, Culture, Sports, Science and Technology in 1999, has been conducted by the Advanced Research Institute for Science and Engineering, Waseda University. In the project, a laser photocathode RF gun has been selected as the high-quality electron beam source. RF cavities with low dark current, made by a diamond-turning technique, have been successfully manufactured. A low-emittance electron beam was realized by adopting a modified laser injection technique. The obtained normalized emittance was about 3 mm·mrad at an electron charge of 100 pC. Soft X-ray generation at an energy of 370 eV, which lies in the so-called 'water window', has been performed by inverse Compton scattering in collisions between the IR laser and the low-emittance electron beam. (authors)
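
    A quick kinematic check of the reported photon energy: for head-on inverse Compton scattering the back-scattered photon energy is approximately 4γ² times the laser photon energy. Assuming a 1064 nm IR laser (the abstract does not state the laser wavelength), a 370 eV output implies electrons of roughly 4-5 MeV total energy, a plausible value for a photocathode RF gun.

    # Kinematic check for inverse Compton scattering (head-on, back-scattered):
    #   E_x ~ 4 * gamma^2 * E_laser   (valid while 4*gamma*E_laser << m_e c^2)
    # The 1064 nm laser wavelength is an assumption, not stated in the abstract.
    import math

    H_C_EV_NM = 1239.84          # h*c in eV.nm
    M_E_C2_MEV = 0.511           # electron rest energy

    E_laser_eV = H_C_EV_NM / 1064.0          # ~1.17 eV
    E_x_eV = 370.0                           # 'water window' soft X-ray energy

    gamma = math.sqrt(E_x_eV / (4.0 * E_laser_eV))
    print(f"gamma ~ {gamma:.1f}, electron energy ~ {gamma * M_E_C2_MEV:.1f} MeV")
    # gamma ~ 8.9, electron energy ~ 4.6 MeV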

  8. The elephant in the room: Inconsistency in scene viewing and representation.

    Science.gov (United States)

    Spotorno, Sara; Tatler, Benjamin W

    2017-10-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1-2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene's depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativeness) but not of inconsistent objects. Semantic and perceptual properties also interacted in influencing foveal inspection, as inconsistent objects were fixated longer than low but not high salience diagnostic objects. While not studied in direct competition with each other (each studied in competition with diagnostic objects), we found that inconsistent objects were fixated earlier and for longer than consistent but marginally informative objects. In change detection (Experiment 3), perceptual guidance overshadowed semantic guidance, promoting detection of highly salient changes. A residual advantage for diagnosticity over inconsistency emerged only when selection prioritization could not be based on low-level features. Overall these findings show that semantic inconsistency is not prioritized within a scene when competing with other relevant information that is essential to scene understanding and respects observers' expectations. Moreover, they reveal that the relative dominance of semantic or perceptual properties during selection depends on ongoing task requirements. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  9. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    Directory of Open Access Journals (Sweden)

    Linyi Li

    2017-01-01

    Full Text Available In recent years the spatial resolutions of remote sensing images have improved greatly. However, a higher spatial resolution image does not always lead to a better result in automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracts visual attention features through a multiscale process. A fuzzy classification method using visual attention features (FC-VAF) was then developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated on remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the comparison methods according to the quantitative accuracy evaluation indices. We also discuss the role and impact of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances research on digital image analysis and the applications of high resolution remote sensing images.

  10. Mirth and Murder: Crime Scene Investigation as a Work Context for Examining Humor Applications

    Science.gov (United States)

    Roth, Gene L.; Vivona, Brian

    2010-01-01

    Within work settings, humor is used by workers for a wide variety of purposes. This study examines humor applications of a specific type of worker in a unique work context: crime scene investigation. Crime scene investigators examine death and its details. Members of crime scene units observe death much more frequently than other police officers…

  11. Smoking scenes in popular Japanese serial television dramas: descriptive analysis during the same 3-month period in two consecutive years.

    Science.gov (United States)

    Kanda, Hideyuki; Okamura, Tomonori; Turin, Tanvir Chowdhury; Hayakawa, Takehito; Kadowaki, Takashi; Ueshima, Hirotsugu

    2006-06-01

    Japanese serial television dramas are becoming very popular overseas, particularly in other Asian countries. Exposure to smoking scenes in movies and television dramas has been known to trigger initiation of habitual smoking in young people. Smoking scenes in Japanese dramas may affect the smoking behavior of many young Asians. We examined smoking scenes and smoking-related items in serial television dramas targeting young audiences in Japan during the same season in two consecutive years. Fourteen television dramas targeting the young audience broadcast between July and September in 2001 and 2002 were analyzed. A total of 136 h 42 min of television programs were divided into unit scenes of 3 min (a total of 2734 unit scenes). All the unit scenes were reviewed for smoking scenes and smoking-related items. Of the 2734 3-min unit scenes, 205 (7.5%) were actual smoking scenes and 387 (14.2%) depicted smoking environments with the presence of smoking-related items, such as ash trays. In 185 unit scenes (90.2% of total smoking scenes), actors were shown smoking. Actresses were less frequently shown smoking (9.8% of total smoking scenes). Smoking characters in dramas were in the 20-49 age group in 193 unit scenes (94.1% of total smoking scenes). In 96 unit scenes (46.8% of total smoking scenes), at least one non-smoker was present in the smoking scenes. The smoking locations were mainly indoors, including offices, restaurants and homes (122 unit scenes, 59.6%). The most common smoking-related items shown were ash trays (in 45.5% of smoking-item-related scenes) and cigarettes (in 30.2% of smoking-item-related scenes). Only 3 unit scenes (0.1 % of all scenes) promoted smoking prohibition. This was a descriptive study to examine the nature of smoking scenes observed in Japanese television dramas from a public health perspective.

  12. Technicolor/INRIA team at the MediaEval 2013 Violent Scenes Detection Task

    OpenAIRE

    Penet , Cédric; Demarty , Claire-Hélène; Gravier , Guillaume; Gros , Patrick

    2013-01-01

    This paper presents the work done at Technicolor and INRIA regarding the MediaEval 2013 Violent Scenes Detection task, which aims at detecting violent scenes in movies. We participated in both the objective and the subjective subtasks.

  13. Studies of IR-screening smoke clouds

    Energy Technology Data Exchange (ETDEWEB)

    Cudzilo, S. [Military Univ. of Technology, Warsaw (Poland)

    2001-02-01

    This paper contains some results of research on the IR-screening capability of smoke clouds generated during the combustion of various pyrotechnic formulations. The smoke compositions were made from oxygen-containing or oxygen-free mixtures of metals and chloroorganic compounds, or from mixtures based on red phosphorus. The camouflage effectiveness of the clouds generated by these formulations was investigated under laboratory conditions with an infrared camera. The technique employed enables determination of radiant temperature distributions in a smoke cloud, treated as an energy equivalent of grey body emission. The analysis of the thermographs from the camera was the basis on which the mixtures producing the most effective screens against thermal imaging systems were chosen. (orig.)

  14. Multiple vehicle routing and dispatching to an emergency scene

    OpenAIRE

    M S Daskin; A Haghani

    1984-01-01

    A model of the distribution of arrival time at the scene of an emergency for the first of many vehicles is developed for the case in which travel times on the links of the network are normally distributed and the path travel times of different vehicles are correlated. The model suggests that the probability that the first vehicle arrives at the scene within a given time may be increased by reducing the path time correlations, even if doing so necessitates increasing the mean path travel time ...
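
    The analytical model itself is not reproduced in the record; the short Monte Carlo sketch below only illustrates the quantity being modelled, P(first arrival <= t) for jointly normal, correlated path travel times, and how lowering the correlation can raise that probability. All numbers are made up for illustration.

        import numpy as np

        def p_first_arrival_within(t, means, cov, n_draws=100_000, seed=0):
            """Monte Carlo estimate of P(min_i T_i <= t) when the vehicles' path
            travel times T are jointly normal with the given means and covariance."""
            rng = np.random.default_rng(seed)
            samples = rng.multivariate_normal(means, cov, size=n_draws)
            first_arrival = samples.min(axis=1)            # earliest vehicle in each draw
            return (first_arrival <= t).mean()

        # Two vehicles with 8- and 10-minute mean path times.
        means = [8.0, 10.0]
        cov_correlated   = [[4.0, 3.0], [3.0, 9.0]]        # strongly correlated paths
        cov_uncorrelated = [[4.0, 0.0], [0.0, 9.0]]        # independent paths
        print(p_first_arrival_within(7.0, means, cov_correlated))    # lower probability
        print(p_first_arrival_within(7.0, means, cov_uncorrelated))  # higher probability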

  15. Combined Influence of Visual Scene and Body Tilt on Arm Pointing Movements: Gravity Matters!

    Science.gov (United States)

    Scotto Di Cesare, Cécile; Sarlegna, Fabrice R.; Bourdin, Christophe; Mestre, Daniel R.; Bringoux, Lionel

    2014-01-01

    Performing accurate actions such as goal-directed arm movements requires taking into account visual and body orientation cues to localize the target in space and produce appropriate reaching motor commands. We experimentally tilted the body and/or the visual scene to investigate how visual and body orientation cues are combined for the control of unseen arm movements. Subjects were asked to point toward a visual target using an upward movement during slow body and/or visual scene tilts. When the scene was tilted, final pointing errors varied as a function of the direction of the scene tilt (forward or backward). Actual forward body tilt resulted in systematic target undershoots, suggesting that the brain may have overcompensated for the biomechanical movement facilitation arising from body tilt. Combined body and visual scene tilts also affected final pointing errors according to the orientation of the visual scene. The data were further analysed using either a body-centered or a gravity-centered reference frame to encode visual scene orientation with simple additive models (i.e., ‘combined’ tilts equal to the sum of ‘single’ tilts). We found that the body-centered model could account only for some of the data regarding kinematic parameters and final errors. In contrast, the gravity-centered modeling in which the body and visual scene orientations were referred to vertical could explain all of these data. Therefore, our findings suggest that the brain uses gravity, thanks to its invariant properties, as a reference for the combination of visual and non-visual cues. PMID:24925371

  16. Automatic Tamil lyric generation based on ontological interpretation ...

    Indian Academy of Sciences (India)

    This system proposes an n-gram based approach to automatic Tamil lyric generation, by the ontological semantic interpretation of the input scene. The approach is based on identifying the semantics conveyed in the scenario, thereby making the system understand the situation and generate lyrics accordingly. The heart of ...

  17. Anticipatory scene representation in preschool children's recall and recognition memory.

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-09-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an analysis of spatial errors in drawings which are open to alternative explanations (e.g. drawing ability). Experiment 1 replicated and extended prior drawing results with 4-5-year-olds and adults. In Experiment 2, a new, forced-choice immediate recognition memory test was implemented with the same children. On each trial, a card (photograph of a simple scene) was immediately replaced by a test card (identical view and either a closer or more wide-angle view) and participants indicated which one matched the original view. Error patterns supported boundary extension; identical photographs were more frequently rejected when the closer view was the original view, than vice versa. This asymmetry was not attributable to a selection bias (guessing tasks; Experiments 3-5). In Experiment 4, working memory load was increased by presenting more expansive views of more complex scenes. Again, children exhibited boundary extension, but now adults did not, unless stimulus duration was reduced to 5 s (limiting time to implement strategies; Experiment 5). We propose that like adults, children interpret photographs as views of places in the world; they extrapolate the anticipated continuation of the scene beyond the view and misattribute it to having been seen. Developmental differences in source attribution decision processes provide an explanation for the age-related differences observed. © 2016 John Wiley & Sons Ltd.

  18. From Theatre Improvisation To Video Scenes

    DEFF Research Database (Denmark)

    Larsen, Henry; Hvidt, Niels Christian; Friis, Preben

    2018-01-01

    At Sygehus Lillebaelt, a Danish hospital, there has been a focus for several years on patient communication. This paper reflects on a course focusing on engaging with the patient’s existential themes, in particular the negotiations around the creation of video scenes. In the initial workshops, w...

  19. Scene independent real-time indirect illumination

    DEFF Research Database (Denmark)

    Frisvad, Jeppe Revall; Christensen, Niels Jørgen; Falster, Peter

    2005-01-01

    A novel method for real-time simulation of indirect illumination is presented in this paper. The method, which we call Direct Radiance Mapping (DRM), is based on basal radiance calculations and does not impose any restrictions on scene geometry or dynamics. This makes the method tractable for rea...

  20. High-resolution focal plane array IR detection modules and digital signal processing technologies at AIM

    Science.gov (United States)

    Cabanski, Wolfgang A.; Breiter, Rainer; Koch, R.; Mauk, Karl-Heinz; Rode, Werner; Ziegler, Johann; Eberhardt, Kurt; Oelmaier, Reinhard; Schneider, Harald; Walther, Martin

    2000-07-01

    Full video format focal plane array (FPA) modules with up to 640 x 512 pixels have been developed for high resolution imaging applications, either in mercury cadmium telluride (MCT) mid-wave infrared (MWIR) technology or, as low cost alternatives to MCT for high performance IR imaging in the MWIR or long-wave (LWIR) spectral band, in platinum silicide (PtSi) and quantum well infrared photodetector (QWIP) technology. For the QWIPs, a new photovoltaic technology was introduced for improved NETD performance and higher dynamic range. MCT units provide fast frame rates > 100 Hz together with state-of-the-art thermal resolution (NETD). Hardware platforms and software for image visualization and nonuniformity correction, including scene-based self-learning algorithms, had to be developed to cope with the high data rates of up to 18 Mpixels/s at 14-bit depth, taking nonlinear effects into account so that the full NETD is reached through accurate reduction of residual fixed pattern noise. The main features of these modules are summarized together with measured performance data for long range detection systems with moderately fast to slow F-numbers such as F/2.0 - F/3.5. An outlook presents the most recent activities at AIM, heading for multicolor and faster frame rate detector modules based on MCT devices.
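
    The record refers to scene-based self-learning nonuniformity correction without giving algorithmic details; the sketch below shows only the classical two-point (gain/offset) correction that such scene-based schemes refine over time. The array names and radiometric levels are hypothetical.

        import numpy as np

        def two_point_nuc(cold_frame, hot_frame, t_cold, t_hot):
            """Per-pixel gain and offset from two uniform blackbody frames
            (raw FPA counts) recorded at radiometric levels t_cold and t_hot."""
            gain = (t_hot - t_cold) / (hot_frame - cold_frame)
            offset = t_cold - gain * cold_frame
            return gain, offset

        def correct(raw_frame, gain, offset):
            """Apply the correction; residual fixed pattern noise is what a
            scene-based self-learning stage would keep reducing afterwards."""
            return gain * raw_frame + offset

        # Synthetic 640 x 512 example.
        rng = np.random.default_rng(1)
        true_gain = rng.normal(1.0, 0.05, (512, 640))
        true_offset = rng.normal(0.0, 50.0, (512, 640))
        raw = lambda level: (level - true_offset) / true_gain   # simulated raw counts
        g, o = two_point_nuc(raw(1000.0), raw(3000.0), 1000.0, 3000.0)
        print(np.allclose(correct(raw(2000.0), g, o), 2000.0))  # True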

  1. Fuzzy Emotional Semantic Analysis and Automated Annotation of Scene Images

    Directory of Open Access Journals (Sweden)

    Jianfang Cao

    2015-01-01

    Full Text Available With the advances in electronic and imaging techniques, the production of digital images has rapidly increased, and the extraction and automated annotation of emotional semantics implied by images have become issues that must be urgently addressed. To better simulate human subjectivity and ambiguity for understanding scene images, the current study proposes an emotional semantic annotation method for scene images based on fuzzy set theory. A fuzzy membership degree was calculated to describe the emotional degree of a scene image and was implemented using the Adaboost algorithm and a back-propagation (BP neural network. The automated annotation method was trained and tested using scene images from the SUN Database. The annotation results were then compared with those based on artificial annotation. Our method showed an annotation accuracy rate of 91.2% for basic emotional values and 82.4% after extended emotional values were added, which correspond to increases of 5.5% and 8.9%, respectively, compared with the results from using a single BP neural network algorithm. Furthermore, the retrieval accuracy rate based on our method reached approximately 89%. This study attempts to lay a solid foundation for the automated emotional semantic annotation of more types of images and therefore is of practical significance.
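
    The record does not give the exact formulation of the fuzzy membership degrees; the sketch below shows one plausible way to turn per-emotion classifier scores (e.g. from AdaBoost or a BP network) into fuzzy memberships and a multi-label annotation. The emotion labels, the softmax-style normalization and the threshold are assumptions, not the authors' method.

        import numpy as np

        EMOTIONS = ["joy", "calm", "sadness", "fear"]        # hypothetical label set

        def fuzzy_memberships(scores, temperature=1.0):
            """Map raw per-emotion scores to membership degrees in [0, 1] summing to 1."""
            z = np.asarray(scores, dtype=float) / temperature
            z -= z.max()                                     # numerical stability
            m = np.exp(z)
            return m / m.sum()

        def annotate(scores, threshold=0.25):
            """Fuzzy annotation: keep every emotion whose membership exceeds the
            threshold instead of forcing a single hard label."""
            m = fuzzy_memberships(scores)
            return {e: round(float(d), 3) for e, d in zip(EMOTIONS, m) if d >= threshold}

        print(annotate([2.1, 1.8, 0.3, -0.5]))               # e.g. {'joy': ..., 'calm': ...}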

  2. Error Detection, Factorization and Correction for Multi-View Scene Reconstruction from Aerial Imagery

    Energy Technology Data Exchange (ETDEWEB)

    Hess-Flores, Mauricio [Univ. of California, Davis, CA (United States)

    2011-11-10

    Scene reconstruction from video sequences has become a prominent computer vision research area in recent years, due to its large number of applications in fields such as security, robotics and virtual reality. Despite recent progress in this field, there are still a number of issues that manifest as incomplete, incorrect or computationally-expensive reconstructions. The engine behind achieving reconstruction is the matching of features between images, where common conditions such as occlusions, lighting changes and texture-less regions can all affect matching accuracy. Subsequent processes that rely on matching accuracy, such as camera parameter estimation, structure computation and non-linear parameter optimization, are also vulnerable to additional sources of error, such as degeneracies and mathematical instability. Detection and correction of errors, along with robustness in parameter solvers, are a must in order to achieve a very accurate final scene reconstruction. However, error detection is in general difficult due to the lack of ground-truth information about the given scene, such as the absolute position of scene points or GPS/IMU coordinates for the camera(s) viewing the scene. In this dissertation, methods are presented for the detection, factorization and correction of error sources present in all stages of a scene reconstruction pipeline from video, in the absence of ground-truth knowledge. Two main applications are discussed. The first set of algorithms derive total structural error measurements after an initial scene structure computation and factorize errors into those related to the underlying feature matching process and those related to camera parameter estimation. A brute-force local correction of inaccurate feature matches is presented, as well as an improved conditioning scheme for non-linear parameter optimization which applies weights on input parameters in proportion to estimated camera parameter errors. Another application is in

  3. Suppression of superconductivity in Nb by IrMn in IrMn/Nb bilayers

    KAUST Repository

    Wu, B. L.

    2013-10-10

    Effect of antiferromagnet on superconductivity has been investigated in IrMn/Nb bilayers. Significant suppression of both transition temperature (Tc) and lower critical field (Hc1) of Nb is found in IrMn/Nb bilayers as compared to a single layer Nb of same thickness; the suppression effect is even stronger than that of a ferromagnet in NiFe/Nb bilayers. The addition of an insulating MgO layer at the IrMn-Nb interface nearly restores Tc to that of the single layer Nb, but Hc1 still remains suppressed. These results suggest that, in addition to proximity effect and magnetic impurity scattering, magnetostatic interaction also plays a role in suppressing superconductivity of Nb in IrMn/Nb bilayers. In addition to reduced Tc and Hc1, the IrMn layer also induces broadening in the transition temperature of Nb, which can be accounted for by a finite distribution of stray field from IrMn.

  4. VCSEL-based gigabit IR-UWB link for converged communication and sensing applications in optical metro-access networks

    DEFF Research Database (Denmark)

    Pham, Tien Thang; Gibbon, Timothy Braidwood; Tafur Monroy, Idelfonso

    2012-01-01

    We report on the experimental demonstration of an impulse radio ultrawideband (IR-UWB) based converged communication and sensing system. A 1550-nm VCSEL-generated IR-UWB signal is used for 2-Gbps wireless data distribution over 800-m and 50-km single mode fiber links, which represent short-range in-building and long-reach access network applications. The IR-UWB signal is also used to simultaneously measure the rotational speed of a blade spinning between 18 and 30 Hz. To the best of our knowledge, this is the very first demonstration of a simultaneous gigabit UWB telecommunication and wireless UWB sensing application, paving the way forward for the development and deployment of converged UWB VCSEL-based technologies in access and in-building networks of the future.

  5. Scene recognition based on integrating active learning with dictionary learning

    Science.gov (United States)

    Wang, Chengxi; Yin, Xueyan; Yang, Lin; Gong, Chengrong; Zheng, Caixia; Yi, Yugen

    2018-04-01

    Scene recognition is a significant topic in the field of computer vision. Most existing scene recognition models require a large amount of labeled training samples to achieve good performance. However, labeling images manually is a time-consuming task and often unrealistic in practice. In order to obtain satisfactory recognition results when labeled samples are insufficient, this paper proposes a scene recognition algorithm named Integrating Active Learning and Dictionary Learning (IALDL). IALDL adopts projective dictionary pair learning (DPL) as the classifier and introduces an active learning mechanism into DPL to improve its performance. When constructing the sampling criterion for active learning, IALDL considers both uncertainty and representativeness in order to effectively select useful unlabeled samples from a given sample set for expanding the training dataset. Experimental results on three standard databases demonstrate the feasibility and validity of the proposed IALDL.
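
    The paper describes the sampling criterion only as combining uncertainty and representativeness; the sketch below is one minimal way to do that, using prediction entropy for uncertainty and mean cosine similarity to the unlabeled pool for representativeness. The weighting alpha and the similarity choice are assumptions, not IALDL's exact criterion.

        import numpy as np

        def select_batch(probs, features, batch_size=10, alpha=0.5):
            """Rank unlabeled samples by alpha * uncertainty + (1 - alpha) * representativeness
            and return the indices of the top batch_size samples to be labeled."""
            eps = 1e-12
            # Uncertainty: entropy of the classifier's class probabilities per sample.
            entropy = -(probs * np.log(probs + eps)).sum(axis=1)
            # Representativeness: mean cosine similarity to the rest of the pool.
            X = features / (np.linalg.norm(features, axis=1, keepdims=True) + eps)
            representativeness = (X @ X.T).mean(axis=1)
            norm = lambda v: (v - v.min()) / (v.max() - v.min() + eps)
            score = alpha * norm(entropy) + (1 - alpha) * norm(representativeness)
            return np.argsort(score)[::-1][:batch_size]

        # Toy usage: 100 unlabeled 64-d features, 8-class probabilities from any classifier.
        rng = np.random.default_rng(0)
        probs = rng.dirichlet(np.ones(8), size=100)
        feats = rng.normal(size=(100, 64))
        print(select_batch(probs, feats, batch_size=5))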

  6. Developing Scene Understanding Neural Software for Realistic Autonomous Outdoor Missions

    Science.gov (United States)

    2017-09-01

    ...computer using a single graphics processing unit (GPU). To the best of our knowledge, an implementation of the open-source Python-based AlexNet CNN on... Neurons in the brain enable us to understand scenes by assessing the spatial, temporal, and feature relations of objects in the... effort to use computer neural networks to augment human neural intelligence to improve our scene understanding (Krizhevsky et al. 2012; Zhou et al. ...)

  7. IL 6: 2D-IR spectroscopy: chemistry and biophysics in real time

    International Nuclear Information System (INIS)

    Bredenbeck, Jens

    2010-01-01

    Pulsed multidimensional experiments, daily business in the field of NMR spectroscopy, have been demonstrated only relatively recently in IR spectroscopy. As with nuclear spins in multidimensional NMR, molecular vibrations are employed in multidimensional IR experiments as probes of molecular structure and dynamics, albeit with femtosecond time resolution. Different types of multidimensional IR experiments have been implemented, resembling basic NMR experiments such as NOESY, COSY and EXSY. In contrast to one-dimensional linear spectroscopy, such multidimensional experiments reveal couplings and correlations of vibrations, which are closely linked to molecular structure and its change in time. The use of mixed IR/VIS pulse sequences further extends the potential of multidimensional IR spectroscopy, enabling studies of ultrafast non-equilibrium processes as well as surface-specific, highly sensitive experiments. A UV/VIS pulse preceding the IR pulse sequence can be used to prepare the system under study in a non-equilibrium state. 2D-IR snapshots of the evolving non-equilibrium system are then taken, for example during a photochemical reaction or during the photo-cycle of a light-sensitive protein. Preparing the system in a non-equilibrium state by UV/VIS excitation during the IR pulse sequence allows the states of reactant and product of the light-triggered process to be correlated via their 2D-IR cross peaks - a technique that has been used to map the connectivity between different binding sites of a ligand as it migrates through a protein. Introduction of a non-resonant VIS pulse at the end of the IR part of the experiment allows the infrared signal of interfacial molecules to be selectively up-converted to the visible spectral range by sum frequency generation. In this way, femtosecond interfacial 2D-IR spectroscopy can be implemented, achieving sub-monolayer sensitivity. (author)

  8. Three-dimensional model-based object recognition and segmentation in cluttered scenes.

    Science.gov (United States)

    Mian, Ajmal S; Bennamoun, Mohammed; Owens, Robyn

    2006-10-01

    Viewpoint independent recognition of free-form objects and their segmentation in the presence of clutter and occlusions is a challenging task. We present a novel 3D model-based algorithm which performs this task automatically and efficiently. A 3D model of an object is automatically constructed offline from its multiple unordered range images (views). These views are converted into multidimensional table representations (which we refer to as tensors). Correspondences are automatically established between these views by simultaneously matching the tensors of a view with those of the remaining views using a hash table-based voting scheme. This results in a graph of relative transformations used to register the views before they are integrated into a seamless 3D model. These models and their tensor representations constitute the model library. During online recognition, a tensor from the scene is simultaneously matched with those in the library by casting votes. Similarity measures are calculated for the model tensors which receive the most votes. The model with the highest similarity is transformed to the scene and, if it aligns accurately with an object in the scene, that object is declared as recognized and is segmented. This process is repeated until the scene is completely segmented. Experiments were performed on real and synthetic data comprised of 55 models and 610 scenes and an overall recognition rate of 95 percent was achieved. Comparison with the spin images revealed that our algorithm is superior in terms of recognition rate and efficiency.
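
    The tensor construction and matching details are beyond the scope of this record; the sketch below only illustrates the hash table-based voting idea, with random vectors standing in for the paper's tensor descriptors and all parameters chosen arbitrarily.

        from collections import defaultdict
        import numpy as np

        def quantize(descriptor, bin_size=0.25):
            """Coarsely quantize a local surface descriptor so that similar
            geometry from different views hashes to the same key."""
            return tuple(np.round(np.asarray(descriptor) / bin_size).astype(int))

        def build_library(model_descriptors):
            """Hash table: quantized descriptor -> list of (model_id, patch_id)."""
            table = defaultdict(list)
            for model_id, descs in model_descriptors.items():
                for patch_id, d in enumerate(descs):
                    table[quantize(d)].append((model_id, patch_id))
            return table

        def recognize(scene_descriptors, table):
            """Cast one vote per matching library entry; the best-voted model is
            the candidate that would then be verified by aligning it to the scene."""
            votes = defaultdict(int)
            for d in scene_descriptors:
                for model_id, _ in table.get(quantize(d), []):
                    votes[model_id] += 1
            return max(votes, key=votes.get) if votes else None

        # Toy usage with 8-d random descriptors standing in for tensors.
        rng = np.random.default_rng(2)
        library = {"chair": rng.normal(size=(50, 8)), "mug": rng.normal(size=(50, 8))}
        table = build_library(library)
        scene = library["mug"][:10] + rng.normal(0, 0.01, (10, 8))   # noisy view of the mug
        print(recognize(scene, table))                               # most likely "mug"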

  9. On-the-fly generation and rendering of infinite cities on the GPU

    KAUST Repository

    Steinberger, Markus

    2014-05-01

    In this paper, we present a new approach for shape-grammar-based generation and rendering of huge cities in real-time on the graphics processing unit (GPU). Traditional approaches rely on evaluating a shape grammar and storing the geometry produced as a preprocessing step. During rendering, the pregenerated data is then streamed to the GPU. By interweaving generation and rendering, we overcome the problems and limitations of streaming pregenerated data. Using our methods of visibility pruning and adaptive level of detail, we are able to dynamically generate only the geometry needed to render the current view in real-time directly on the GPU. We also present a robust and efficient way to dynamically update a scene's derivation tree and geometry, enabling us to exploit frame-to-frame coherence. Our combined generation and rendering is significantly faster than all previous work. For detailed scenes, we are capable of generating geometry more rapidly than even just copying pregenerated data from main memory, enabling us to render cities with thousands of buildings at up to 100 frames per second, even with the camera moving at supersonic speed. © 2014 The Author(s) Computer Graphics Forum © 2014 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd.
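
    The GPU implementation itself is not reproduced here; the short CPU-side sketch below only illustrates the idea of interleaving grammar derivation with visibility pruning and adaptive level of detail, expanding a branch only if it is visible and with a detail level that depends on distance. The grammar, the camera model and all numbers are made up for illustration.

        # Minimal split-grammar sketch: a "lot" is refined into floors, but a branch
        # is expanded only if visible, and with fewer floors when far from the camera.

        def visible(box, camera):
            return abs(box["x"] - camera["x"]) < camera["view_range"]

        def lod(box, camera):
            return max(1, int(camera["view_range"] / (1.0 + abs(box["x"] - camera["x"]))))

        def derive(symbol, box, camera, out):
            if not visible(box, camera):
                return                                     # visibility pruning
            if symbol == "lot":
                floors = min(lod(box, camera), 8)          # adaptive level of detail
                h = box["h"] / floors
                for i in range(floors):
                    derive("floor", {**box, "y": box["y"] + i * h, "h": h}, camera, out)
            elif symbol == "floor":
                out.append(box)                            # terminal: emit geometry

        camera = {"x": 0.0, "view_range": 50.0}
        geometry = []
        for lot_x in range(-200, 200, 10):                 # a long street of lots
            derive("lot", {"x": float(lot_x), "y": 0.0, "h": 30.0}, camera, geometry)
        print(len(geometry), "boxes generated for the current view")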

  10. Pixelated coatings and advanced IR coatings

    Science.gov (United States)

    Pradal, Fabien; Portier, Benjamin; Oussalah, Meihdi; Leplan, Hervé

    2017-09-01

    Reosc has developed pixelated infrared coatings on detectors. Reosc manufactured thick pixelated multilayer stacks on IR focal plane arrays for bi-spectral imaging systems, demonstrating high filter performance, low crosstalk, and no deterioration of the device sensitivities. More recently, a 5-pixel filter matrix was designed and fabricated. Recent developments in pixelated coatings show that high performance infrared filters can be coated directly on the detector for multispectral imaging. Next-generation space instruments can benefit from this technology to reduce their weight and power consumption.

  11. A semi-interactive panorama based 3D reconstruction framework for indoor scenes

    NARCIS (Netherlands)

    Dang, T.K.; Worring, M.; Bui, T.D.

    2011-01-01

    We present a semi-interactive method for 3D reconstruction specialized for indoor scenes which combines computer vision techniques with efficient interaction. We use panoramas, popularly used for visualization of indoor scenes, but clearly not able to show depth, for their great field of view, as

  12. 3D Aware Correction and Completion of Depth Maps in Piecewise Planar Scenes

    KAUST Repository

    Thabet, Ali Kassem; Lahoud, Jean; Asmar, Daniel; Ghanem, Bernard

    2015-01-01

    RGB-D sensors are popular in the computer vision community, especially for problems of scene understanding, semantic scene labeling, and segmentation. However, most of these methods depend on reliable input depth measurements, while discarding

  13. Higher-order scene statistics of breast images

    Science.gov (United States)

    Abbey, Craig K.; Sohl-Dickstein, Jascha N.; Olshausen, Bruno A.; Eckstein, Miguel P.; Boone, John M.

    2009-02-01

    Researchers studying human and computer vision have found description and construction of these systems greatly aided by analysis of the statistical properties of naturally occurring scenes. More specifically, it has been found that receptive fields with directional selectivity and bandwidth properties similar to mammalian visual systems are more closely matched to the statistics of natural scenes. It is argued that this allows for sparse representation of the independent components of natural images [Olshausen and Field, Nature, 1996]. These theories have important implications for medical image perception. For example, will a system that is designed to represent the independent components of natural scenes, where objects occlude one another and illumination is typically reflected, be appropriate for X-ray imaging, where features superimpose on one another and illumination is transmissive? In this research we begin to examine these issues by evaluating higher-order statistical properties of breast images from X-ray projection mammography (PM) and dedicated breast computed tomography (bCT). We evaluate kurtosis in the responses of octave-bandwidth Gabor filters applied to PM and to coronal slices of bCT scans. We find that kurtosis in PM rises and quickly saturates for filter center frequencies, with an average value above 0.95. By contrast, kurtosis in bCT peaks near 0.20 cyc/mm at a value of approximately 2. Our findings suggest that the human visual system may be tuned to represent breast tissue more effectively in bCT over a specific range of spatial frequencies.
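
    For readers who want to reproduce the kind of measurement described, the sketch below computes the excess kurtosis of octave-bandwidth Gabor filter responses as a function of center frequency. The kernel parameters and the random test image are illustrative only; real use would load a mammogram or a bCT slice.

        import numpy as np
        from scipy.stats import kurtosis
        from scipy.signal import fftconvolve

        def gabor_kernel(freq, theta, size=31):
            """Odd-phase Gabor kernel; freq in cycles/pixel, theta in radians.
            sigma ~ 0.56/freq gives roughly one octave of bandwidth."""
            sigma = 0.56 / freq
            half = size // 2
            y, x = np.mgrid[-half:half + 1, -half:half + 1]
            xr = x * np.cos(theta) + y * np.sin(theta)
            envelope = np.exp(-(x**2 + y**2) / (2 * sigma**2))
            return envelope * np.sin(2 * np.pi * freq * xr)

        def response_kurtosis(image, freqs, thetas):
            """Excess kurtosis of the filter responses for each center frequency,
            averaged over orientations; high kurtosis indicates sparse responses."""
            out = []
            for f in freqs:
                ks = [kurtosis(fftconvolve(image, gabor_kernel(f, t), mode="valid").ravel())
                      for t in thetas]
                out.append(float(np.mean(ks)))
            return out

        img = np.random.default_rng(0).normal(size=(256, 256))   # placeholder image
        print(response_kurtosis(img, freqs=[0.05, 0.1, 0.2],
                                thetas=np.linspace(0, np.pi, 4, endpoint=False)))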

  14. The singular nature of auditory and visual scene analysis in autism.

    Science.gov (United States)

    Lin, I-Fan; Shirama, Aya; Kato, Nobumasa; Kashino, Makio

    2017-02-19

    Individuals with autism spectrum disorder often have difficulty acquiring relevant auditory and visual information in daily environments, despite not being diagnosed as hearing impaired or having low vision. Recent psychophysical and neurophysiological studies have shown that autistic individuals have highly specific individual differences at various levels of information processing, including feature extraction, automatic grouping and top-down modulation in auditory and visual scene analysis. Comparison of the characteristics of scene analysis between auditory and visual modalities reveals some essential commonalities, which could provide clues about the underlying neural mechanisms. Further progress in this line of research may suggest effective methods for diagnosing and supporting autistic individuals. This article is part of the themed issue 'Auditory and visual scene analysis'. © 2017 The Author(s).

  15. Study on general design of dual-DMD based infrared two-band scene simulation system

    Science.gov (United States)

    Pan, Yue; Qiao, Yang; Xu, Xi-ping

    2017-02-01

    Mid-wave infrared (MWIR) and long-wave infrared (LWIR) two-band scene simulation systems are testing equipment used for infrared two-band imaging seekers. Such a system must not only cover the working wavebands but also satisfy the essential requirement that its infrared radiation characteristics correspond to the real scene. Previous single digital micromirror device (DMD) based infrared scene simulation systems do not take the large difference between target and background radiation into account and cannot modulate the two-band light beams separately. Consequently, a single-DMD based infrared scene simulation system cannot accurately reproduce the thermal scene model built by the upper computer, and it is of limited practical use. To solve this problem, we design a dual-DMD based, dual-channel, co-aperture, compact-structure infrared two-band scene simulation system. The operating principle of the system is introduced in detail, and the energy transfer process of the hardware-in-the-loop simulation experiment is analyzed as well. The equation for the signal-to-noise ratio of the infrared detector in the seeker is also derived, guiding the overall system design. The general design scheme of the system is given, including the creation of the infrared scene model, overall control, optical-mechanical structure design, and image registration. By analyzing and comparing past designs, we discuss the arrangement of the optical engine framework in the system. Finally, based on the working principle and overall design, we summarize the key techniques of the system.

  16. Cross-cultural differences in item and background memory: examining the influence of emotional intensity and scene congruency.

    Science.gov (United States)

    Mickley Steinmetz, Katherine R; Sturkie, Charlee M; Rochester, Nina M; Liu, Xiaodong; Gutchess, Angela H

    2018-07-01

    After viewing a scene, individuals differ in what they prioritise and remember. Culture may be one factor that influences scene memory, as Westerners have been shown to be more item-focused than Easterners (see Masuda, T., & Nisbett, R. E. (2001). Attending holistically versus analytically: Comparing the context sensitivity of Japanese and Americans. Journal of Personality and Social Psychology, 81, 922-934). However, cultures may differ in their sensitivity to scene incongruences and emotion processing, which may account for cross-cultural differences in scene memory. The current study uses hierarchical linear modeling (HLM) to examine scene memory while controlling for scene congruency and the perceived emotional intensity of the images. American and East Asian participants encoded pictures that included a positive, negative, or neutral item placed on a neutral background. After a 20-min delay, participants were shown the item and background separately along with similar and new items and backgrounds to assess memory specificity. Results indicated that even when congruency and emotional intensity were controlled, there was evidence that Americans had better item memory than East Asians. Incongruent scenes were better remembered than congruent scenes. However, this effect did not differ by culture. This suggests that Americans' item focus may result in memory changes that are robust despite variations in scene congruency and perceived emotion.

  17. Wall grid structure for interior scene synthesis

    KAUST Repository

    Xu, Wenzhuo; Wang, Bin; Yan, Dongming

    2015-01-01

    We present a system for automatically synthesizing a diverse set of semantically valid, and well-arranged 3D interior scenes for a given empty room shape. Unlike existing work on layout synthesis, that typically knows potentially needed 3D models

  18. PERIODIC ACCRETION INSTABILITIES IN THE PROTOSTAR L1634 IRS 7

    Energy Technology Data Exchange (ETDEWEB)

    Hodapp, Klaus W. [Institute for Astronomy, University of Hawaii, 640 N. Aohoku Place, Hilo, HI 96720 (United States); Chini, Rolf, E-mail: hodapp@ifa.hawaii.edu, E-mail: rolf.chini@astro.ruhr-uni-bochum.de [Astronomisches Institut, Ruhr-Universität Bochum, Universitätsstraße 150, D-44801 Bochum (Germany)

    2015-11-10

    The small molecular cloud Lynds 1634 contains at least three outflow sources. We found one of these, IRS 7, to be variable with a period of 37.14 ± 0.04 days and an amplitude of approximately 2 mag in the K{sub s} band. The light curve consists of a quiescent phase with little or no variation, and a rapid outburst phase. During the outburst phase, the rapid variation in brightness generates light echoes that propagate into the surrounding molecular cloud, allowing a measurement of the distance to IRS 7 of 404 pc ± 35 pc. We observed only a marginally significant change in the H − K color during the outburst phase. The K-band spectrum of IRS 7 shows CO bandhead emission but its equivalent width does not change significantly with the phase of the light curve. The H{sub 2} 1–0 S(1) line emission does not follow the variability of the continuum flux. We also used the imaging data for a proper motion study of the outflows originating from the IRS 7 and the far-infrared source IRAS 05173-0555, and confirm that these are indeed distinct outflows.

  19. Popular music scenes and aging bodies.

    Science.gov (United States)

    Bennett, Andy

    2018-06-01

    During the last two decades there has been increasing interest in the phenomenon of the aging popular music audience (Bennett & Hodkinson, 2012). Although the specter of the aging fan is by no means new, the notion of, for example, the aging rocker or the aging punk has attracted significant sociological attention, not least of all because of what this says about the shifting socio-cultural significance of rock and punk and similar genres - which at the time of their emergence were inextricably tied to youth and vociferously marketed as "youth musics". As such, initial interpretations of aging music fans tended to paint a somewhat negative picture, suggesting a sense in which such fans were cultural misfits (Ross, 1994). In more recent times, however, work informed by cultural aging perspectives has begun to consider how so-called "youth cultural" identities may in fact provide the basis of more stable and evolving identities over the life course (Bennett, 2013). Starting from this position, the purpose of this article is to critically examine how aging members of popular music scenes might be recast as a salient example of the more pluralistic fashion in which aging is anticipated, managed and articulated in contemporary social settings. The article then branches out to consider two ways that aging members of music scenes continue their scene involvement. The first focuses on evolving a series of discourses that legitimately position them as aging bodies in cultural spaces that also continue to be inhabited by significant numbers of people in their teens, twenties and thirties. The second sees aging fans taking advantage of new opportunities for consuming live music including winery concerts and dinner and show events. Copyright © 2018 Elsevier Inc. All rights reserved.

  20. Eye Movements when Looking at Unusual/Weird Scenes: Are There Cultural Differences?

    Science.gov (United States)

    Rayner, Keith; Castelhano, Monica S.; Yang, Jinmian

    2009-01-01

    Recent studies have suggested that eye movement patterns while viewing scenes differ for people from different cultural backgrounds and that these differences in how scenes are viewed are due to differences in the prioritization of information (background or foreground). The current study examined whether there are cultural differences in how…

  1. Mesoporous silica nanoparticle supported PdIr bimetal catalyst for selective hydrogenation, and the significant promotional effect of Ir

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Hui; Huang, Chao; Yang, Fan [The Key Laboratory of Fuel Cell Technology of Guangdong Province, School of Chemistry and Chemical Engineering, South China University of Technology, Guangzhou 510641 (China); Yang, Xu [Key Laboratory of Renewable Energy, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou (China); Du, Li [The Key Laboratory of Fuel Cell Technology of Guangdong Province, School of Chemistry and Chemical Engineering, South China University of Technology, Guangzhou 510641 (China); Key Laboratory of Renewable Energy, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou (China); Liao, Shijun, E-mail: chsjliao@scut.edu.cn [The Key Laboratory of Fuel Cell Technology of Guangdong Province, School of Chemistry and Chemical Engineering, South China University of Technology, Guangzhou 510641 (China); Key Laboratory of Renewable Energy, Guangzhou Institute of Energy Conversion, Chinese Academy of Sciences, Guangzhou (China)

    2015-12-01

    Graphical abstract: A mesoporous silica nanoparticle (MSN) supported bimetal catalyst, PdIr/MSN, was prepared by a facile impregnation and hydrogen reduction method. The strong promotional effect of Ir was observed and thoroughly investigated. At the optimal molar ratio of Ir to Pd (N{sub Ir}/N{sub Pd} = 0.1), the activity of PdIr{sub 0.1}/MSN was up to eight times and 28 times higher than that of monometallic Pd/MSN and Ir/MSN, respectively. The catalysts were characterized comprehensively by X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy, and hydrogen temperature programmed reduction, which revealed that the promotional effect of Ir may be due to the enhanced dispersion of active components on the MSN, and to the intensified Pd–Ir electronic interaction caused by the addition of Ir. - Highlights: • Mesoporous nanoparticles were synthesized and used as support for metal catalyst. • PdIr bimetallic catalyst exhibited significantly improved hydrogenation activity. • The strong promotion of Ir was recognized firstly and investigated intensively. • PdIr exhibits 18 times higher activity than Pd to the hydrogenation of nitrobenzene. - Abstract: A mesoporous silica nanoparticle (MSN) supported bimetal catalyst, PdIr/MSN, was prepared by a facile impregnation and hydrogen reduction method. The strong promotional effect of Ir was observed and thoroughly investigated. At the optimal molar ratio of Ir to Pd (N{sub Ir}/N{sub Pd} = 0.1), the activity of PdIr{sub 0.1}/MSN was up to eight times and 28 times higher than that of monometallic Pd/MSN and Ir/MSN, respectively. The catalysts were characterized comprehensively by X-ray diffraction, transmission electron microscopy, X-ray photoelectron spectroscopy, and hydrogen temperature programmed reduction, which revealed that the promotional effect of Ir may be due to the enhanced dispersion of active components on the MSN, and to the intensified Pd–Ir electronic interaction

  2. The elephant in the room: inconsistency in scene viewing and representation

    OpenAIRE

    Spotorno, Sara; Tatler, Benjamin W.

    2017-01-01

    We examined the extent to which semantic informativeness, consistency with expectations and perceptual salience contribute to object prioritization in scene viewing and representation. In scene viewing (Experiments 1–2), semantic guidance overshadowed perceptual guidance in determining fixation order, with the greatest prioritization for objects that were diagnostic of the scene’s depicted event. Perceptual properties affected selection of consistent objects (regardless of their informativene...

  3. Z-depth integration: a new technique for manipulating z-depth properties in composited scenes

    Science.gov (United States)

    Steckel, Kayla; Whittinghill, David

    2014-02-01

    This paper presents a new technique in the production pipeline of asset creation for virtual environments called Z-Depth Integration (ZeDI). ZeDI is intended to reduce the time required to place elements at the appropriate z-depth within a scene. Though ZeDI is intended for use primarily in two-dimensional scene composition, depth-dependent "flat" animated objects are often critical elements of augmented and virtual reality applications (AR/VR). ZeDI is derived from "deep image compositing", a capacity implemented within the OpenEXR file format. In order to trick the human eye into perceiving overlapping scene elements as being in front of or behind one another, the developer must manually manipulate which pixels of an element are visible in relation to other objects embedded within the environment's image sequence. ZeDI improves on this process by providing a means for interacting with procedurally extracted z-depth data from a virtual environment scene. By streamlining the process of defining objects' depth characteristics, it is expected that the time and energy required for developers to create compelling AR/VR scenes will be reduced. In the proof of concept presented in this manuscript, ZeDI is implemented for pre-rendered virtual scene construction via an AfterEffects software plug-in.
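
    The ZeDI plug-in itself operates on z-depth data extracted from the renderer; as a minimal sketch of the per-pixel decision it automates, the function below composites flat layers by keeping, at every pixel, the color of the layer closest to the camera. The layer format and all values are hypothetical.

        import numpy as np

        def zdepth_composite(layers):
            """Per-pixel depth compositing: each layer is a dict with 'color' (H, W, 3),
            'z' (H, W) and 'alpha' (H, W); alpha == 0 marks uncovered pixels."""
            h, w, _ = layers[0]["color"].shape
            out_color = np.zeros((h, w, 3))
            out_z = np.full((h, w), np.inf)
            for layer in layers:
                z = np.where(layer["alpha"] > 0, layer["z"], np.inf)
                closer = z < out_z                           # pixels where this layer wins
                out_color[closer] = layer["color"][closer]
                out_z[closer] = z[closer]
            return out_color

        # A red plane at depth 5 partially occluded by a green card at depth 2.
        h = w = 4
        red = {"color": np.tile([1.0, 0.0, 0.0], (h, w, 1)),
               "z": np.full((h, w), 5.0), "alpha": np.ones((h, w))}
        green = {"color": np.tile([0.0, 1.0, 0.0], (h, w, 1)),
                 "z": np.full((h, w), 2.0), "alpha": np.zeros((h, w))}
        green["alpha"][:, :2] = 1                            # the card covers the left half
        print(zdepth_composite([red, green])[0])             # left pixels green, right red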

  4. Special effects used in creating 3D animated scenes-part 1

    Science.gov (United States)

    Avramescu, A. M.

    2015-11-01

    Today, with the help of computers, we can create special effects that look so real that we can hardly tell them apart from the real elements on the screen. With the 3D field becoming increasingly accessible and finding more and more areas of application, 3D technology moves easily from architecture to product design. Realistic 3D animations are used as a means of learning, for multimedia presentations of large global corporations, for special effects and even for virtual actors in movies. Technology, as part of the art of film, is considered a prerequisite, but cinematography is the first art that had to wait for the correct intersection of technological development, innovation and human vision in order to attain full achievement. Increasingly, most industries use three-dimensional (3D) sequences: graphics, commercials and special effects in movies are all designed in 3D. The key to attaining real visual effects is to successfully combine various distinct elements - characters, objects, images and video scenes - so that all these elements form a whole that works in perfect harmony. This article aims to examine a game design of the present day. Considering the advanced technology and the futuristic vision of designers, we now have many different kinds of game models. Special effects contribute decisively to the creation of a realistic three-dimensional scene. These effects are essential for transmitting the emotional state of the scene. Creating special effects is a work of finesse in order to achieve high quality scenes. Special effects can be used to draw the viewer's attention to an object in a scene. According to the conducted study, the best-selling game of 2010 was Call of Duty: Modern Warfare 2. Thus, the article aims for the presented scene to be similar to many locations from this type of game, more

  5. Cascade generation in Al laser induced plasma

    Science.gov (United States)

    Nagli, Lev; Gaft, Michael; Raichlin, Yosef; Gornushkin, Igor

    2018-05-01

    We found cascade IR generation in Al laser-induced plasma. This generation includes the doublet transitions 3s²5s ²S1/2 → 3s²4p ²P1/2,3/2 → 3s²4s ²S1/2, corresponding to strong lines at 2110 and 2117 nm and much weaker lines at 1312-1315 nm. The 3s²5s ²S1/2 level, from which the IR generation starts, is directly pumped from the 3s²3p ²P3/2 ground level. The starting level for UV generation at 396.2 nm (transition 3s²4s ²S1/2 → 3s²3p ²P3/2) is populated by fast collisional processes in the plasma plume. These differences lead to different temporal and spatial dependences of the lasing in the IR and UV spectral ranges within the aluminum laser-induced plasma.

  6. BOOTES-IR: near IR follow-up GRB observations by a robotic system

    International Nuclear Information System (INIS)

    Castro-Tirado, A.J.; Postrigo, A. de Ugarte; Jelinek, M.

    2005-01-01

    BOOTES-IR is the extension of the BOOTES experiment, which has operated in Southern Spain since 1998, to the near IR (NIR). The goal is to follow up the early stage of the gamma ray burst (GRB) afterglow emission in the NIR, as BOOTES already does at optical wavelengths. The scientific case that drives the BOOTES-IR performance is the study of GRBs with the support of spacecraft such as INTEGRAL, SWIFT and GLAST. Given that the afterglow emission in both the NIR and the optical is extremely bright in the instants immediately following a GRB (it reached V = 8.9 in one case), it should be possible to detect this prompt emission at NIR wavelengths too. The combined observations by BOOTES-IR, BOOTES-1 and BOOTES-2 will allow real-time identification of trustworthy candidates for high redshift (z > 5). It is expected that, a few minutes after a GRB, the IR magnitudes will be H ∼ 7-10, hence very high quality spectra can be obtained for objects as far away as z = 10 by larger instruments

  7. Development of Cytoplasmic Male Sterile IR24 and IR64 Using CW-CMS/Rf17 System.

    Science.gov (United States)

    Toriyama, Kinya; Kazama, Tomohiko

    2016-12-01

    A wild-abortive-type (WA) cytoplasmic male sterility (CMS) has been almost exclusively used for breeding three-line hybrid rice. Many indica cultivars are known to carry restorer genes for WA-CMS lines and cannot be used as maintainer lines. Especially elite indica cultivars IR24 and IR64 are known to be restorer lines for WA-CMS lines, and are used as male parents for hybrid seed production. If we develop CMS IR24 and CMS IR64, the combination of F1 pairs in hybrid rice breeding programs will be greatly broadened. For production of CMS lines and restorer lines of IR24 and IR64, we employed Chinese wild rice (CW)-type CMS/Restorer of fertility 17 (Rf17) system, in which fertility is restored by a single nuclear gene, Rf17. Successive backcrossing and marker-assisted selection of Rf17 succeeded to produce completely male sterile CMS lines and fully restored restorer lines of IR24 and IR64. CW-cytoplasm did not affect agronomic characteristics. Since IR64 is one of the most popular mega-varieties and used for breeding of many modern varieties, the CW-CMS line of IR64 will be useful for hybrid rice breeding.

  8. The TApIR experiment. IR absorption spectra of liquid hydrogen isotopologues

    International Nuclear Information System (INIS)

    Groessle, Robin

    2015-01-01

    The scope of this thesis is the infrared absorption spectroscopy of liquid hydrogen isotopologues with the tritium absorption infrared spectroscopy (TApIR) experiment at the Tritium Laboratory Karlsruhe (TLK). The calibration process, from sample preparation to the reference measurements, is described. A further issue is the classical evaluation of FTIR absorption spectra and its extension using the rolling circle filter (RCF), including the effects on statistical and systematic errors. The impact of thermal and nuclear spin temperature on the IR absorption spectra is discussed. An empirically based modelling of the IR absorption spectra of liquid hydrogen isotopologues is performed.

  9. Non-uniform crosstalk reduction for dynamic scenes

    NARCIS (Netherlands)

    Smit, F.A.; Liere, van R.; Fröhlich, B.

    2007-01-01

    Stereo displays suffer from crosstalk, an effect that reduces or even inhibits the viewer's ability to correctly perceive depth. Previous work on software crosstalk reduction focussed on the preprocessing of static scenes which are viewed from a fixed viewpoint. However, in virtual environments

  10. Multimodal computational attention for scene understanding and robotics

    CERN Document Server

    Schauerte, Boris

    2016-01-01

    This book presents state-of-the-art computational attention models that have been successfully tested in diverse application areas and can build the foundation for artificial systems to efficiently explore, analyze, and understand natural scenes. It gives a comprehensive overview of the most recent computational attention models for processing visual and acoustic input. It covers the biological background of visual and auditory attention, as well as bottom-up and top-down attentional mechanisms and discusses various applications. In the first part new approaches for bottom-up visual and acoustic saliency models are presented and applied to the task of audio-visual scene exploration of a robot. In the second part the influence of top-down cues for attention modeling is investigated. .

  11. Adding a dimension to the infrared spectra of interfaces: 2D SFG spectroscopy via mid-IR pulse shaping

    Science.gov (United States)

    Zanni, Martin

    2012-02-01

    Sum-frequency generation spectroscopy provides an infrared spectrum of interfaces and thus has widespread use in the materials and chemical sciences. In this presentation, I will present our recent work in developing a 2D pulse sequence to generate 2D SFG spectra of interfaces, in analogy to 2D infrared spectra used to measure bulk species. To develop this spectroscopy, we have utilized many of the tricks-of-the-trade developed in the 2D IR and 2D Vis communities in the last decade, including mid-IR pulse shaping. With mid-IR pulse shaping, the 2D pulse sequence is manipulated by computer programming in the desired frequency resolution, rotating frame, and signal pathway. We believe that 2D SFG will become an important tool in the interfacial sciences in an analogous way that 2D IR is now being used in many disciplines.

  12. Occlusion culling and calculation for a computer generated hologram using spatial frequency index method

    International Nuclear Information System (INIS)

    Zhao, Kai; Yan, Xingpeng; Jiang, Xiaoyu; Huang, Yingqing

    2015-01-01

    A spatial frequency index method is proposed for occlusion culling and hologram generation. Object points with the same spatial frequency are put into one set, since they can mutually occlude each other. The hidden surfaces of the three-dimensional (3D) scene are quickly removed by culling, within each set, the object points that are farther from the hologram plane. The plane-wave phases, which depend only on the spatial frequencies, are precomputed and stored in a table. According to the spatial frequency of the object points, the plane-wave phases for generating fringes are obtained directly from the table. Three 3D scenes are chosen to verify the spatial frequency index method. Both numerical simulation and optical reconstruction are performed. Experimental results demonstrate that the proposed method can cull the hidden surfaces of the 3D scene correctly. The occlusion effect of the 3D scene is well reproduced. The computational speed is better than that of conventional methods but is still time-consuming. (paper)
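
    The paper's exact definition of an object point's spatial frequency is not reproduced in this record; the sketch below assumes it is determined by the point's direction as seen from the hologram, so that points sharing a quantized direction occlude one another and only the nearest survives, while the corresponding plane-wave phase pattern is looked up from a precomputed table. All constants are illustrative.

        import numpy as np

        def cull_and_index(points, hologram_z=0.0, freq_step=1e-3):
            """Group object points by quantized direction (a proxy for spatial
            frequency), keeping only the point nearest to the hologram plane in
            each group; returns {key: (x, y, z)} for the surviving points."""
            survivors = {}
            for x, y, z in points:
                dz = z - hologram_z
                key = (round(x / dz / freq_step), round(y / dz / freq_step))
                if key not in survivors or dz < survivors[key][2] - hologram_z:
                    survivors[key] = (x, y, z)
            return survivors

        def phase_table(keys, wavelength=532e-9, freq_step=1e-3):
            """Precompute a tilted plane-wave phase pattern for every key, so that
            fringe synthesis becomes a table lookup instead of a recomputation."""
            u = np.linspace(-1e-3, 1e-3, 64)                 # hologram coordinates (m)
            xx, yy = np.meshgrid(u, u)
            k = 2 * np.pi / wavelength
            return {key: np.exp(1j * k * (key[0] * freq_step * xx + key[1] * freq_step * yy))
                    for key in keys}

        pts = [(0.001, 0.0, 0.10), (0.002, 0.0, 0.20), (0.0005, 0.0005, 0.15)]
        visible = cull_and_index(pts)                        # the second point is culled
        table = phase_table(visible.keys())
        print(len(visible), "visible points,", len(table), "precomputed phase patterns")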

  13. Molecular Active Sites in Heterogeneous Ir-La/C-Catalyzed Carbonylation of Methanol to Acetates.

    Science.gov (United States)

    Kwak, Ja Hun; Dagle, Robert; Tustin, Gerald C; Zoeller, Joseph R; Allard, Lawrence F; Wang, Yong

    2014-02-06

    We report that when Ir and La halides are deposited on carbon, exposure to CO spontaneously generates a discrete molecular heterobimetallic structure, containing an Ir-La covalent bond that acts as a highly active, selective, and stable heterogeneous catalyst for the carbonylation of methanol to produce acetic acid. This catalyst exhibits a very high productivity of ∼1.5 mol acetyl/mol Ir·s with >99% selectivity to acetyl (acetic acid and methyl acetate) without detectable loss in activity or selectivity for more than 1 month of continuous operation. The enhanced activity can be mechanistically rationalized by the presence of La within the ligand sphere of the discrete molecular Ir-La heterobimetallic structure, which acts as a Lewis acid to accelerate the normally rate-limiting CO insertion in Ir-catalyzed carbonylation. Similar approaches may provide opportunities for attaining molecular (single site) behavior similar to homogeneous catalysis on heterogeneous surfaces for other industrial applications.

  14. Distinct contributions of functional and deep neural network features to representational similarity of scenes in human brain and behavior.

    Science.gov (United States)

    Groen, Iris Ia; Greene, Michelle R; Baldassano, Christopher; Fei-Fei, Li; Beck, Diane M; Baker, Chris I

    2018-03-07

    Inherent correlations between visual and semantic features in real-world scenes make it difficult to determine how different scene properties contribute to neural representations. Here, we assessed the contributions of multiple properties to scene representation by partitioning the variance explained in human behavioral and brain measurements by three feature models whose inter-correlations were minimized a priori through stimulus preselection. Behavioral assessments of scene similarity reflected unique contributions from a functional feature model indicating potential actions in scenes as well as high-level visual features from a deep neural network (DNN). In contrast, similarity of cortical responses in scene-selective areas was uniquely explained by mid- and high-level DNN features only, while an object label model did not contribute uniquely to either domain. The striking dissociation between functional and DNN features in their contribution to behavioral and brain representations of scenes indicates that scene-selective cortex represents only a subset of behaviorally relevant scene information.
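    The variance-partitioning logic (the unique contribution of one feature model is the R² of the full model minus the R² of the model without it) can be illustrated generically as below; this is a simplified stand-in with made-up data and hypothetical names, not the authors' representational-similarity pipeline.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least-squares R^2 of y regressed on columns of X (plus intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

def unique_variance(y, feature_sets):
    """Unique R^2 contribution of each named feature set (full minus reduced model)."""
    full = r_squared(np.column_stack(list(feature_sets.values())), y)
    out = {}
    for name in feature_sets:
        reduced = [v for k, v in feature_sets.items() if k != name]
        out[name] = full - r_squared(np.column_stack(reduced), y)
    return full, out

# Toy example: pairwise scene (dis)similarities predicted by three feature models.
rng = np.random.default_rng(1)
n_pairs = 300
functions = rng.normal(size=(n_pairs, 1))   # "potential actions" model (hypothetical)
dnn = rng.normal(size=(n_pairs, 1))         # deep-network feature distances
objects = rng.normal(size=(n_pairs, 1))     # object-label model
behaviour = 0.6 * functions[:, 0] + 0.5 * dnn[:, 0] + rng.normal(scale=0.5, size=n_pairs)

full_r2, unique = unique_variance(behaviour, {"functional": functions,
                                              "DNN": dnn,
                                              "objects": objects})
print(f"full R^2 = {full_r2:.2f}", {k: round(v, 2) for k, v in unique.items()})
```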

  15. Anticipatory Scene Representation in Preschool Children's Recall and Recognition Memory

    Science.gov (United States)

    Kreindel, Erica; Intraub, Helene

    2017-01-01

    Behavioral and neuroscience research on boundary extension (false memory beyond the edges of a view of a scene) has provided new insights into the constructive nature of scene representation, and motivates questions about development. Early research with children (as young as 6-7 years) was consistent with boundary extension, but relied on an…

  16. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set-Effect of Pasteurization.

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-02-26

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified.
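    As a schematic of what validating such a correction algorithm involves (the actual algorithm is not specified in the abstract, so a simple linear correction fitted against chemical reference values, with made-up numbers, is assumed here):

```python
import numpy as np

# Hypothetical paired measurements (g/dL): Near-IR analyzer vs. chemical reference.
nir_fat = np.array([3.1, 4.0, 2.6, 3.8, 4.5, 2.9, 3.4])
ref_fat = np.array([3.4, 4.4, 2.9, 4.1, 5.0, 3.2, 3.7])

# Fit a linear correction  corrected = a * raw + b  on a calibration set ...
a, b = np.polyfit(nir_fat, ref_fat, deg=1)

# ... then apply it to new analyzer readings (e.g. of pasteurized samples) and
# check the residual bias against the reference method.
def correct(raw):
    return a * raw + b

new_raw = np.array([3.0, 4.2])
print("correction:", round(a, 3), round(b, 3), "corrected:", correct(new_raw))
bias = correct(nir_fat) - ref_fat
print("mean bias after correction: %.3f g/dL" % bias.mean())
```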

  17. Object Attention Patches for Text Detection and Recognition in Scene Images using SIFT

    NARCIS (Netherlands)

    Sriman, Bowornrat; Schomaker, Lambertus; De Marsico, Maria; Figueiredo, Mário; Fred, Ana

    2015-01-01

Natural urban scene images pose many problems for character recognition, such as luminance noise, varying font styles, and cluttered backgrounds. Detecting and recognizing text in a natural scene is a difficult problem. Several techniques have been proposed to overcome these problems. These are,

  18. Oxytocin increases amygdala reactivity to threatening scenes in females.

    Science.gov (United States)

    Lischke, Alexander; Gamer, Matthias; Berger, Christoph; Grossmann, Annette; Hauenstein, Karlheinz; Heinrichs, Markus; Herpertz, Sabine C; Domes, Gregor

    2012-09-01

    The neuropeptide oxytocin (OT) is well known for its profound effects on social behavior, which appear to be mediated by an OT-dependent modulation of amygdala activity in the context of social stimuli. In humans, OT decreases amygdala reactivity to threatening faces in males, but enhances amygdala reactivity to similar faces in females, suggesting sex-specific differences in OT-dependent threat-processing. To further explore whether OT generally enhances amygdala-dependent threat-processing in females, we used functional magnetic resonance imaging (fMRI) in a randomized within-subject crossover design to measure amygdala activity in response to threatening and non-threatening scenes in 14 females following intranasal administration of OT or placebo. Participants' eye movements were recorded to investigate whether an OT-dependent modulation of amygdala activity is accompanied by enhanced exploration of salient scene features. Although OT had no effect on participants' gazing behavior, it increased amygdala reactivity to scenes depicting social and non-social threat. In females, OT may, thus, enhance the detection of threatening stimuli in the environment, potentially by interacting with gonadal steroids, such as progesterone and estrogen. Copyright © 2012 Elsevier Ltd. All rights reserved.

  19. A Benchmark for Endoluminal Scene Segmentation of Colonoscopy Images

    Directory of Open Access Journals (Sweden)

    David Vázquez

    2017-01-01

Full Text Available Colorectal cancer (CRC) is the third leading cause of cancer death worldwide. Currently, the standard approach to reduce CRC-related mortality is to perform regular screening in search of polyps, and colonoscopy is the screening tool of choice. The main limitations of this screening procedure are the polyp miss rate and the inability to perform visual assessment of polyp malignancy. These drawbacks can be reduced by designing decision support systems (DSS) that aim to help clinicians in the different stages of the procedure by providing endoluminal scene segmentation. Thus, in this paper, we introduce an extended benchmark of colonoscopy image segmentation, with the hope of establishing a new strong benchmark for colonoscopy image analysis research. The proposed dataset consists of 4 relevant classes to inspect the endoluminal scene, targeting different clinical needs. Together with the dataset, and taking advantage of advances in the semantic segmentation literature, we provide new baselines by training standard fully convolutional networks (FCNs). We perform a comparative study to show that FCNs significantly outperform, without any further postprocessing, prior results in endoluminal scene segmentation, especially with respect to polyp segmentation and localization.

  20. Development of the osmium-191 → iridium-191m radionuclide generator. Annual report

    International Nuclear Information System (INIS)

    Treves, S.; Packard, A.B.

    1985-01-01

The use of ¹⁹¹ᵐIr in radionuclide angiography has been the subject of increasing interest in recent years. The ¹⁹¹Os-¹⁹¹ᵐIr generator that has been used for these studies suffers, however, from low ¹⁹¹ᵐIr yield (10%/ml) and higher than desirable ¹⁹¹Os breakthrough (5 × 10⁻³%). We have recently developed a ¹⁹¹ᵐIr generator that has higher yield (25 to 30%/ml) and lower breakthrough (<10⁻⁴%) when eluted with an eluent (0.001 M oxalic acid/0.9% saline) that does not require buffering prior to injection. Studies within the last year have shown the eluate of this generator to be non-toxic at up to 100 times the expected human dose, and work is in progress to obtain approval for human use of this system. While a significant improvement over past generator designs, the yield of this generator is still modest, and the evaluation of new osmium complexes for use on the generator has continued. Clinical studies involving the use of ¹⁹¹ᵐIr for first-pass angiography in adults and children have continued. A comparison of ejection fractions measured in adults with both ⁹⁹ᵐTc and ¹⁹¹ᵐIr has confirmed the feasibility of ¹⁹¹ᵐIr for radionuclide angiography in both the left and right ventricles of adults. Studies in collaboration with Baylor Medical College have demonstrated the efficacy of ¹⁹¹ᵐIr in combination with the multi-wire gamma camera. 31 refs., 2 figs., 10 tabs

  1. Large-scale building scenes reconstruction from close-range images based on line and plane feature

    Science.gov (United States)

    Ding, Yi; Zhang, Jianqing

    2007-11-01

Automatically generating 3D models of buildings and other man-made structures from images has become a topic of increasing importance; such models may be used in applications such as virtual reality, the entertainment industry, and urban planning. In this paper we address the main problems and available solutions for the generation of 3D models from terrestrial images. We first generate a coarse planar model of the principal scene planes and then reconstruct windows to refine the building models. There are several points of novelty: first, we reconstruct the coarse wire-frame model using line-segment matching under the epipolar geometry constraint; secondly, we detect the positions of all windows in the image and reconstruct the windows by establishing corner-point correspondences between images, then add the windows to the coarse model to refine the building models. The strategy is illustrated on an image triple of a college building.
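    To make the line-matching step concrete, here is a generic epipolar-constraint check of the kind such a pipeline relies on when matching segment endpoints between two views; the fundamental matrix, endpoints, and pixel tolerance below are hypothetical, and this is not the authors' implementation.

```python
import numpy as np

def epipolar_distance(F, x1, x2):
    """Distance (pixels) from point x2 in image 2 to the epipolar line of x1.

    F  -- 3x3 fundamental matrix mapping image-1 points to image-2 lines
    x1 -- (x, y) in image 1;  x2 -- (x, y) in image 2
    """
    p1 = np.array([x1[0], x1[1], 1.0])
    p2 = np.array([x2[0], x2[1], 1.0])
    line = F @ p1                                  # epipolar line a*x + b*y + c = 0
    return abs(p2 @ line) / np.hypot(line[0], line[1])

def segments_compatible(F, seg1, seg2, tol=2.0):
    """Accept a candidate line-segment match if both endpoints satisfy the
    epipolar constraint within `tol` pixels (threshold is an assumption)."""
    return all(epipolar_distance(F, p, q) < tol
               for p, q in zip(seg1, seg2))

# Usage with a hypothetical fundamental matrix and segment endpoints:
F = np.array([[0.0, -1e-5, 3e-3],
              [1e-5, 0.0, -4e-3],
              [-3e-3, 4e-3, 1.0]])
seg_img1 = [(120.0, 80.0), (240.0, 85.0)]
seg_img2 = [(118.5, 101.0), (239.0, 106.5)]
print(segments_compatible(F, seg_img1, seg_img2))
```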

  2. Face, Body, and Center of Gravity Mediate Person Detection in Natural Scenes

    Science.gov (United States)

    Bindemann, Markus; Scheepers, Christoph; Ferguson, Heather J.; Burton, A. Mike

    2010-01-01

    Person detection is an important prerequisite of social interaction, but is not well understood. Following suggestions that people in the visual field can capture a viewer's attention, this study examines the role of the face and the body for person detection in natural scenes. We observed that viewers tend first to look at the center of a scene,…

  3. Phase-Sensitive Control Of Molecular Dissociation Through Attosecond Pump/Strong-Field Mid-IR Probe Spectroscopy

    Science.gov (United States)

    2016-04-15

splitter (consisting of a thin, uncoated silicon plate at Brewster's angle) and the beams were focused onto the OPA crystal. For this work two...experiments in the future. These technologies include • Two-color driven (EUV/mid-IR) ion spectroscopy: we designed an interferometer combining EUV...isolated single-femtosecond EUV pulse generation: combining the use of a low ionization threshold gas, an annular near-IR drive beam, polarization

  4. Inhibition of PTP1B Restores IRS1-Mediated Hepatic Insulin Signaling in IRS2-Deficient Mice

    Science.gov (United States)

    González-Rodríguez, Águeda; Gutierrez, Jose A. Mas; Sanz-González, Silvia; Ros, Manuel; Burks, Deborah J.; Valverde, Ángela M.

    2010-01-01

OBJECTIVE Mice with complete deletion of insulin receptor substrate 2 (IRS2) develop hyperglycemia, impaired hepatic insulin signaling, and elevated gluconeogenesis, whereas mice deficient for protein tyrosine phosphatase (PTP)1B display an opposing hepatic phenotype characterized by increased sensitivity to insulin. To define the relationship between these two signaling pathways in the regulation of liver metabolism, we used genetic and pharmacological approaches to study the effects of inhibiting PTP1B on hepatic insulin signaling and expression of gluconeogenic enzymes in IRS2−/− mice. RESEARCH DESIGN AND METHODS We analyzed glucose homeostasis and insulin signaling in liver and isolated hepatocytes from IRS2−/− and IRS2−/−/PTP1B−/− mice. Additionally, hepatic insulin signaling was assessed in control and IRS2−/− mice treated with resveratrol, an antioxidant present in red wine. RESULTS In livers of hyperglycemic IRS2−/− mice, the expression levels of PTP1B and its association with the insulin receptor (IR) were increased. The absence of PTP1B in the double-mutant mice restored hepatic IRS1-mediated phosphatidylinositol (PI) 3-kinase/Akt/Foxo1 signaling. Moreover, resveratrol treatment of hyperglycemic IRS2−/− mice decreased hepatic PTP1B mRNA and inhibited PTP1B activity, thereby restoring IRS1-mediated PI 3-kinase/Akt/Foxo1 signaling and peripheral insulin sensitivity. CONCLUSIONS By regulating the phosphorylation state of IR, PTP1B determines sensitivity to insulin in liver and exerts a unique role in the interplay between IRS1 and IRS2 in the modulation of hepatic insulin action. PMID:20028942

  5. Virtual Relighting of a Virtualized Scene by Estimating Surface Reflectance Properties

    OpenAIRE

    福富, 弘敦; 町田, 貴史; 横矢, 直和

    2011-01-01

In mixed reality that merges real and virtual worlds, it is required to interactively manipulate the illumination conditions in a virtualized space. In general, specular reflections in a scene make it difficult to interactively manipulate the illumination conditions. Our goal is to provide an opportunity to simulate the original scene, including diffuse and specular reflections, with novel viewpoints and illumination conditions. Thus, we propose a new method for estimating diffuse and specula...

  6. Efficient 3D scene modeling and mosaicing

    CERN Document Server

    Nicosevici, Tudor

    2013-01-01

This book proposes a complete pipeline for monocular (single camera) based 3D mapping of terrestrial and underwater environments. The aim is to provide a solution to large-scale scene modeling that is both accurate and efficient. To this end, we have developed a novel Structure from Motion algorithm that increases mapping accuracy by registering camera views directly with the maps. The camera registration uses a dual approach that adapts to the type of environment being mapped.   In order to further increase the accuracy of the resulting maps, a new method is presented, allowing detection of images corresponding to the same scene region (crossovers). Crossovers are then used in conjunction with global alignment methods in order to greatly reduce estimation errors, especially when mapping large areas. Our method is based on the Visual Bag of Words (BoW) paradigm, offering a more efficient and simpler solution by eliminating the training stage generally required by state-of-the-art BoW algorithms.   Also, towards dev...

  7. Power scaling of ultrafast mid-IR source enabled by high-power fiber laser technology

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Gengji

    2017-11-15

Ultrafast laser sources with high repetition rate (>10 MHz) and tunable in the mid-infrared (IR) wavelength range of 7-18 μm hold promise for many important spectroscopy applications. Currently, these ultrafast mid- to long-wavelength-IR sources can most easily be achieved via difference-frequency generation (DFG) between a pump beam and a signal beam. However, current ultrafast mid- to long-wavelength-IR sources feature a low average power, which limits their applications. In this thesis, we propose and demonstrate a novel approach to power scaling of DFG-based ultrafast mid-IR laser sources. The essence of this approach is the generation of a high-energy signal beam. Both the pump beam and the signal beam are derived from a home-built Yb-fiber laser system that emits 165-fs pulses centered at 1035 nm with a 30-MHz repetition rate and 14.5-W average power (corresponding to 483-nJ pulse energy). We employ fiber-optic self-phase modulation (SPM) to broaden the laser spectrum and generate isolated spectral lobes. Filtering the rightmost spectral lobe leads to femtosecond pulses with >10 nJ pulse energy. Tunable between 1.1 and 1.2 μm, this SPM-enabled ultrafast source exhibits ~100 times higher pulse energy than can be obtained from Raman soliton sources in this wavelength range. We use this SPM-enabled source as the signal beam and part of the Yb-fiber laser output as the pump beam. By performing DFG in GaSe crystals, we demonstrate that power scaling of a DFG-based mid-IR source can be efficiently achieved by increasing the signal energy. The resulting mid-IR source is tunable from 7.4 μm to 16.8 μm. Up to 5.04-mW mid-IR pulses centered at 11 μm are achieved. The corresponding pulse energy is 167 pJ, representing nearly one order of magnitude improvement compared with other reported DFG-based mid-IR sources at this wavelength. Despite their low pulse energy, Raman soliton sources have become a popular choice as the signal source. We carry out a detailed study on

  8. Power scaling of ultrafast mid-IR source enabled by high-power fiber laser technology

    International Nuclear Information System (INIS)

    Zhou, Gengji

    2017-11-01

Ultrafast laser sources with high repetition rate (>10 MHz) and tunable in the mid-infrared (IR) wavelength range of 7-18 μm hold promise for many important spectroscopy applications. Currently, these ultrafast mid- to long-wavelength-IR sources can most easily be achieved via difference-frequency generation (DFG) between a pump beam and a signal beam. However, current ultrafast mid- to long-wavelength-IR sources feature a low average power, which limits their applications. In this thesis, we propose and demonstrate a novel approach to power scaling of DFG-based ultrafast mid-IR laser sources. The essence of this approach is the generation of a high-energy signal beam. Both the pump beam and the signal beam are derived from a home-built Yb-fiber laser system that emits 165-fs pulses centered at 1035 nm with a 30-MHz repetition rate and 14.5-W average power (corresponding to 483-nJ pulse energy). We employ fiber-optic self-phase modulation (SPM) to broaden the laser spectrum and generate isolated spectral lobes. Filtering the rightmost spectral lobe leads to femtosecond pulses with >10 nJ pulse energy. Tunable between 1.1 and 1.2 μm, this SPM-enabled ultrafast source exhibits ~100 times higher pulse energy than can be obtained from Raman soliton sources in this wavelength range. We use this SPM-enabled source as the signal beam and part of the Yb-fiber laser output as the pump beam. By performing DFG in GaSe crystals, we demonstrate that power scaling of a DFG-based mid-IR source can be efficiently achieved by increasing the signal energy. The resulting mid-IR source is tunable from 7.4 μm to 16.8 μm. Up to 5.04-mW mid-IR pulses centered at 11 μm are achieved. The corresponding pulse energy is 167 pJ, representing nearly one order of magnitude improvement compared with other reported DFG-based mid-IR sources at this wavelength. Despite their low pulse energy, Raman soliton sources have become a popular choice as the signal source. We carry out a detailed study on
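    The reported tuning range follows directly from energy conservation in difference-frequency generation; the short worked example below reproduces it from the pump and signal wavelengths quoted in the abstract (a sanity check, not part of the thesis itself).

```python
def dfg_idler_nm(pump_nm, signal_nm):
    """Idler wavelength from energy conservation: 1/l_i = 1/l_p - 1/l_s."""
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

pump = 1035.0                             # Yb-fiber pump, nm (from the abstract)
for signal in (1100.0, 1150.0, 1200.0):   # SPM-shifted signal, nm
    print(f"signal {signal:.0f} nm -> idler {dfg_idler_nm(pump, signal) / 1000:.1f} um")
# Tuning the signal across 1.1-1.2 um sweeps the idler over roughly 7.5-17.5 um,
# consistent with the 7.4-16.8 um range reported for the GaSe DFG stage.
```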

  9. Unsupervised semantic indoor scene classification for robot vision based on context of features using Gist and HSV-SIFT

    Science.gov (United States)

    Madokoro, H.; Yamanashi, A.; Sato, K.

    2013-08-01

This paper presents an unsupervised scene classification method for achieving semantic recognition of indoor scenes. Background and foreground features are extracted using Gist and color scale-invariant feature transform (SIFT), respectively, as context-based feature representations. We used hue, saturation, and value SIFT (HSV-SIFT) because of its simple algorithm with low calculation costs. Our method creates bags of features by voting the visual words derived from both feature descriptors into a two-dimensional histogram. Moreover, our method generates labels as candidate categories for time-series images while maintaining stability and plasticity together. Automatic labeling of category maps can be realized by using labels created with adaptive resonance theory (ART) as teaching signals for counter-propagation networks (CPNs). We evaluated our method for semantic scene classification using KTH's image database for robot localization (KTH-IDOL), which is widely used for robot localization and navigation. The mean classification accuracies of Gist, gray SIFT, one-class support vector machines (OC-SVM), position-invariant robust features (PIRF), and our method are, respectively, 39.7, 58.0, 56.0, 63.6, and 79.4%. The result of our method is 15.8% higher than that of PIRF. Moreover, we applied our method to fine classification using our original mobile robot, obtaining a mean classification accuracy of 83.2% for six zones.
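    A rough sketch of the bag-of-features voting step described above; the vocabulary sizes, the joint 2D histogram layout, and the random stand-in descriptors are all assumptions, and the ART/CPN labeling stage is not shown.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)

# Stand-ins for real descriptors: Gist (background) and HSV-SIFT (foreground).
gist_train = rng.normal(size=(2000, 512))
sift_train = rng.normal(size=(5000, 384))     # 3 x 128 for the H, S, V channels

# Visual vocabularies learned by clustering (codebook sizes are assumptions).
gist_vocab = KMeans(n_clusters=16, n_init=4, random_state=0).fit(gist_train)
sift_vocab = KMeans(n_clusters=32, n_init=4, random_state=0).fit(sift_train)

def scene_histogram(gist_desc, sift_descs):
    """Vote one scene's descriptors into a 2D (Gist word x SIFT word) histogram."""
    g = gist_vocab.predict(gist_desc.reshape(1, -1))[0]
    s = sift_vocab.predict(sift_descs)
    hist = np.zeros((16, 32))
    for word in s:
        hist[g, word] += 1
    return hist / max(len(s), 1)

hist = scene_histogram(rng.normal(size=512), rng.normal(size=(120, 384)))
print(hist.shape, hist.sum())
```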

  10. Fitting boxes to Manhattan scenes using linear integer programming

    KAUST Repository

    Li, Minglei

    2016-02-19

    We propose an approach for automatic generation of building models by assembling a set of boxes using a Manhattan-world assumption. The method first aligns the point cloud with a per-building local coordinate system, and then fits axis-aligned planes to the point cloud through an iterative regularization process. The refined planes partition the space of the data into a series of compact cubic cells (candidate boxes) spanning the entire 3D space of the input data. We then choose to approximate the target building by the assembly of a subset of these candidate boxes using a binary linear programming formulation. The objective function is designed to maximize the point cloud coverage and the compactness of the final model. Finally, all selected boxes are merged into a lightweight polygonal mesh model, which is suitable for interactive visualization of large scale urban scenes. Experimental results and a comparison with state-of-the-art methods demonstrate the effectiveness of the proposed framework.
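    The box-selection step can be written down as a small binary linear program; the sketch below uses PuLP (a third-party modelling layer, chosen only for illustration) with a max-coverage-style objective and a surface-area penalty as a compactness proxy, which is one plausible reading of the objective rather than the paper's exact formulation. All data are hypothetical.

```python
import numpy as np
import pulp   # generic MILP modelling layer, used here purely for illustration

rng = np.random.default_rng(3)
n_boxes, n_points = 12, 200
# contains[i, j] = True if candidate box i contains point j (hypothetical data).
contains = rng.random((n_boxes, n_points)) < 0.15
surface = rng.uniform(1.0, 5.0, n_boxes)        # per-box surface area, a compactness proxy
lam = 0.5                                       # coverage/compactness trade-off (assumed)

prob = pulp.LpProblem("box_selection", pulp.LpMaximize)
x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n_boxes)]   # box i selected?
y = [pulp.LpVariable(f"y{j}", cat="Binary") for j in range(n_points)]  # point j covered?

# Objective: reward covered points, penalize the total surface area of the assembly.
prob += pulp.lpSum(y) - lam * pulp.lpSum(float(surface[i]) * x[i] for i in range(n_boxes))

# A point only counts as covered if at least one selected box contains it.
for j in range(n_points):
    prob += y[j] <= pulp.lpSum(x[i] for i in range(n_boxes) if contains[i, j])

prob.solve(pulp.PULP_CBC_CMD(msg=False))
chosen = [i for i in range(n_boxes) if x[i].value() > 0.5]
print("selected boxes:", chosen)
```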

  11. STATYBINIŲ MEDŽIAGŲ KONKURENCINGUMAS IR TENDENCIJOS

    OpenAIRE

    Kontrimas, Robertas

    2010-01-01

The thesis analyzes the competitiveness of construction materials, identifies the factors influencing that competitiveness, and presents proposals for improving the market. The hypothesis was confirmed that the demand for and prices of construction materials are influenced by customers' needs and financial capabilities, although the impact of the global crisis is very significant. A survey of employees and buyers helped to determine which construction materials are purchased most often and how customers and employees evaluate the company and its...

  12. Surface-illuminant ambiguity and color constancy: effects of scene complexity and depth cues.

    Science.gov (United States)

    Kraft, James M; Maloney, Shannon I; Brainard, David H

    2002-01-01

Two experiments were conducted to study how scene complexity and cues to depth affect human color constancy. Specifically, two levels of scene complexity were compared. The low-complexity scene contained two walls with the same surface reflectance and a test patch which provided no information about the illuminant. In addition to the surfaces visible in the low-complexity scene, the high-complexity scene contained two rectangular solid objects and 24 paper samples with diverse surface reflectances. Observers viewed illuminated objects in an experimental chamber and adjusted the test patch until it appeared achromatic. Achromatic settings made under two different illuminants were used to compute an index that quantified the degree of constancy. Two experiments were conducted: one in which observers viewed the stimuli directly, and one in which they viewed the scenes through an optical system that reduced cues to depth. In each experiment, constancy was assessed for two conditions. In the valid-cue condition, many cues provided valid information about the illuminant change. In the invalid-cue condition, some image cues provided invalid information. Four broad conclusions are drawn from the data: (a) constancy is generally better in the valid-cue condition than in the invalid-cue condition; (b) for the stimulus configuration used, increasing image complexity has little effect in the valid-cue condition but leads to increased constancy in the invalid-cue condition; (c) for the stimulus configuration used, reducing cues to depth has little effect for either constancy condition; and (d) there is moderate individual variation in the degree of constancy exhibited, particularly in the degree to which the complexity manipulation affects performance.

  13. Effect of Smoking Scenes in Films on Immediate Smoking

    Science.gov (United States)

    Shmueli, Dikla; Prochaska, Judith J.; Glantz, Stanton A.

    2010-01-01

Background The National Cancer Institute has concluded that exposure to smoking in movies causes adolescent smoking, and there are similar results for young adults. Purpose This study investigated whether exposure of young adult smokers to images of smoking in films stimulated smoking behavior. Methods 100 cigarette smokers aged 18–25 years were randomly assigned to watch a movie montage composed with or without smoking scenes and paraphernalia, followed by a 10-minute recess. The outcome was whether or not participants smoked during the recess. Data were collected and analyzed in 2008 and 2009. Results Smokers who watched the smoking scenes were more likely to smoke during the break (OR 3.06, 95% CI=1.01, 9.29). In addition to this acute effect of exposure, smokers who had seen more smoking in movies before the day of the experiment were more likely to smoke during the break (OR 6.73; 1.00–45.25, comparing the top to bottom percentiles of exposure). Level of nicotine dependence (OR 1.71; 1.27–2.32 per point on the FTND scale), "contemplation" (OR 9.07; 1.71–47.99) and "precontemplation" (OR 7.30; 1.39–38.36) stages of change, and impulsivity (OR 1.21; 1.03–1.43) were also associated with smoking during the break. Participants who watched the montage with smoking scenes and those with a higher level of nicotine dependence were also more likely to have smoked within 30 minutes after the study. Conclusions There is a direct link between viewing smoking scenes and immediate subsequent smoking behavior. This finding suggests that individuals attempting to limit or quit smoking should be advised to refrain from or reduce their exposure to movies that contain smoking. PMID:20307802

  14. An FT-Raman, FT-IR, and Quantum Chemical Investigation of Stanozolol and Oxandrolone

    Directory of Open Access Journals (Sweden)

    Tibebe Lemma

    2017-12-01

Full Text Available We have studied the Fourier transform infrared (FT-IR) and Fourier transform Raman (FT-Raman) spectra of stanozolol and oxandrolone, and we have performed quantum chemical calculations based on density functional theory (DFT) at the B3LYP/6-31G(d,p) level of theory. The FT-IR and FT-Raman spectra were collected in the solid phase. The consistency between the calculated and experimental FT-IR and FT-Raman data indicates that the B3LYP/6-31G(d,p) level can generate reliable geometries and related properties of the title compounds. Selected experimental bands were assigned and characterized on the basis of the scaled theoretical wavenumbers by their total energy distribution. The good agreement between the experimental and theoretical spectra allowed positive assignment of the observed vibrational absorption bands. Finally, the calculation results were applied to simulate the Raman and IR spectra of the title compounds, which show agreement with the observed spectra.

  15. Effects of varying presentation time on long-term recognition memory for scenes: Verbatim and gist representations.

    Science.gov (United States)

    Ahmad, Fahad N; Moscovitch, Morris; Hockley, William E

    2017-04-01

    Konkle, Brady, Alvarez and Oliva (Psychological Science, 21, 1551-1556, 2010) showed that participants have an exceptional long-term memory (LTM) for photographs of scenes. We examined to what extent participants' exceptional LTM for scenes is determined by presentation time during encoding. In addition, at retrieval, we varied the nature of the lures in a forced-choice recognition task so that they resembled the target in gist (i.e., global or categorical) information, but were distinct in verbatim information (e.g., an "old" beach scene and a similar "new" beach scene; exemplar condition) or vice versa (e.g., a beach scene and a new scene from a novel category; novel condition). In Experiment 1, half of the list of scenes was presented for 1 s, whereas the other half was presented for 4 s. We found lower performance for shorter study presentation time in the exemplar test condition and similar performance for both study presentation times in the novel test condition. In Experiment 2, participants showed similar performance in an exemplar test for which the lure was of a different category but a category that was used at study. In Experiment 3, when presentation time was lowered to 500 ms, recognition accuracy was reduced in both novel and exemplar test conditions. A less detailed memorial representation of the studied scene containing more gist (i.e., meaning) than verbatim (i.e., surface or perceptual details) information is retrieved from LTM after a short compared to a long study presentation time. We conclude that our findings support fuzzy-trace theory.

  16. The Role of Binocular Disparity in Rapid Scene and Pattern Recognition

    Directory of Open Access Journals (Sweden)

    Matteo Valsecchi

    2013-04-01

    Full Text Available We investigated the contribution of binocular disparity to the rapid recognition of scenes and simpler spatial patterns using a paradigm combining backward masked stimulus presentation and short-term match-to-sample recognition. First, we showed that binocular disparity did not contribute significantly to the recognition of briefly presented natural and artificial scenes, even when the availability of monocular cues was reduced. Subsequently, using dense random dot stereograms as stimuli, we showed that observers were in principle able to extract spatial patterns defined only by disparity under brief, masked presentations. Comparing our results with the predictions from a cue-summation model, we showed that combining disparity with luminance did not per se disrupt the processing of disparity. Our results suggest that the rapid recognition of scenes is mediated mostly by a monocular comparison of the images, although we can rely on stereo in fast pattern recognition.

  17. Binary patterns encoded convolutional neural networks for texture recognition and remote sensing scene classification

    Science.gov (United States)

    Anwer, Rao Muhammad; Khan, Fahad Shahbaz; van de Weijer, Joost; Molinier, Matthieu; Laaksonen, Jorma

    2018-04-01

    Designing discriminative powerful texture features robust to realistic imaging conditions is a challenging computer vision problem with many applications, including material recognition and analysis of satellite or aerial imagery. In the past, most texture description approaches were based on dense orderless statistical distribution of local features. However, most recent approaches to texture recognition and remote sensing scene classification are based on Convolutional Neural Networks (CNNs). The de facto practice when learning these CNN models is to use RGB patches as input with training performed on large amounts of labeled data (ImageNet). In this paper, we show that Local Binary Patterns (LBP) encoded CNN models, codenamed TEX-Nets, trained using mapped coded images with explicit LBP based texture information provide complementary information to the standard RGB deep models. Additionally, two deep architectures, namely early and late fusion, are investigated to combine the texture and color information. To the best of our knowledge, we are the first to investigate Binary Patterns encoded CNNs and different deep network fusion architectures for texture recognition and remote sensing scene classification. We perform comprehensive experiments on four texture recognition datasets and four remote sensing scene classification benchmarks: UC-Merced with 21 scene categories, WHU-RS19 with 19 scene classes, RSSCN7 with 7 categories and the recently introduced large scale aerial image dataset (AID) with 30 aerial scene types. We demonstrate that TEX-Nets provide complementary information to standard RGB deep model of the same network architecture. Our late fusion TEX-Net architecture always improves the overall performance compared to the standard RGB network on both recognition problems. Furthermore, our final combination leads to consistent improvement over the state-of-the-art for remote sensing scene classification.
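    A sketch of the "coded image" idea behind such texture-encoded inputs: compute a per-pixel LBP code map and feed it alongside (or fused with) the RGB channels into a network. The mapping below uses scikit-image's uniform LBP as a rough stand-in; it is not the TEX-Net mapping or fusion architecture from the paper, and the image is random placeholder data.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.color import rgb2gray

def lbp_coded_image(rgb, P=8, R=1):
    """Map an RGB image to a texture-coded image via uniform LBP.

    Returns codes scaled to [0, 1]; in a TEX-Net-style pipeline a coded map of
    this kind would be used as network input alongside the RGB channels.
    """
    gray = rgb2gray(rgb)
    codes = local_binary_pattern(gray, P, R, method="uniform")
    return codes / codes.max()

rng = np.random.default_rng(4)
rgb = rng.random((64, 64, 3))
coded = lbp_coded_image(rgb)

# Early-fusion style input: stack the texture codes with the colour channels.
fused_input = np.dstack([rgb, coded])       # shape (64, 64, 4)
print(fused_input.shape)
```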

  18. Discrimination of Chinese Sauce liquor using FT-IR and two-dimensional correlation IR spectroscopy

    Science.gov (United States)

    Sun, Su-Qin; Li, Chang-Wen; Wei, Ji-Ping; Zhou, Qun; Noda, Isao

    2006-11-01

We applied the three-step IR macro-fingerprint identification method to obtain the IR characteristic fingerprints of the so-called Chinese Sauce liquors (Moutai liquor and Kinsly liquor) and a counterfeit Moutai. These fingerprints can be used for the identification and discrimination of similar liquor products. The comparison of their conventional IR spectra, as the first step of identification, shows that the primary difference among the Sauce liquors is the intensity of the characteristic peaks at 1592 and 1225 cm⁻¹. The comparison of the second-derivative IR spectra, as the second step of identification, shows that the characteristic absorption in 1400-1800 cm⁻¹ is substantially different. The comparison of 2D-IR correlation spectra, as the third and final step of identification, can discriminate the liquors from yet another perspective. Furthermore, the method was successfully applied to the discrimination of a counterfeit Moutai from the genuine Sauce liquor. The success of the three-step IR macro-fingerprint identification in providing a rapid and effective method for the identification of Chinese liquor suggests the potential extension of this technique to the identification and discrimination of other wines and spirits as well.

  19. The effects of alcohol intoxication on attention and memory for visual scenes.

    Science.gov (United States)

    Harvey, Alistair J; Kneller, Wendy; Campbell, Alison C

    2013-01-01

    This study tests the claim that alcohol intoxication narrows the focus of visual attention on to the more salient features of a visual scene. A group of alcohol intoxicated and sober participants had their eye movements recorded as they encoded a photographic image featuring a central event of either high or low salience. All participants then recalled the details of the image the following day when sober. We sought to determine whether the alcohol group would pay less attention to the peripheral features of the encoded scene than their sober counterparts, whether this effect of attentional narrowing was stronger for the high-salience event than for the low-salience event, and whether it would lead to a corresponding deficit in peripheral recall. Alcohol was found to narrow the focus of foveal attention to the central features of both images but did not facilitate recall from this region. It also reduced the overall amount of information accurately recalled from each scene. These findings demonstrate that the concept of alcohol myopia originally posited to explain the social consequences of intoxication (Steele & Josephs, 1990) may be extended to explain the relative neglect of peripheral information during the processing of visual scenes.

  20. Camera pose estimation for augmented reality in a small indoor dynamic scene

    Science.gov (United States)

    Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad

    2017-09-01

Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or dynamic objects. This paper presents a real-time monocular piecewise-planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows, on the one hand, rendering virtual objects in a meaningful way and, on the other hand, improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process. We propose to exploit the rigid motion of 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves our system's accuracy and robustness compared with classical SLAM systems.
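    For context, the core pose-from-correspondences step that any such SLAM tracker ultimately solves can be sketched with OpenCV's standard PnP solver; the intrinsics, planar map points, and detections below are hypothetical, and this is not the paper's piecewise-planar pipeline.

```python
import numpy as np
import cv2

# Hypothetical 3-D map points lying on a planar patch (world coordinates, metres)
object_points = np.array([[0.0, 0.0, 0.0],
                          [0.2, 0.0, 0.0],
                          [0.2, 0.1, 0.0],
                          [0.0, 0.1, 0.0],
                          [0.1, 0.05, 0.0],
                          [0.05, 0.02, 0.0]], dtype=np.float64)

# Their observed projections in the current frame (pixels, hypothetical)
image_points = np.array([[320.5, 240.1],
                         [420.3, 238.7],
                         [421.0, 190.2],
                         [321.2, 191.5],
                         [371.0, 214.9],
                         [345.6, 229.8]], dtype=np.float64)

K = np.array([[800.0, 0.0, 320.0],       # assumed pinhole intrinsics
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                       # assume no lens distortion

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist,
                              flags=cv2.SOLVEPNP_ITERATIVE)
R, _ = cv2.Rodrigues(rvec)               # rotation matrix of the camera pose
print(ok, np.round(tvec.ravel(), 3))
```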

  1. Relationship between Childhood Meal Scenes at Home Remembered by University Students and their Current Personality

    OpenAIRE

    恩村, 咲希; Onmura, Saki

    2013-01-01

    This study examines the relationship between childhood meal scenes at home that are remembered by university students and their current personality. The meal scenes are analyzed in terms of companions, conversation content, conversation frequency, atmosphere, and consideration of meals. The scale of the conversation content in childhood meal scenes was prepared on the basis of the results of a preliminary survey. The result showed that a relationship was found between personality traits and c...

  2. History of Reading Struggles Linked to Enhanced Learning in Low Spatial Frequency Scenes

    Science.gov (United States)

    Schneps, Matthew H.; Brockmole, James R.; Sonnert, Gerhard; Pomplun, Marc

    2012-01-01

    People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school. PMID:22558210

  3. History of reading struggles linked to enhanced learning in low spatial frequency scenes.

    Directory of Open Access Journals (Sweden)

    Matthew H Schneps

Full Text Available People with dyslexia, who face lifelong struggles with reading, exhibit numerous associated low-level sensory deficits including deficits in focal attention. Countering this, studies have shown that struggling readers outperform typical readers in some visual tasks that integrate distributed information across an expanse. Though such abilities would be expected to facilitate scene memory, prior investigations using the contextual cueing paradigm failed to find corresponding advantages in dyslexia. We suggest that these studies were confounded by task-dependent effects exaggerating known focal attention deficits in dyslexia, and that, if natural scenes were used as the context, advantages would emerge. Here, we investigate this hypothesis by comparing college students with histories of severe lifelong reading difficulties (SR) and typical readers (TR) in contexts that vary attention load. We find no differences in contextual-cueing when spatial contexts are letter-like objects, or when contexts are natural scenes. However, the SR group significantly outperforms the TR group when contexts are low-pass filtered natural scenes [F(3, 39) = 3.15, p<.05]. These findings suggest that perception or memory for low spatial frequency components in scenes is enhanced in dyslexia. These findings are important because they suggest strengths for spatial learning in a population otherwise impaired, carrying implications for the education and support of students who face challenges in school.

  4. Validation of Correction Algorithms for Near-IR Analysis of Human Milk in an Independent Sample Set—Effect of Pasteurization

    Science.gov (United States)

    Kotrri, Gynter; Fusch, Gerhard; Kwan, Celia; Choi, Dasol; Choi, Arum; Al Kafi, Nisreen; Rochow, Niels; Fusch, Christoph

    2016-01-01

    Commercial infrared (IR) milk analyzers are being increasingly used in research settings for the macronutrient measurement of breast milk (BM) prior to its target fortification. These devices, however, may not provide reliable measurement if not properly calibrated. In the current study, we tested a correction algorithm for a Near-IR milk analyzer (Unity SpectraStar, Brookfield, CT, USA) for fat and protein measurements, and examined the effect of pasteurization on the IR matrix and the stability of fat, protein, and lactose. Measurement values generated through Near-IR analysis were compared against those obtained through chemical reference methods to test the correction algorithm for the Near-IR milk analyzer. Macronutrient levels were compared between unpasteurized and pasteurized milk samples to determine the effect of pasteurization on macronutrient stability. The correction algorithm generated for our device was found to be valid for unpasteurized and pasteurized BM. Pasteurization had no effect on the macronutrient levels and the IR matrix of BM. These results show that fat and protein content can be accurately measured and monitored for unpasteurized and pasteurized BM. Of additional importance is the implication that donated human milk, generally low in protein content, has the potential to be target fortified. PMID:26927169

  5. The Hip-Hop club scene: Gender, grinding and sex.

    Science.gov (United States)

    Muñoz-Laboy, Miguel; Weinstein, Hannah; Parker, Richard

    2007-01-01

    Hip-Hop culture is a key social medium through which many young men and women from communities of colour in the USA construct their gender. In this study, we focused on the Hip-Hop club scene in New York City with the intention of unpacking narratives of gender dynamics from the perspective of young men and women, and how these relate to their sexual experiences. We conducted a three-year ethnographic study that included ethnographic observations of Hip-Hop clubs and their social scene, and in-depth interviews with young men and young women aged 15-21. This paper describes how young people negotiate gender relations on the dance floor of Hip-Hop clubs. The Hip-Hop club scene represents a context or setting where young men's masculinities are contested by the social environment, where women challenge hypermasculine privilege and where young people can set the stage for what happens next in their sexual and emotional interactions. Hip-Hop culture therefore provides a window into the gender and sexual scripts of many urban minority youth. A fuller understanding of these patterns can offer key insights into the social construction of sexual risk, as well as the possibilities for sexual health promotion, among young people in urban minority populations.

  6. Unbiased Photocatalytic Hydrogen Generation from Pure Water on Stable Ir-treated In 0.33 Ga 0.67 N Nanorods

    KAUST Repository

    Ebaid, Mohamed; Priante, Davide; Liu, Guangyu; Zhao, Chao; Sharizal Alias, Mohd; Buttner, Ulrich; Khee Ng, Tien; Taylor Isimjan, Tayirjan; Idriss, Hicham; Ooi, Boon S.

    2017-01-01

InGaN-based nanostructures have recently been recognized as promising materials for efficient solar hydrogen generation. This is due to their chemical stability, adjustable optoelectronic properties, suitable band edge alignment, and large surface-to-volume ratio. The inherent high density of surface trapping states and the lack of compatible conductive substrates, however, hindered their use as stable photo-catalysts. We have designed, synthesized and tested an efficient photocatalytic system using stable In0.33Ga0.67N-based nanorods (NRs) grown on an all-metal stack substrate (Ti-Mo) for a better electron transfer process. In addition, we have applied a bifunctional ultrathin thiol-based organic surface treatment using 1,2-ethanedithiol (EDT), in which sulfur atoms protected the surface from oxidation. This treatment has dual functions: it passivates the surface (by the removal of dangling bonds) and creates ligands for linking Ir-metal ions as oxygen evolution centers on top of the semiconductor. This treatment, when applied to In0.33Ga0.67N NRs, resulted in a photo-catalyst that achieved 3.5% solar-to-hydrogen (STH) efficiency in pure water (pH~7, buffer solution) under simulated one-sun (AM1.5G) illumination and without electrical bias. Over the tested period, a steady increase of the gas evolution rate was observed, from which a turnover frequency of 0.23 s⁻¹ was calculated. The novel growth of InGaN-based NRs on a metal as well as the versatile surface functionalization techniques (EDT-Ir) have a high potential for making stable photo-catalysts with adjustable band gaps and band edges to harvest sunlight.

  7. Unbiased Photocatalytic Hydrogen Generation from Pure Water on Stable Ir-treated In 0.33 Ga 0.67 N Nanorods

    KAUST Repository

    Ebaid, Mohamed

    2017-05-11

InGaN-based nanostructures have recently been recognized as promising materials for efficient solar hydrogen generation. This is due to their chemical stability, adjustable optoelectronic properties, suitable band edge alignment, and large surface-to-volume ratio. The inherent high density of surface trapping states and the lack of compatible conductive substrates, however, hindered their use as stable photo-catalysts. We have designed, synthesized and tested an efficient photocatalytic system using stable In0.33Ga0.67N-based nanorods (NRs) grown on an all-metal stack substrate (Ti-Mo) for a better electron transfer process. In addition, we have applied a bifunctional ultrathin thiol-based organic surface treatment using 1,2-ethanedithiol (EDT), in which sulfur atoms protected the surface from oxidation. This treatment has dual functions: it passivates the surface (by the removal of dangling bonds) and creates ligands for linking Ir-metal ions as oxygen evolution centers on top of the semiconductor. This treatment, when applied to In0.33Ga0.67N NRs, resulted in a photo-catalyst that achieved 3.5% solar-to-hydrogen (STH) efficiency in pure water (pH~7, buffer solution) under simulated one-sun (AM1.5G) illumination and without electrical bias. Over the tested period, a steady increase of the gas evolution rate was observed, from which a turnover frequency of 0.23 s⁻¹ was calculated. The novel growth of InGaN-based NRs on a metal as well as the versatile surface functionalization techniques (EDT-Ir) have a high potential for making stable photo-catalysts with adjustable band gaps and band edges to harvest sunlight.

  8. Scene Classification Using High Spatial Resolution Multispectral Data

    National Research Council Canada - National Science Library

    Garner, Jamada

    2002-01-01

...), High-spatial-resolution (8-meter), 4-color MSI data from IKONOS provide a new tool for scene classification. The utility of these data is studied for the purpose of classifying the Elkhorn Slough and surrounding wetlands in central...

  9. The Introduction of an Undergraduate Interventional Radiology (IR) Curriculum: Impact on Medical Student Knowledge and Interest in IR

    International Nuclear Information System (INIS)

    Shaikh, M.; Shaygi, B.; Asadi, H.; Thanaratnam, P.; Pennycooke, K.; Mirza, M.; Lee, M.

    2016-01-01

Introduction: Interventional radiology (IR) plays a vital role in modern medicine, with increasing demand for services, but with a shortage of experienced interventionalists. The aim of this study was to determine the impact of a recently introduced IR curriculum on perception, knowledge, and interest of medical students regarding various aspects of IR. Methods: In 2014, an anonymous web-based questionnaire was sent to 309 4th year medical students in a single institution within an EU country, both before and after delivery of a 10-h IR teaching curriculum. Results: Seventy-six percent (236/309) of the respondents participated in the pre-IR module survey, while 50 % (157/309) responded to the post-IR module survey. While 62 % (147/236) of the respondents reported poor or no knowledge of IR compared to other medical disciplines in the pre-IR module survey, this decreased to 17 % (27/157) in the post-IR module survey. The correct responses regarding knowledge of selected IR procedures improved from 70 to 94 % for venous access, 78 to 99 % for uterine fibroid embolization, 75 to 97 % for GI bleeding embolization, 60 to 92 % for trauma embolization, 71 to 92 % for tumor ablation, and 81 to 94 % for angioplasty and stenting in peripheral arterial disease. With regard to knowledge of IR clinical roles, responses improved from 42 to 59 % for outpatient clinic review of patients and having inpatient beds, from 63 to 76 % for direct patient consultation, and from 43 to 60 % for having regular ward rounds. The number of students who would consider a career in IR increased from 60 to 73 %. Conclusion: Delivering an undergraduate IR curriculum increased the knowledge and understanding of various aspects of IR and also the general enthusiasm for pursuing this specialty as a future career choice.

  10. The Introduction of an Undergraduate Interventional Radiology (IR) Curriculum: Impact on Medical Student Knowledge and Interest in IR

    Energy Technology Data Exchange (ETDEWEB)

    Shaikh, M. [Bradford Royal Infirmary, Department of Radiology, Bradford Teaching Hospital Foundation Trust (United Kingdom); Shaygi, B. [Royal Devon and Exeter Hospital, Interventional Radiology Department (United Kingdom); Asadi, H., E-mail: asadi.hamed@gmail.com; Thanaratnam, P.; Pennycooke, K.; Mirza, M.; Lee, M., E-mail: mlee@rcsi.ie [Beaumont Hospital, Interventional Radiology Service, Department of Radiology (Ireland)

    2016-04-15

Introduction: Interventional radiology (IR) plays a vital role in modern medicine, with increasing demand for services, but with a shortage of experienced interventionalists. The aim of this study was to determine the impact of a recently introduced IR curriculum on perception, knowledge, and interest of medical students regarding various aspects of IR. Methods: In 2014, an anonymous web-based questionnaire was sent to 309 4th year medical students in a single institution within an EU country, both before and after delivery of a 10-h IR teaching curriculum. Results: Seventy-six percent (236/309) of the respondents participated in the pre-IR module survey, while 50 % (157/309) responded to the post-IR module survey. While 62 % (147/236) of the respondents reported poor or no knowledge of IR compared to other medical disciplines in the pre-IR module survey, this decreased to 17 % (27/157) in the post-IR module survey. The correct responses regarding knowledge of selected IR procedures improved from 70 to 94 % for venous access, 78 to 99 % for uterine fibroid embolization, 75 to 97 % for GI bleeding embolization, 60 to 92 % for trauma embolization, 71 to 92 % for tumor ablation, and 81 to 94 % for angioplasty and stenting in peripheral arterial disease. With regard to knowledge of IR clinical roles, responses improved from 42 to 59 % for outpatient clinic review of patients and having inpatient beds, from 63 to 76 % for direct patient consultation, and from 43 to 60 % for having regular ward rounds. The number of students who would consider a career in IR increased from 60 to 73 %. Conclusion: Delivering an undergraduate IR curriculum increased the knowledge and understanding of various aspects of IR and also the general enthusiasm for pursuing this specialty as a future career choice.

  11. Ion beam synthesis of IrSi3 by implantation of 2 MeV Ir ions

    International Nuclear Information System (INIS)

    Sjoreen, T.P.; Chisholm, M.F.; Hinneberg, H.J.

    1992-11-01

Formation of a buried IrSi₃ layer in (111)-oriented Si by ion implantation and annealing has been studied at an implantation energy of 2 MeV for substrate temperatures of 450–550 °C. Rutherford backscattering (RBS), ion channeling and cross-sectional transmission electron microscopy showed that a buried epitaxial IrSi₃ layer is produced at 550 °C by implanting ≥ 3.4 × 10¹⁷ Ir/cm² and subsequently annealing for 1 h at 1000 °C plus 5 h at 1100 °C. At a dose of 3.4 × 10¹⁷ Ir/cm², the thickness of the layer varied between 120 and 190 nm and many large IrSi₃ precipitates were present above and below the film. Increasing the dose to 4.4 × 10¹⁷ Ir/cm² improved the layer uniformity at the expense of increased lattice damage in the overlying Si. RBS analysis of layer formation as a function of substrate temperature revealed the competition between the mechanisms for optimizing surface crystallinity vs. IrSi₃ layer formation. Little apparent substrate-temperature dependence was evident in the as-implanted state, but after annealing the crystallinity of the top Si layer was observed to deteriorate with increasing substrate temperature while the precipitate coarsening and coalescence improved

  12. Gene complementation. Neither Ir-GLphi gene need be present in the proliferative T cell to generate an immune response to Poly(Glu55Lys36Phe9)n

    International Nuclear Information System (INIS)

    Longo, D.L.; Schwartz, R.H.

    1980-01-01

The cellular requirements for immune response (Ir) gene expression in a T cell proliferative response under dual Ir gene control were examined with radiation-induced bone marrow chimeras. The response to poly(Glu55Lys36Phe9)n (GLphi) requires two responder alleles that in the [B10.A x B10.A(18R)]F1 map in I-Ab and I-Ek/Cd. Chimeras in which a mixture of the nonresponder B10.A parental cells and the nonresponder B10.A(18R) parental cells were allowed to mature in a responder F1 environment did not respond to GLphi. When T cells from such A + 18R → F1 chimeras were primed in the presence of responder antigen-presenting cells (APC), the chimeric T cells responded to GLphi. When bone marrow cells from (B10.A X B10)F1 responder animals were allowed to mature in a low-responder B10 or B10.A parental environment, neither chimera could respond to GLphi. This demonstrated that the presence of high-responder APC, which derive from the donor bone marrow, was not sufficient to generate a GLphi response. Finally, B10.A(4R) T cells, which possess neither Ir-GLphi responder allele, could be educated to mount a GLphi-proliferative response provided that they matured in a responder environment and were primed with APC expressing both responder alleles. Therefore, the gene products of the complementing Ir-GLphi responder alleles appear to function as a single restriction element at the level of the APC

  13. CSRNet: Dilated Convolutional Neural Networks for Understanding the Highly Congested Scenes

    OpenAIRE

    Li, Yuhong; Zhang, Xiaofan; Chen, Deming

    2018-01-01

We propose a network for Congested Scene Recognition called CSRNet to provide a data-driven, deep-learning method that can understand highly congested scenes and perform accurate count estimation as well as present high-quality density maps. The proposed CSRNet is composed of two major components: a convolutional neural network (CNN) as the front-end for 2D feature extraction and a dilated CNN for the back-end, which uses dilated kernels to deliver larger receptive fields and to replace po...
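    To illustrate the dilated-convolution idea (enlarging the receptive field without pooling while regressing a density map), here is a toy back-end in PyTorch; the channel widths and depth are placeholders, not the published CSRNet configuration.

```python
import torch
import torch.nn as nn

class DilatedBackEnd(nn.Module):
    """Toy dilated-convolution back-end: enlarges the receptive field without
    pooling and regresses a single-channel crowd density map.
    (Channel widths and depth are placeholders, not the published CSRNet.)"""
    def __init__(self, in_channels=512):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Conv2d(in_channels, 256, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, 128, kernel_size=3, padding=2, dilation=2),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, kernel_size=1),    # density map output
        )

    def forward(self, x):
        return self.layers(x)

# Front-end features (e.g. from a truncated CNN) -> density map -> count estimate
features = torch.randn(1, 512, 48, 64)
density = DilatedBackEnd()(features)
print(density.shape, float(density.sum()))      # predicted count = sum of density
```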

  14. Smulkaus ir vidutinio verslo konkurencingumas Lietuvoje [Competitiveness of small and medium-sized business in Lithuania]

    OpenAIRE

    Vijeikis, Juozas; Makštutis, Antanas

    2009-01-01

    Scientific problem, novelty and relevance of the article: competitiveness, as a manifestation of effective company performance, is a topical issue in the country's business life under a policy of sustainable economic development. As a problem for the development and competitiveness of small and medium-sized business (SME), this policy has not been systematically studied and described for Lithuanian conditions in the scientific and practical literature. One of the most important factors in pursuing rapid economic growth is the development of sustainable entrepreneurship in Lithuania...

  15. Structural, phase stability, electronic, elastic properties and hardness of IrN2 and zinc blende IrN: First-principles calculations

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Zhaobo [Key Laboratory of Advanced Materials of Yunnan Province & Key Laboratory of Advanced Materials of Non-Ferrous and Precious Rare Metals Ministry of Education, Kunming University of Science and Technology, Kunming 650093 (China); Zhou, Xiaolong, E-mail: kmzxlong@163.com [Key Laboratory of Advanced Materials of Yunnan Province & Key Laboratory of Advanced Materials of Non-Ferrous and Precious Rare Metals Ministry of Education, Kunming University of Science and Technology, Kunming 650093 (China); Zhang, Kunhua [State Key Laboratory of Rare Precious Metals Comprehensive Utilization of New Technologies, Kunming Institute of Precious Metals, Kunming 650106 (China)

    2016-12-15

    First-principles calculations were performed to investigate the structural properties, phase stability, electronic and elastic properties, and hardness of monoclinic IrN2 (m-IrN2), orthorhombic IrN2 (o-IrN2) and zinc blende IrN (ZB IrN). The results show that only m-IrN2 is both thermodynamically and dynamically stable. The calculated band structures and density of states (DOS) curves indicate that the o-IrN2 and ZB IrN compounds have metallic behavior while m-IrN2 has a small band gap of ~0.3 eV, and that all three show hybridization between the Ir-5d and N-2p states, which forms covalent bonding between Ir and N atoms. The difference charge density reveals electron transfer from Ir to N in all three Ir-N compounds, forming strong directional covalent bonds. Notably, a strong N-N bond appears in m-IrN2 and o-IrN2. The ratio of bulk to shear modulus (B/G) indicates that all three Ir-N compounds are ductile, with ZB IrN more ductile than the two IrN2 phases. m-IrN2 has the highest Debye temperature (736 K), indicating that it possesses the strongest covalent bonding. The hardness of the three Ir-N compounds was also calculated; the results show that m-IrN2 (18.23 GPa) and o-IrN2 (18.02 GPa) are ultraincompressible, while ZB IrN has a negative value, which may be attributed to a phase transition at ca. 1.98 GPa.
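
    The B/G ductility argument used above follows Pugh's criterion (B/G > 1.75 is conventionally read as ductile). A minimal sketch of that check is given below; the moduli are placeholders, not values from the paper.

    ```python
    # Illustrative Pugh-ratio ductility check (B/G > 1.75 => ductile).
    # The moduli below are placeholders, not values reported in the paper.
    def pugh_ratio(bulk_modulus_gpa: float, shear_modulus_gpa: float) -> float:
        """Return the Pugh ratio B/G."""
        return bulk_modulus_gpa / shear_modulus_gpa

    candidates = {
        "m-IrN2 (hypothetical B, G)": (350.0, 150.0),
        "ZB IrN (hypothetical B, G)": (300.0, 90.0),
    }
    for name, (b, g) in candidates.items():
        ratio = pugh_ratio(b, g)
        verdict = "ductile" if ratio > 1.75 else "brittle"
        print(f"{name}: B/G = {ratio:.2f} -> {verdict}")
    ```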

  16. Where's Wally: the influence of visual salience on referring expression generation.

    Science.gov (United States)

    Clarke, Alasdair D F; Elsner, Micha; Rohde, Hannah

    2013-01-01

    Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing has failed to find an important and integrated role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents a study testing whether participants are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the Where's Wally? books. Referring expressions for large targets are shorter than those for smaller targets, and expressions about targets in highly cluttered scenes use more words. We also find that participants are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  17. Behind the scenes at the LHC inauguration

    CERN Document Server

    2008-01-01

    On 21 October the LHC inauguration ceremony will take place and people from all over CERN have been busy preparing. With delegations from 38 countries attending, including ministers and heads of state, the Bulletin has gone behind the scenes to see what it takes to put together an event of this scale.

  18. Application-specific specialty microstructured optical fibers for mid-IR and THz photonics (Invited)

    DEFF Research Database (Denmark)

    Pal, Bishnu P.; Barh, Ajanta; Varshney, Ravi K.

    2016-01-01

    A review of several of our designed specialty microstructured optical fibers (MOFs) for mid-IR and THz generation and transmission including high power transmission is presented. Extensive results on performance of the designed MOFs are described....

  19. AIRS/Aqua Level 1C Infrared (IR) resampled and corrected radiances V006

    Data.gov (United States)

    National Aeronautics and Space Administration — The AIRS Infrared (IR) level 1C data set contains AIRS infrared calibrated and geolocated radiances in W/m2/micron/ster. This data set is generated from AIRS level...

  20. Development of a new osmium-191: Iridium-191m radionuclide generator: Final report

    International Nuclear Information System (INIS)

    Treves, S.; Packard, A.B.

    1986-01-01

    The use of iridium-191m (T1/2 = 5 s) for first-pass radionuclide angiography offers the potential advantages of lower patient radiation dose and the ability to obtain repeated studies without interference from the previously administered radioisotope. These potential advantages have been offset by the absence of satisfactory 191Os-191mIr generators. The goal of this project was, therefore, the development of a 191Os-191mIr generator that would be suitable for clinical use. This goal was first sought through modifications of an existing 191Os-191mIr generator design (i.e., changes in the ion exchange material and eluent), but these changes did not lead to the required improvements. A new approach was then undertaken in which different chemical forms of the 191Os parent were evaluated in prototype generators. The complex trans-dioxobisoxalatoosmate(VI) led to a generator with higher 191mIr yield (25 to 30%/mL) and lower 191Os breakthrough (~10⁻⁴%) with a more physiologically compatible eluent than had been previously achieved. Toxicity studies were conducted on the eluate and an IND subsequently obtained. While this is not a final solution to the problem of developing a clinically acceptable 191Os-191mIr generator, the "oxalate" generator is the most significant improvement of the 191Os-191mIr generator to date and will be used in an expanded program of clinical studies. 16 refs., 16 tabs.
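
    With a 5 s half-life, the eluted 191mIr activity decays almost completely within a minute, which is why such generators are eluted at the point of use. The toy calculation below (delays chosen arbitrarily) shows the remaining activity fraction after a delay t, using only the half-life quoted in the record.

    ```python
    # Fraction of 191mIr activity remaining after a delay, given T1/2 = 5 s
    # (half-life taken from the record above; the delay values are illustrative).
    import math

    T_HALF_S = 5.0  # seconds

    def remaining_fraction(delay_s: float) -> float:
        """exp(-ln(2) * t / T1/2)"""
        return math.exp(-math.log(2) * delay_s / T_HALF_S)

    for delay in (5, 15, 30):
        print(f"after {delay:2d} s: {remaining_fraction(delay):.3f} of the eluted activity remains")
    ```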

  1. Protection of p+-n-Si Photoanodes by Sputter-Deposited Ir/IrOx Thin Films

    DEFF Research Database (Denmark)

    Mei, Bastian Timo; Seger, Brian; Pedersen, Thomas

    2014-01-01

    Sputter deposition of Ir/IrOx on p+-n-Si without interfacial corrosion protection layers yielded photoanodes capable of efficient water oxidation (OER) in acidic media (1 M H2SO4). Stability of at least 18 h was shown by chronoamperometry at 1.23 V versus RHE (reversible hydrogen electrode) under 38...... density of 1 mA/cm2 at 1.05 V vs. RHE. Further improvement by heat treatment resulted in a cathodic shift of 40 mV and enabled a current density of 10 mA/cm2 (requirements for a 10% efficient tandem device) at 1.12 V vs. RHE under irradiation. Thus, the simple IrOx/Ir/p+-n-Si structures not only provide...

  2. Automatic generation of pictorial transcripts of video programs

    Science.gov (United States)

    Shahraray, Behzad; Gibbon, David C.

    1995-03-01

    An automatic authoring system for the generation of pictorial transcripts of video programs which are accompanied by closed caption information is presented. A number of key frames, each of which represents the visual information in a segment of the video (i.e., a scene), are selected automatically by performing a content-based sampling of the video program. The textual information is recovered from the closed caption signal and is initially segmented based on its implied temporal relationship with the video segments. The text segmentation boundaries are then adjusted, based on lexical analysis and/or caption control information, to account for synchronization errors due to possible delays in the detection of scene boundaries or the transmission of the caption information. The closed caption text is further refined through linguistic processing for conversion to lowercase with correct capitalization. The key frames and the related text generate a compact multimedia presentation of the contents of the video program which lends itself to efficient storage and transmission. This compact representation can be viewed on a computer screen, or used to generate the input to a commercial text processing package to generate a printed version of the program.

  3. Desirable and undesirable future thoughts call for different scene construction processes.

    Science.gov (United States)

    de Vito, S; Neroni, M A; Gamboz, N; Della Sala, S; Brandimonte, M A

    2015-01-01

    Despite the growing interest in the ability to foresee the future (episodic future thinking), it is still unclear how healthy people construct possible future scenarios. We suggest that different future thoughts require different processes of scene construction. Thirty-five participants were asked to imagine desirable and less desirable future events. Imagining desirable events increased the ease of scene construction, the frequency of life scripts, the number of internal details, and the clarity of sensory and spatio-temporal information. The initial description of general personal knowledge lasted longer in undesirable than in desirable anticipations. Finally, participants were more prone to explicitly indicate autobiographical memory as the main source of their simulations of undesirable episodes, whereas they equally related the simulations of desirable events to autobiographical events or semantic knowledge. These findings show that desirable and undesirable scenarios call for different mechanisms of scene construction. The present study emphasizes that future thinking cannot be considered as a monolithic entity.

  4. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Science.gov (United States)

    Chen, Qiang; Xu, Hongguo; Tan, Lidong

    2015-01-01

    In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.
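
    The two-dimensional direct linear transformation that this method improves upon reduces, in its basic form, to fitting a homography from point correspondences by a singular value decomposition. The sketch below shows that generic DLT step on synthetic points; it is not the paper's multi-object formulation, and the coordinates are invented for illustration.

    ```python
    # Generic 2D DLT: fit a homography H mapping image points to plane
    # coordinates from >= 4 correspondences. Synthetic data for illustration;
    # the paper's method additionally fuses several small calibration objects
    # and minimizes the reprojection error over all of them.
    import numpy as np

    def fit_homography(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        """src, dst: (N, 2) arrays of corresponding points, N >= 4."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows))
        h = vt[-1]
        return (h / h[-1]).reshape(3, 3)

    # Example: four corners of a 1 m calibration square seen in an image.
    image_pts = np.array([[102.0, 210.0], [480.0, 198.0], [505.0, 560.0], [95.0, 575.0]])
    plane_pts = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    H = fit_homography(image_pts, plane_pts)
    p = H @ np.array([102.0, 210.0, 1.0])
    print(p[:2] / p[2])  # ~ [0, 0]: the first corner maps to the plane origin
    ```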

  5. Application of composite small calibration objects in traffic accident scene photogrammetry.

    Directory of Open Access Journals (Sweden)

    Qiang Chen

    Full Text Available In order to address the difficulty of arranging large calibration objects and the low measurement accuracy of small calibration objects in traffic accident scene photogrammetry, a photogrammetric method based on a composite of small calibration objects is proposed. Several small calibration objects are placed around the traffic accident scene, and the coordinate system of the composite calibration object is given based on one of them. By maintaining the relative position and coplanar relationship of the small calibration objects, the local coordinate system of each small calibration object is transformed into the coordinate system of the composite calibration object. The two-dimensional direct linear transformation method is improved based on minimizing the reprojection error of the calibration points of all objects. A rectified image is obtained using the nonlinear optimization method. The increased accuracy of traffic accident scene photogrammetry using a composite small calibration object is demonstrated through the analysis of field experiments and case studies.

  6. Novel cross-talk between IGF-IR and DDR1 regulates IGF-IR trafficking, signaling and biological responses

    Science.gov (United States)

    Sacco, Antonella; Morcavallo, Alaide; Vella, Veronica; Voci, Concetta; Spatuzza, Michela; Xu, Shi-Qiong; Iozzo, Renato V.; Vigneri, Riccardo; Morrione, Andrea; Belfiore, Antonino

    2015-01-01

    The insulin-like growth factor-I receptor (IGF-IR) plays a key role in regulating mammalian development and growth, and is frequently deregulated in cancer, contributing to tumor initiation and progression. Discoidin domain receptor 1 (DDR1), a collagen receptor tyrosine kinase, is also frequently overexpressed in cancer and implicated in cancer progression. Thus, we investigated whether a functional cross-talk between the IGF-IR and DDR1 exists and plays any role in cancer progression. Using human breast cancer cells we found that DDR1 constitutively associated with the IGF-IR. However, this interaction was enhanced by IGF-I stimulation, which promoted rapid DDR1 tyrosine-phosphorylation and co-internalization with the IGF-IR. Significantly, DDR1 was critical for IGF-IR endocytosis and trafficking into early endosomes, IGF-IR protein expression and IGF-I intracellular signaling and biological effects, including cell proliferation, migration and colony formation. These biological responses were inhibited by DDR1 silencing and enhanced by DDR1 overexpression. Experiments in mouse fibroblasts co-transfected with the human IGF-IR and DDR1 gave similar results and indicated that, in the absence of IGF-IR, collagen-dependent phosphorylation of DDR1 is impaired. These results demonstrate a critical role of DDR1 in the regulation of IGF-IR action, and identify DDR1 as a novel important target for breast cancers that overexpress IGF-IR. PMID:25840417

  7. Number 13 / Part I. Music. 3. Mad Scenes: A Warning against Overwhelming Passions

    Directory of Open Access Journals (Sweden)

    Marisi Rossella

    2017-03-01

    Full Text Available This study focuses on mad scenes in poetry and musical theatre, stressing that, according to Aristotle’s theory on catharsis and the Affektenlehre, they had a pedagogical role on the audience. Some mad scenes by J.S. Bach, Handel and Mozart are briefly analyzed, highlighting their most relevant textual and musical characteristics.

  8. Generativity and Themes of Agency and Communion in Adult Autobiography.

    Science.gov (United States)

    Mansfield, Elizabeth D.; McAdams, Dan P.

    1996-01-01

    Examines differences between 70 more- and less-generative adults through a new coding system for analyzing themes of agency and communion in significant life-story scenes. The study revealed that highly generative adults express greater levels of the communion themes of dialog and care/help and greater levels of agency/communion integration. (LSR)

  9. Out of Mind, Out of Sight: Unexpected Scene Elements Frequently Go Unnoticed Until Primed.

    Science.gov (United States)

    Slavich, George M; Zimbardo, Philip G

    2013-12-01

    The human visual system employs a sophisticated set of strategies for scanning the environment and directing attention to stimuli that can be expected given the context and a person's past experience. Although these strategies enable us to navigate a very complex physical and social environment, they can also cause highly salient, but unexpected stimuli to go completely unnoticed. To examine the generality of this phenomenon, we conducted eight studies that included 15 different experimental conditions and 1,577 participants in all. These studies revealed that a large majority of participants do not report having seen a woman in the center of an urban scene who was photographed in midair as she was committing suicide. Despite seeing the scene repeatedly, 46% of all participants failed to report seeing a central figure and only 4.8% reported seeing a falling person. Frequency of noticing the suicidal woman was highest for participants who read a narrative priming story that increased the extent to which she was schematically congruent with the scene. In contrast to this robust effect of inattentional blindness, a majority of participants reported seeing other peripheral objects in the visual scene that were equally difficult to detect, yet more consistent with the scene. Follow-up qualitative analyses revealed that participants reported seeing many elements that were not actually present, but which could have been expected given the overall context of the scene. Together, these findings demonstrate the robustness of inattentional blindness and highlight the specificity with which different visual primes may increase noticing behavior.

  10. Influence of semantic consistency and perceptual features on visual attention during scene viewing in toddlers.

    Science.gov (United States)

    Helo, Andrea; van Ommen, Sandrien; Pannasch, Sebastian; Danteny-Dordoigne, Lucile; Rämä, Pia

    2017-11-01

    Conceptual representations of everyday scenes are built in interaction with the visual environment, and these representations guide our visual attention. Perceptual features and object-scene semantic consistency have been found to attract our attention during scene exploration. The present study examined how visual attention in 24-month-old toddlers is attracted by semantic violations and how perceptual features (i.e., saliency, centre distance, clutter and object size) and linguistic properties (i.e., object label frequency and label length) affect gaze distribution. We compared eye movements of 24-month-old toddlers and adults while exploring everyday scenes which contained either an inconsistent (e.g., soap on a breakfast table) or a consistent (e.g., soap in a bathroom) object. Perceptual features such as saliency, centre distance and clutter of the scene affected looking times in the toddler group during the whole viewing time, whereas looking times in adults were affected only by centre distance during the early viewing time. Adults looked longer at inconsistent than consistent objects whether the objects had high or low saliency. In contrast, toddlers showed a semantic consistency effect only when objects were highly salient. Additionally, toddlers with lower vocabulary skills looked longer at inconsistent objects, while toddlers with higher vocabulary skills looked equally long at both consistent and inconsistent objects. Our results indicate that 24-month-old children use scene context to guide visual attention when exploring the visual environment. However, perceptual features have a stronger influence on eye movement guidance in toddlers than in adults. Our results also indicate that language skills influence cognitive but not perceptual guidance of eye movements during scene perception in toddlers. Copyright © 2017 Elsevier Inc. All rights reserved.

  11. The Interplay of Episodic and Semantic Memory in Guiding Repeated Search in Scenes

    Science.gov (United States)

    Vo, Melissa L.-H.; Wolfe, Jeremy M.

    2013-01-01

    It seems intuitive to think that previous exposure or interaction with an environment should make it easier to search through it and, no doubt, this is true in many real-world situations. However, in a recent study, we demonstrated that previous exposure to a scene does not necessarily speed search within that scene. For instance, when observers…

  12. A signal normalization technique for illumination-based synchronization of 1,000-fps real-time vision sensors in dynamic scenes.

    Science.gov (United States)

    Hou, Lei; Kagami, Shingo; Hashimoto, Koichi

    2010-01-01

    To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. In this paper, an illumination-based synchronization derived from the phase-locked loop (PLL) mechanism, based on a signal normalization method, is proposed and evaluated. To eliminate the system dependency due to amplitude fluctuations of the reference illumination, which may be caused by moving objects or by changes in the relative distance between the light source and the observed objects, the fluctuating amplitude of the reference signal is normalized frame by frame using the maximum amplitude estimated from the reference signal and its quadrature counterpart, generating a stable synchronization in highly dynamic scenes. Both simulated and real-world experimental results demonstrated that 1,000-Hz frame-rate vision sensors can be successfully synchronized to an LED illumination or its reflected light with satisfactory stability and only 28-μs jitter.
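
    The normalization idea described here amounts to dividing the detected reference signal by an estimate of its slowly varying amplitude. In the sketch below, a Hilbert-transform envelope stands in for the paper's per-frame maximum-amplitude estimate from the signal and its quadrature counterpart; the sampling rate, frequencies and amplitude model are assumptions.

    ```python
    # Amplitude normalization of a fluctuating reference illumination signal.
    # Illustration only: a Hilbert-transform envelope stands in for the paper's
    # per-frame maximum-amplitude estimate from the signal and its quadrature.
    import numpy as np
    from scipy.signal import hilbert

    fs = 1000.0                                  # assumed sampling rate, Hz
    f_ref = 50.0                                 # assumed reference illumination frequency, Hz
    t = np.arange(0.0, 1.0, 1.0 / fs)

    amplitude = 1.0 + 0.5 * np.sin(2 * np.pi * 0.7 * t)   # slow fluctuation (moving objects)
    ref = amplitude * np.cos(2 * np.pi * f_ref * t)        # detected reference signal

    envelope = np.abs(hilbert(ref))              # quadrature-based amplitude estimate
    normalized = ref / np.maximum(envelope, 1e-9)

    print(round(normalized.min(), 2), round(normalized.max(), 2))  # approx -1 .. 1
    ```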

  13. A Signal Normalization Technique for Illumination-Based Synchronization of 1,000-fps Real-Time Vision Sensors in Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Koichi Hashimoto

    2010-09-01

    Full Text Available To acquire images of dynamic scenes from multiple points of view simultaneously, the acquisition time of vision sensors should be synchronized. In this paper, an illumination-based synchronization derived from the phase-locked loop (PLL) mechanism, based on a signal normalization method, is proposed and evaluated. To eliminate the system dependency due to amplitude fluctuations of the reference illumination, which may be caused by moving objects or by changes in the relative distance between the light source and the observed objects, the fluctuating amplitude of the reference signal is normalized frame by frame using the maximum amplitude estimated from the reference signal and its quadrature counterpart, generating a stable synchronization in highly dynamic scenes. Both simulated and real-world experimental results demonstrated that 1,000-Hz frame-rate vision sensors can be successfully synchronized to an LED illumination or its reflected light with satisfactory stability and only 28-μs jitter.

  14. Tachistoscopic illumination and masking of real scenes.

    Science.gov (United States)

    Chichka, David; Philbeck, John W; Gajewski, Daniel A

    2015-03-01

    Tachistoscopic presentation of scenes has been valuable for studying the emerging properties of visual scene representations. The spatial aspects of this work have generally been focused on the conceptual locations (e.g., next to the refrigerator) and directional locations of objects in 2-D arrays and/or images. Less is known about how the perceived egocentric distance of objects develops. Here we describe a novel system for presenting brief glimpses of a real-world environment, followed by a mask. The system includes projectors with mechanical shutters for projecting the fixation and masking images, a set of LED floodlights for illuminating the environment, and computer-controlled electronics to set the timing and initiate the process. Because a real environment is used, most visual distance and depth cues can be manipulated using traditional methods. The system is inexpensive, robust, and its components are readily available in the marketplace. This article describes the system and the timing characteristics of each component. We verified the system's ability to control exposure to time scales as low as a few milliseconds.

  15. IOT Overview: IR Instruments

    Science.gov (United States)

    Mason, E.

    In this instrument review chapter the calibration plans of ESO IR instruments are presented and briefly reviewed, focusing in particular on the case of ISAAC, which was the first IR instrument at the VLT and whose calibration plan served as a prototype for the subsequent instruments.

  16. Range and intensity vision for rock-scene segmentation

    CSIR Research Space (South Africa)

    Mkwelo, SG

    2007-11-01

    Full Text Available This paper presents another approach to segmenting a scene of rocks on a conveyor belt for the purposes of measuring rock size. Rock size estimation instruments are used to monitor, optimize and control milling and crushing in the mining industry...

  17. Evaluating Color Descriptors for Object and Scene Recognition

    NARCIS (Netherlands)

    van de Sande, K.E.A.; Gevers, T.; Snoek, C.G.M.

    2010-01-01

    Image category recognition is important to access visual information on the level of objects and scene types. So far, intensity-based descriptors have been widely used for feature extraction at salient points. To increase illumination invariance and discriminative power, color descriptors have been

  18. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Science.gov (United States)

    Levy-Gigi, Einat; Kéri, Szabolcs

    2012-01-01

    Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  19. Falling out of time: enhanced memory for scenes presented at behaviorally irrelevant points in time in posttraumatic stress disorder (PTSD).

    Directory of Open Access Journals (Sweden)

    Einat Levy-Gigi

    Full Text Available Spontaneous encoding of the visual environment depends on the behavioral relevance of the task performed simultaneously. If participants identify target letters or auditory tones while viewing a series of briefly presented natural and urban scenes, they demonstrate effective scene recognition only when a target, but not a behaviorally irrelevant distractor, appears together with the scene. Here, we show that individuals with posttraumatic stress disorder (PTSD), who witnessed the red sludge disaster in Hungary, show the opposite pattern of performance: enhanced recognition of scenes presented together with distractors and deficient recognition of scenes presented with targets. The recognition of trauma-related and neutral scenes was not different in individuals with PTSD. We found a positive correlation between memory for scenes presented with auditory distractors and re-experiencing symptoms (memory intrusions and flashbacks). These results suggest that abnormal encoding of visual scenes at behaviorally irrelevant events might be associated with intrusive experiences by disrupting the flow of time.

  20. Atom condensation on an atomically smooth surface: Ir, Re, W, and Pd on Ir(111)

    International Nuclear Information System (INIS)

    Wang, S.C.; Ehrlich, G.

    1991-01-01

    The distribution of condensing metal atoms over the two types of sites present on an atomically smooth Ir(111) surface has been measured in a field ion microscope. For Ir, Re, W, and Pd from a thermal source, condensing on Ir(111) at ∼20 K, the atoms are randomly distributed, as expected if they condense at the first site struck.

  1. Single-nucleotide polymorphism of INS, INSR, IRS1, IRS2, PPAR-G ...

    Indian Academy of Sciences (India)

    2017-03-02

    Mar 2, 2017 ... Abstract. Polycystic ovary syndrome (PCOS) is the most common and complex female endocrine disorder, and is one of the leading causes of female infertility. Here, we aimed to investigate the association of single-nucleotide polymorphisms of the INS, INSR, IRS1, IRS2, PPAR-G and CAPN10 genes in the ...

  2. Using 3D range cameras for crime scene documentation and legal medicine

    Science.gov (United States)

    Cavagnini, Gianluca; Sansoni, Giovanna; Trebeschi, Marco

    2009-01-01

    Crime scene documentation and legal medicine analysis are part of a very complex process which is aimed at identifying the offender starting from the collection of the evidence at the scene. This part of the investigation is very critical, since the crime scene is extremely volatile, and once it is removed, it cannot be precisely recreated. For this reason, the documentation process should be as complete as possible, with minimum invasiveness. The use of optical 3D imaging sensors has been considered as a possible aid to perform the documentation step, since (i) the measurement is contactless and (ii) the process required for editing and modeling the 3D data is quite similar to the reverse engineering procedures originally developed for the manufacturing field. In this paper we show the most important results obtained in the experimentation.

  3. Suppression of superconductivity in Nb by IrMn in IrMn/Nb bilayers

    KAUST Repository

    Wu, B. L.; Yang, Y. M.; Guo, Z. B.; Wu, Y. H.; Qiu, J. J.

    2013-01-01

    Effect of antiferromagnet on superconductivity has been investigated in IrMn/Nb bilayers. Significant suppression of both transition temperature (Tc) and lower critical field (Hc1) of Nb is found in IrMn/Nb bilayers as compared to a single layer Nb

  4. Making a scene: exploring the dimensions of place through Dutch popular music, 1960-2010

    NARCIS (Netherlands)

    Brandellero, A.; Pfeffer, K.

    2015-01-01

    This paper applies a multi-layered conceptualisation of place to the analysis of particular music scenes in the Netherlands, 1960-2010. We focus on: the clustering of music-related activities in locations; the delineation of spatially tied music scenes, based on a shared identity, reproduced over

  5. On formation mechanism of Pd-Ir bimetallic nanoparticles through thermal decomposition of [Pd(NH3)4][IrCl6]

    Science.gov (United States)

    Asanova, Tatyana I.; Asanov, Igor P.; Kim, Min-Gyu; Gerasimov, Evgeny Yu.; Zadesenets, Andrey V.; Plyusnin, Pavel E.; Korenev, Sergey V.

    2013-10-01

    The formation mechanism of Pd-Ir nanoparticles during thermal decomposition of the double complex salt [Pd(NH3)4][IrCl6] has been studied by in situ X-ray absorption (XAFS) and photoelectron (XPS) spectroscopies. The changes in the local structure around Pd and Ir and in the chemical states of the Pd, Ir, Cl, and N atoms were traced in the range from room temperature to 420 °C in an inert atmosphere. It was established that the thermal decomposition process proceeds in five steps. The Pd-Ir nanoparticles form as pyramidal/rounded Pd-rich (10-200 nm) and dendritic Ir-rich (10-50 nm) solid solutions. A d-charge depletion at the Ir site and a gain at the Pd site, as well as an intra-atomic charge redistribution between the outer d and s and p electrons of both Ir and Pd in the Pd-Ir nanoparticles, were found to occur.

  6. On formation mechanism of Pd–Ir bimetallic nanoparticles through thermal decomposition of [Pd(NH3)4][IrCl6]

    International Nuclear Information System (INIS)

    Asanova, Tatyana I.; Asanov, Igor P.; Kim, Min-Gyu; Gerasimov, Evgeny Yu.; Zadesenets, Andrey V.; Plyusnin, Pavel E.; Korenev, Sergey V.

    2013-01-01

    The formation mechanism of Pd–Ir nanoparticles during thermal decomposition of the double complex salt [Pd(NH3)4][IrCl6] has been studied by in situ X-ray absorption (XAFS) and photoelectron (XPS) spectroscopies. The changes in the local structure around Pd and Ir and in the chemical states of the Pd, Ir, Cl, and N atoms were traced in the range from room temperature to 420 °C in an inert atmosphere. It was established that the thermal decomposition process proceeds in five steps. The Pd–Ir nanoparticles form as pyramidal/rounded Pd-rich (10–200 nm) and dendritic Ir-rich (10–50 nm) solid solutions. A d-charge depletion at the Ir site and a gain at the Pd site, as well as an intra-atomic charge redistribution between the outer d and s and p electrons of both Ir and Pd in the Pd–Ir nanoparticles, were found to occur.

  7. HERWIRI1.0: MC realization of IR-improved DGLAP-CS parton showers

    International Nuclear Information System (INIS)

    Joseph, S.; Majhi, S.; Ward, B.F.L.; Yost, S.A.

    2010-01-01

    We present Monte Carlo data showing the comparison between the parton shower generated by the standard Dokshitzer-Gribov-Lipatov-Altarelli-Parisi-Callan-Symanzik (DGLAP-CS) kernels and that generated with the new IR-improved DGLAP-CS kernels recently developed by one of us. We do this in the context of HERWIG6.5 by implementing the new kernels therein to generate a new MC, HERWIRI1.0, for hadron-hadron interactions at high energies. We discuss possible phenomenological implications for precision LHC theory. We also present comparisons with FNAL data.

  8. Distinct signalling properties of insulin receptor substrate (IRS)-1 and IRS-2 in mediating insulin/IGF-1 action

    DEFF Research Database (Denmark)

    Rabiee, Atefeh; Krüger, Marcus; Ardenkjær-Larsen, Jacob

    2018-01-01

    Insulin/IGF-1 action is driven by a complex and highly integrated signalling network. Loss-of-function studies indicate that the major insulin/IGF-1 receptor substrate (IRS) proteins, IRS-1 and IRS-2, mediate different biological functions in vitro and in vivo, suggesting specific signalling...... properties despite their high degree of homology. To identify mechanisms contributing to the differential signalling properties of IRS-1 and IRS-2 in the mediation of insulin/IGF-1 action, we performed comprehensive mass spectrometry (MS)-based phosphoproteomic profiling of brown preadipocytes from wild type......, IRS-1-/- and IRS-2-/- mice in the basal and IGF-1-stimulated states. We applied stable isotope labeling by amino acids in cell culture (SILAC) for the accurate quantitation of changes in protein phosphorylation. We found ~10% of the 6262 unique phosphorylation sites detected to be regulated by IGF-1...

  9. Cortical networks dynamically emerge with the interplay of slow and fast oscillations for memory of a natural scene.

    Science.gov (United States)

    Mizuhara, Hiroaki; Sato, Naoyuki; Yamaguchi, Yoko

    2015-05-01

    Neural oscillations are crucial for revealing dynamic cortical networks and may serve as a mechanism of inter-cortical communication, especially in association with mnemonic function. The interplay of slow and fast oscillations might dynamically coordinate the mnemonic cortical circuits to rehearse stored items during working memory retention. We recorded simultaneous EEG-fMRI during a working memory task involving a natural scene to verify whether cortical networks emerge with these neural oscillations for memory of the natural scene. Slow EEG power was enhanced in association with better accuracy of working memory retention, and accompanied cortical activities in the mnemonic circuits for the natural scene. The fast oscillation showed phase-amplitude coupling to the slow oscillation, and its power was tightly coupled with the cortical activities representing the visual images of natural scenes. The mnemonic cortical circuit with the slow neural oscillations would rehearse the distributed natural scene representations with the fast oscillation for working memory retention. The coincidence of the natural scene representations could be organized by the slow oscillation phase to create a coherent whole of the natural scene in working memory. Copyright © 2015 Elsevier Inc. All rights reserved.
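
    The phase-amplitude coupling reported here is commonly quantified with a mean-vector-length modulation index: the phase of the slow band and the amplitude envelope of the fast band are extracted by Hilbert transform and combined. The sketch below illustrates that generic computation on synthetic data; the frequency bands and the index definition are assumptions, not the authors' analysis pipeline.

    ```python
    # Generic phase-amplitude coupling (mean vector length) on synthetic data.
    # Not the authors' pipeline; frequency bands and the index are illustrative.
    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 250.0
    t = np.arange(0.0, 20.0, 1.0 / fs)
    rng = np.random.default_rng(7)
    slow = np.sin(2 * np.pi * 6.0 * t)                       # theta-like slow rhythm
    fast = (1.0 + slow) * np.sin(2 * np.pi * 60.0 * t)       # gamma amplitude tied to theta phase
    eeg = slow + 0.5 * fast + 0.1 * rng.normal(size=t.size)

    def bandpass(x, lo, hi):
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        return filtfilt(b, a, x)

    phase = np.angle(hilbert(bandpass(eeg, 4.0, 8.0)))        # slow-band phase
    amp = np.abs(hilbert(bandpass(eeg, 50.0, 70.0)))          # fast-band amplitude envelope

    mvl = np.abs(np.mean(amp * np.exp(1j * phase))) / np.mean(amp)
    print(f"modulation index (mean vector length): {mvl:.3f}")
    ```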

  10. How children remember neutral and emotional pictures: boundary extension in children's scene memories.

    Science.gov (United States)

    Candel, Ingrid; Merckelbach, Harald; Houben, Katrijn; Vandyck, Inne

    2004-01-01

    Boundary extension is the tendency to remember more of a scene than was actually shown. The dominant interpretation of this memory illusion is that it originates from schemata that people construct when viewing a scene. Evidence of boundary extension has been obtained primarily with adult participants who remember neutral pictures. The current study addressed the developmental stability of this phenomenon. Therefore, we investigated whether children aged 10-12 years display boundary extension for neutral pictures. Moreover, we examined emotional scene memory. Eighty-seven children drew pictures from memory after they had seen either neutral or emotional pictures. Both their neutral and emotional drawings revealed boundary extension. Apparently, the schema construction that underlies boundary extension is a robust and ubiquitous process.

  11. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education.

    Science.gov (United States)

    Castaldelli-Maia, João Mauricio; Oliveira, Hercílio Pereira; Andrade, Arthur Guerra; Lotufo-Neto, Francisco; Bhugra, Dinesh

    2012-01-01

    Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published, but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). Qualitative study at two universities in the state of São Paulo. Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests, and scenes rated above 8 by the judges were defined as "quality scenes". Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  12. Where’s Wally: The influence of visual salience on referring expression generation

    Directory of Open Access Journals (Sweden)

    Alasdair Daniel Francis Clarke

    2013-06-01

    Full Text Available Referring expression generation (REG) presents the converse problem to visual search: given a scene and a specified target, how does one generate a description which would allow somebody else to quickly and accurately locate the target? Previous work in psycholinguistics and natural language processing that has addressed this question identifies only a limited role for vision in this task. That previous work, which relies largely on simple scenes, tends to treat vision as a pre-process for extracting feature categories that are relevant to disambiguation. However, the visual search literature suggests that some descriptions are better than others at enabling listeners to search efficiently within complex stimuli. This paper presents the results of a study testing whether speakers are sensitive to visual features that allow them to compose such "good" descriptions. Our results show that visual properties (salience, clutter, area, and distance) influence REG for targets embedded in images from the *Where's Wally?* books, which are an order of magnitude more complex than traditional stimuli. Referring expressions for large salient targets are shorter than those for smaller and less salient targets, and targets within highly cluttered scenes are described using more words. We also find that speakers are more likely to mention non-target landmarks that are large, salient, and in close proximity to the target. These findings identify a key role for visual salience in language production decisions and highlight the importance of scene complexity for REG.

  13. Standoff alpha radiation detection for hot cell imaging and crime scene investigation

    Science.gov (United States)

    Kerst, Thomas; Sand, Johan; Ihantola, Sakari; Peräjärvi, Kari; Nicholl, Adrian; Hrnecek, Erich; Toivonen, Harri; Toivonen, Juha

    2018-02-01

    This paper presents the remote detection of alpha contamination in a nuclear facility. Alpha-active material in a shielded nuclear radiation containment chamber has been localized by optical means. Furthermore, sources of radiation danger have been identified in a staged crime scene setting. For this purpose, an electron-multiplying charge-coupled device camera was used to capture photons generated by alpha-induced air scintillation (radioluminescence). The detected radioluminescence was superimposed on a regular photograph to reveal the origin of the light and thereby the alpha radioactive material. The experimental results show that standoff detection of alpha contamination is a viable tool in radiation threat detection. Furthermore, the radioluminescence spectrum in air is analyzed spectrally. Possibilities of camera-based alpha threat detection under various background lighting conditions are discussed.
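
    Superimposing the radioluminescence image on an ordinary photograph, as described above, is essentially an alpha-blended overlay of two co-registered frames. The snippet below sketches that idea with synthetic arrays; the array contents, pseudo-color mapping and blending weight are assumptions.

    ```python
    # Alpha-blended overlay of a (registered) radioluminescence image on a photo.
    # Illustrative only: 'photo' and 'radio' stand in for real, co-registered frames.
    import numpy as np

    rng = np.random.default_rng(0)
    photo = rng.uniform(0.0, 1.0, size=(480, 640, 3))        # stand-in RGB photograph
    radio = np.zeros((480, 640))
    radio[200:220, 300:330] = 1.0                             # bright alpha "hot spot"

    # Normalize the radioluminescence frame and map it to a red pseudo-color layer.
    radio_norm = radio / max(radio.max(), 1e-9)
    pseudo = np.zeros_like(photo)
    pseudo[..., 0] = radio_norm                               # red channel carries the signal

    alpha = 0.6                                               # assumed blending weight
    overlay = np.clip((1.0 - alpha * radio_norm[..., None]) * photo
                      + alpha * radio_norm[..., None] * pseudo, 0.0, 1.0)
    print(overlay.shape, overlay.max())                       # (480, 640, 3), <= 1.0
    ```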

  14. Density functional study of the L10-αIrV transition in IrV and RhV

    International Nuclear Information System (INIS)

    Mehl, Michael J.; Hart, Gus L.W.; Curtarolo, Stefano

    2011-01-01

    Research highlights: → The computational determination of the ground state of a material can be a difficult task, particularly if the ground state is uncommon and so not found in usual databases. In this paper we consider the alpha-IrV structure, a low temperature structure found only in two compounds, IrV and RhV. In both cases this structure can be considered as a distorted tetragonal structure, and the tetragonal 'L10' structure is the high temperature structure for both compounds. We show, however, that the logical path for the transition from the L10 to the alpha-IrV structure is energetically forbidden, and find a series of unstable and metastable structures which have a lower energy than the L10 phase, but are higher in energy than the alpha-IrV phase. We also consider the possibility of the alpha-IrV structure appearing in neighboring compounds. We find that both IrTi and RhTi are candidates. - Abstract: Both IrV and RhV crystallize in the αIrV structure, with a transition to the higher symmetry L10 structure at high temperature, or with the addition of excess Ir or Rh. Here we present evidence that this transition is driven by the lowering of the electronic density of states at the Fermi level of the αIrV structure. The transition has long been thought to be second order, with a simple doubling of the L10 unit cell due to an unstable phonon at the R point (0 1/2 1/2). We use first-principles calculations to show that all phonons at the R point are, in fact, stable, but do find a region of reciprocal space where the L10 structure has unstable (imaginary frequency) phonons. We use the frozen phonon method to examine two of these modes, relaxing the structures associated with the unstable phonon modes to obtain new structures which are lower in energy than L10 but still above αIrV. We examine the phonon spectra of these structures as well, looking for instabilities, and find further instabilities, and more relaxed structures, all of which have

  15. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.

  16. Extracting flat-field images from scene-based image sequences using phase correlation

    Energy Technology Data Exchange (ETDEWEB)

    Caron, James N., E-mail: Caron@RSImd.com [Research Support Instruments, 4325-B Forbes Boulevard, Lanham, Maryland 20706 (United States); Montes, Marcos J. [Naval Research Laboratory, Code 7231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States); Obermark, Jerome L. [Naval Research Laboratory, Code 8231, 4555 Overlook Avenue, SW, Washington, DC 20375 (United States)

    2016-06-15

    Flat-field image processing is an essential step in producing high-quality and radiometrically calibrated images. Flat-fielding corrects for variations in the gain of the focal plane array electronics and for unequal illumination from the system optics. Typically, a flat-field image is captured by imaging a radiometrically uniform surface. The flat-field image is normalized and removed from the images. There are circumstances, such as with remote sensing, where a flat-field image cannot be acquired in this manner. For these cases, we developed a phase-correlation method that allows the extraction of an effective flat-field image from a sequence of scene-based displaced images. The method uses sub-pixel phase correlation image registration to align the sequence and estimate the static scene. The scene is removed from the sequence, producing a sequence of misaligned flat-field images. An average flat-field image is derived from the realigned flat-field sequence.
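
    Phase correlation, the registration step underlying this flat-field method, finds the displacement between two frames as the peak of the inverse FFT of their normalized cross-power spectrum. The sketch below shows that core step at integer-pixel precision on synthetic data; the sub-pixel refinement, scene removal and flat-field averaging of the actual method are omitted.

    ```python
    # Integer-pixel phase correlation between two shifted frames (synthetic data).
    # The full method in the record adds sub-pixel refinement, scene removal and
    # averaging of the realigned flat-field estimates.
    import numpy as np

    rng = np.random.default_rng(1)
    scene = rng.normal(size=(128, 128))
    frame_a = scene
    frame_b = np.roll(scene, shift=(5, -3), axis=(0, 1))      # known displacement

    def phase_correlation_shift(a: np.ndarray, b: np.ndarray) -> tuple:
        """Return the (row, col) shift that maps frame a onto frame b."""
        fa, fb = np.fft.fft2(a), np.fft.fft2(b)
        cross_power = fb * np.conj(fa)
        cross_power /= np.maximum(np.abs(cross_power), 1e-12)
        corr = np.fft.ifft2(cross_power).real
        peak = np.unravel_index(np.argmax(corr), corr.shape)
        # Wrap shifts larger than half the image size to negative values.
        return tuple(int(p - s) if p > s // 2 else int(p) for p, s in zip(peak, corr.shape))

    print(phase_correlation_shift(frame_a, frame_b))           # -> (5, -3)
    ```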

  17. Attosecond pulse trains generated using two color laser fields

    International Nuclear Information System (INIS)

    Mauritsson, J.; Louisiana State University, Baton Rouge, LA; Johnsson, P.; Gustafsson, E.; L'Hullier, A.; Schafer, K.J.; Gaarde, M.B.

    2006-01-01

    Complete text of publication follows. We present the generation of attosecond pulse trains from a superposition of an infrared (IR) laser field and its second harmonic. Our attosecond pulses are synthesized by selecting a number of synchronized harmonics generated in argon. By adding the second harmonic to the driving field, the inversion symmetry of the generation process is broken and both odd and even harmonics are generated. Consecutive half cycles in the two-color field differ beyond the simple sign change that occurs in a one-color field and have very different shapes and amplitudes. This sub-cycle structure of the field, which governs the generation of the attosecond pulses, depends strongly on the relative phase and intensity of the two fields, thereby providing additional control over the generation process. The generation of attosecond pulses is frequently described using the semi-classical three-step model, in which an electron is: (1) ionized through tunneling ionization during one half cycle; (2) reaccelerated back towards the ion core by the next half cycle; and (3) recombines with the ground state, releasing the excess energy in a short burst of light. In the two-color field the symmetry between the ionizing and reaccelerating fields is broken, which leads to two possible scenarios: the electron can either be ionized during a strong half cycle and reaccelerated by a weaker field, or vice versa. The periodicity is a full IR cycle in both cases and hence two trains of attosecond pulses are generated which are offset from each other. The generation efficiency, however, is very different for the two cases, since it is determined mainly by the electric field strength at the time of tunneling, and one of the trains will therefore dominate the other. We investigate experimentally both the spectral and temporal structure of the generated attosecond pulse trains as a function of the relative phase between the two driving fields. We find that for a wide range of
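
    The broken half-cycle symmetry described above can be visualized by composing an IR carrier with its second harmonic at a chosen relative phase. The snippet below does this with made-up amplitudes; the 800 nm wavelength, amplitude ratio and phase are illustrative assumptions, and varying the relative phase changes how strongly adjacent half cycles differ.

    ```python
    # Two-color driving field E(t) = E1*cos(w t) + E2*cos(2 w t + phi).
    # Adjacent half cycles now differ in shape and peak amplitude, which breaks
    # the half-cycle symmetry of the one-color case. Parameters are illustrative,
    # not taken from the experiment.
    import numpy as np

    wavelength_m = 800e-9                       # assumed IR wavelength
    c = 2.998e8
    omega = 2 * np.pi * c / wavelength_m        # IR angular frequency
    t = np.linspace(-5e-15, 5e-15, 2001)        # a few femtoseconds around t = 0

    E1, E2, phi = 1.0, 0.3, 0.0                 # assumed relative amplitude and phase
    field = E1 * np.cos(omega * t) + E2 * np.cos(2 * omega * t + phi)

    # Compare the strongest positive and negative excursions: unequal half cycles.
    print(f"max +E = {field.max():.3f}, max -E = {field.min():.3f}")
    ```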

  18. The perception of naturalness correlates with low-level visual features of environmental scenes.

    Directory of Open Access Journals (Sweden)

    Marc G Berman

    Full Text Available Previous research has shown that interacting with natural environments vs. more urban or built environments can have salubrious psychological effects, such as improvements in attention and memory. Even viewing pictures of nature vs. pictures of built environments can produce similar effects. A major question is: What is it about natural environments that produces these benefits? Problematically, there are many differing qualities between natural and urban environments, making it difficult to narrow down the dimensions of nature that may lead to these benefits. In this study, we set out to uncover visual features that related to individuals' perceptions of naturalness in images. We quantified naturalness in two ways: first, implicitly using a multidimensional scaling analysis and second, explicitly with direct naturalness ratings. Features that seemed most related to perceptions of naturalness were related to the density of contrast changes in the scene, the density of straight lines in the scene, the average color saturation in the scene and the average hue diversity in the scene. We then trained a machine-learning algorithm to predict whether a scene was perceived as being natural or not based on these low-level visual features and we could do so with 81% accuracy. As such we were able to reliably predict subjective perceptions of naturalness with objective low-level visual features. Our results can be used in future studies to determine if these features, which are related to naturalness, may also lead to the benefits attained from interacting with nature.
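
    The final prediction step reported above, classifying scenes as perceived-natural or not from four low-level features, corresponds to fitting an ordinary binary classifier on a small feature matrix. The sketch below uses logistic regression on synthetic feature values; the specific algorithm and numbers are assumptions, as the record only states that a machine-learning classifier reached 81% accuracy.

    ```python
    # Predicting "natural vs. built" labels from four low-level image features:
    # contrast-change density, straight-line density, mean saturation, hue diversity.
    # Synthetic data and logistic regression are stand-ins for the paper's classifier.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(42)
    n = 200
    natural = rng.integers(0, 2, size=n)                      # 1 = perceived as natural
    features = np.column_stack([
        rng.normal(0.5 + 0.2 * natural, 0.1),                 # contrast-change density
        rng.normal(0.5 - 0.2 * natural, 0.1),                 # straight-line density
        rng.normal(0.4 + 0.1 * natural, 0.1),                 # mean color saturation
        rng.normal(0.4 + 0.1 * natural, 0.1),                 # hue diversity
    ])

    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_score(clf, features, natural, cv=5)
    print(f"cross-validated accuracy: {scores.mean():.2f}")
    ```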

  19. Attention in natural scenes: Affective-motivational factors guide gaze independently of visual salience.

    Science.gov (United States)

    Schomaker, Judith; Walper, Daniel; Wittmann, Bianca C; Einhäuser, Wolfgang

    2017-04-01

    In addition to low-level stimulus characteristics and current goals, our previous experience with stimuli can also guide attentional deployment. It remains unclear, however, if such effects act independently or whether they interact in guiding attention. In the current study, we presented natural scenes including every-day objects that differed in affective-motivational impact. In the first free-viewing experiment, we presented visually-matched triads of scenes in which one critical object was replaced that varied mainly in terms of motivational value, but also in terms of valence and arousal, as confirmed by ratings by a large set of observers. Treating motivation as a categorical factor, we found that it affected gaze. A linear-effect model showed that arousal, valence, and motivation predicted fixations above and beyond visual characteristics, like object size, eccentricity, or visual salience. In a second experiment, we experimentally investigated whether the effects of emotion and motivation could be modulated by visual salience. In a medium-salience condition, we presented the same unmodified scenes as in the first experiment. In a high-salience condition, we retained the saturation of the critical object in the scene, and decreased the saturation of the background, and in a low-salience condition, we desaturated the critical object while retaining the original saturation of the background. We found that highly salient objects guided gaze, but still found additional additive effects of arousal, valence and motivation, confirming that higher-level factors can also guide attention, as measured by fixations towards objects in natural scenes. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Effect of Viewing Smoking Scenes in Motion Pictures on Subsequent Smoking Desire in Audiences in South Korea.

    Science.gov (United States)

    Sohn, Minsung; Jung, Minsoo

    2017-07-17

    In the modern era of heightened awareness of public health, smoking scenes in movies remain relatively free from public monitoring. The effect of smoking scenes in movies on the promotion of viewers' smoking desire remains unknown. The study aimed to explore whether exposure of adolescent smokers to images of smoking in films could stimulate smoking behavior. Data were derived from a national Web-based sample survey of 748 Korean high-school students. Participants aged 16-18 years were randomly assigned to watch three short video clips with or without smoking scenes. After adjusting covariates using propensity score matching, paired sample t tests and logistic regression analyses compared the difference in smoking desire before and after exposure of participants to smoking scenes. For male adolescents, cigarette craving was significantly higher in those who watched movies with smoking scenes than in the control group who did not view smoking scenes (t307.96 = 2.066, P < .05). These findings suggest that monitoring smoking scenes in films and assigning a smoking-related screening grade to films is warranted. ©Minsung Sohn, Minsoo Jung. Originally published in JMIR Public Health and Surveillance (http://publichealth.jmir.org), 17.07.2017.

  1. Accident or homicide--virtual crime scene reconstruction using 3D methods.

    Science.gov (United States)

    Buck, Ursula; Naether, Silvio; Räss, Beat; Jackowski, Christian; Thali, Michael J

    2013-02-10

    The analysis and reconstruction of forensically relevant events, such as traffic accidents, criminal assaults and homicides, are based on external and internal morphological findings of the injured or deceased person. For this approach high-tech methods are gaining increasing importance in forensic investigations. The non-contact optical 3D digitising system GOM ATOS is applied as a suitable tool for whole body surface and wound documentation and analysis in order to identify injury-causing instruments and to reconstruct the course of events. In addition to the surface documentation, cross-sectional imaging methods deliver medical internal findings of the body. These 3D data are fused into a whole body model of the deceased. In addition to the findings from the bodies, the injury-inflicting instruments and the incident scene are documented in 3D. The 3D data of the incident scene, generated by 3D laser scanning and photogrammetry, are also included in the reconstruction. Two cases illustrate the methods. In the first case a man was shot in his bedroom and the main question was whether the offender shot the man intentionally or accidentally, as he declared. In the second case a woman was hit by a car driving backwards into a garage. It was unclear if the driver drove backwards once or twice, which would indicate that he willingly injured and killed the woman. With this work, we demonstrate how 3D documentation, data merging and animation make it possible to answer reconstructive questions regarding the dynamic development of patterned injuries, and how this leads to a real data based reconstruction of the course of events. Copyright © 2012 Elsevier Ireland Ltd. All rights reserved.

  2. Position-Invariant Robust Features for Long-Term Recognition of Dynamic Outdoor Scenes

    Science.gov (United States)

    Kawewong, Aram; Tangruamsub, Sirinart; Hasegawa, Osamu

    A novel Position-Invariant Robust Feature, designated as PIRF, is presented to address the problem of highly dynamic scene recognition. A PIRF is obtained by identifying existing local features (i.e. SIFT) that have wide-baseline visibility within a place (one place comprises more than one sequential image). These wide-baseline visible features are then represented as a single PIRF, which is computed as the average of all descriptors associated with the PIRF. Notably, PIRFs are robust against highly dynamic changes in a scene: a single PIRF can be matched correctly against many features from many dynamic images. This paper also describes an approach to using these features for scene recognition. Recognition proceeds by matching individual PIRFs to a set of features from test images, with subsequent majority voting to identify the place with the most matched PIRFs. The PIRF system is trained and tested on 2000+ outdoor omnidirectional images and on the COLD datasets. Despite its simplicity, PIRF offers a markedly better rate of recognition for dynamic outdoor scenes (ca. 90%) than the use of other features. Additionally, a robot navigation system based on PIRF (PIRF-Nav) can outperform other incremental topological mapping methods in terms of time (70% less) and memory. The number of PIRFs can be reduced further to reduce the time while retaining high accuracy, which makes the method suitable for long-term recognition and localization.
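
    To make the averaging step concrete, the following Python/NumPy sketch (illustrative only, not the authors' implementation) collapses the SIFT descriptors of one feature tracked across the sequential images of a place into a single PIRF descriptor; the matching that establishes the wide-baseline track is assumed to have been done already, and the final re-normalization is an added assumption.

        import numpy as np

        def compute_pirf(tracked_descriptors):
            """Average the SIFT descriptors of one feature that stayed visible
            across all images of a place (its wide-baseline track).

            tracked_descriptors: array of shape (n_images, 128), one SIFT
            descriptor per image for the same physical feature.
            Returns a single 128-D PIRF descriptor.
            """
            d = np.asarray(tracked_descriptors, dtype=np.float32)
            pirf = d.mean(axis=0)
            # Re-normalize so the PIRF can be compared with ordinary SIFT
            # descriptors by Euclidean distance (assumption, not from the paper).
            norm = np.linalg.norm(pirf)
            return pirf / norm if norm > 0 else pirf

        # Usage: descriptors of one feature matched across 5 sequential images.
        track = np.random.rand(5, 128).astype(np.float32)
        print(compute_pirf(track).shape)  # (128,)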

  3. The IRS-1 signaling system.

    Science.gov (United States)

    Myers, M G; Sun, X J; White, M F

    1994-07-01

    Insulin-receptor substrate 1 (IRS-1) is a principal substrate of the receptor tyrosine kinase for insulin and insulin-like growth factor 1, and a substrate for a tyrosine kinase activated by interleukin 4. IRS-1 undergoes multisite tyrosine phosphorylation and mediates downstream signals by 'docking' various proteins that contain Src homology 2 domains. IRS-1 appears to be a unique molecule; however, 4PS, a protein found mainly in hemopoietic cells, may represent another member of this family.

  4. pH Mapping on Tooth Surfaces for Quantitative Caries Diagnosis Using Micro Ir/IrOx pH Sensor.

    Science.gov (United States)

    Ratanaporncharoen, Chindanai; Tabata, Miyuki; Kitasako, Yuichi; Ikeda, Masaomi; Goda, Tatsuro; Matsumoto, Akira; Tagami, Junji; Miyahara, Yuji

    2018-04-03

    A quantitative diagnostic method for dental caries would improve oral health, which directly affects the quality of life. Here we describe the preparation and application of Ir/IrOx pH sensors, which are used to measure the surface pH of dental caries. The pH level is used as an indicator to distinguish between active and arrested caries. After a dentist visually inspected and classified 18 extracted dentinal caries at various positions as active or arrested caries, the surface pH values of sound and carious areas were measured directly with a 300 μm diameter Ir/IrOx pH sensor used in the manner of a dental explorer. The average pH values of the sound root, the arrested caries, and the active caries were 6.85, 6.07, and 5.30, respectively. The pH obtained with the Ir/IrOx sensor was highly correlated with the dentist's inspection results, indicating that the types of caries were successfully categorized. This caries testing technique using a micro Ir/IrOx pH sensor provides an accurate quantitative caries evaluation and has potential in clinical diagnosis.

  5. The probability of object-scene co-occurrence influences object identification processes.

    Science.gov (United States)

    Sauvé, Geneviève; Harmand, Mariane; Vanni, Léa; Brodeur, Mathieu B

    2017-07-01

    Contextual information allows the human brain to make predictions about the identity of objects that might be seen, and irregularities between an object and its background slow down perception and identification processes. Bar and colleagues modeled the mechanisms underlying this beneficial effect, suggesting that the brain stores information about the statistical regularities of object and scene co-occurrence. Their model suggests that these recurring regularities could be conceptualized along a continuum in which the probability of seeing an object within a given scene can be high (probable condition), moderate (improbable condition) or null (impossible condition). In the present experiment, we propose to disentangle the electrophysiological correlates of these context effects by directly comparing object-scene pairs found along this continuum. We recorded the event-related potentials of 30 healthy participants (18-34 years old) and analyzed their brain activity in three time windows associated with context effects. We observed anterior negativities between 250 and 500 ms after object onset for the improbable and impossible conditions (improbable more negative than impossible) compared to the probable condition, as well as a parieto-occipital positivity (improbable more positive than impossible). The brain may use different processing pathways to identify objects depending on whether the probability of co-occurrence with the scene is moderate (relying more on top-down effects) or null (relying more on bottom-up influences). The posterior positivity could index error monitoring aimed at ensuring that no false information is integrated into mental representations of the world.

  6. HOM [higher order mode] losses at the IR [interaction region] of the B-factory

    International Nuclear Information System (INIS)

    Heifets, S.

    1990-08-01

    Masking at the interaction region (IR) will presumably reduce the synchrotron radiation background in the detector. One possible layout of the IR for the B-factory shows a rather complicated system of masks. A bunch passing each mask will generate RF waves. These waves (usually called higher order modes, HOMs) will be absorbed in the beam pipe wall, producing additional heating, and, interacting with the beam, will kick particles in the radial and azimuthal directions. This may change the bunch motion and its emittance. These effects are estimated in the present note.

  7. Semantic Categorization Precedes Affective Evaluation of Visual Scenes

    Science.gov (United States)

    Nummenmaa, Lauri; Hyona, Jukka; Calvo, Manuel G.

    2010-01-01

    We compared the primacy of affective versus semantic categorization by using forced-choice saccadic and manual response tasks. Participants viewed paired emotional and neutral scenes involving humans or animals flashed rapidly in extrafoveal vision. Participants were instructed to categorize the targets by saccading toward the location occupied by…

  8. Analysis of the development of missile-borne IR imaging detecting technologies

    Science.gov (United States)

    Fan, Jinxiang; Wang, Feng

    2017-10-01

    Today's infrared imaging guiding missiles face many challenges. With the development of target stealth, new-style IR countermeasures and penetration technologies, as well as the growing complexity of operational environments, infrared imaging guiding missiles must meet higher requirements for efficient target detection, resistance to interference and jamming, and operational adaptability in complex, dynamic operating environments. Missile-borne infrared imaging detecting systems are constrained by practical considerations like cost, size, weight and power (SWaP), and lifecycle requirements. Future-generation infrared imaging guiding missiles need to be resilient to changing operating environments and capable of doing more with fewer resources. Advanced IR imaging detecting and information exploring technologies are the key technologies that affect the future direction of IR imaging guidance missiles. Research on infrared imaging detecting and information exploring technologies will support the development of more robust and efficient missile-borne infrared imaging detecting systems. Novel IR imaging technologies, such as infrared adaptive spectral imaging, are key to effectively detecting, recognizing and tracking targets under complicated operating and countermeasure environments. Innovative techniques for exploiting the information on targets, background and countermeasures provided by the detection system are the basis for a missile to recognize targets and to counter interference, jamming and countermeasures. Modular hardware and software development is the enabler for implementing multi-purpose, multi-function solutions. Uncooled IRFPA detectors and high-operating-temperature IRFPA detectors, as well as commercial-off-the-shelf (COTS) technology, will support the implementation of low-cost infrared imaging guiding missiles. In this paper, the current status and features of missile-borne IR imaging detecting technologies are summarized. The key

  9. The Effect of Scene Variation on the Redundant Use of Color in Definite Reference

    Science.gov (United States)

    Koolen, Ruud; Goudbeek, Martijn; Krahmer, Emiel

    2013-01-01

    This study investigates to what extent the amount of variation in a visual scene causes speakers to mention the attribute color in their definite target descriptions, focusing on scenes in which this attribute is not needed for identification of the target. The results of our three experiments show that speakers are more likely to redundantly…

  10. Deep Salient Feature Based Anti-Noise Transfer Network for Scene Classification of Remote Sensing Imagery

    Directory of Open Access Journals (Sweden)

    Xi Gong

    2018-03-01

    Full Text Available Remote sensing (RS) scene classification is important for RS imagery semantic interpretation. Although tremendous strides have been made in RS scene classification, one of the remaining open challenges is recognizing RS scenes under quality variations (e.g., various scales and noise conditions). This paper proposes a deep salient feature based anti-noise transfer network (DSFATN) method that effectively enhances and explores the high-level features for RS scene classification at different scales and under different noise conditions. In DSFATN, a novel discriminative deep salient feature (DSF) is introduced by saliency-guided DSF extraction, which conducts a patch-based visual saliency (PBVS) algorithm using “visual attention” mechanisms to guide pre-trained CNNs in producing the discriminative high-level features. Then, an anti-noise network is proposed to learn and enhance the robust and anti-noise structure information of the RS scene by directly propagating the label information to the fully-connected layers. The anti-noise network is trained by minimizing a joint loss that integrates an anti-noise constraint and a softmax classification loss. The proposed network architecture can be easily trained with a limited amount of training data. The experiments conducted on three RS scene datasets of different scales show that the DSFATN method achieves excellent performance and great robustness under different scales and noise conditions. It obtains classification accuracies of 98.25%, 98.46%, and 98.80%, respectively, on the UC Merced Land Use Dataset (UCM), the Google image dataset of SIRI-WHU, and the SAT-6 dataset, advancing the state-of-the-art substantially.
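
    For readers who want to see what such a joint loss can look like in code, the following PyTorch sketch is a hypothetical illustration (not the published DSFATN implementation): it combines a softmax classification loss with an anti-noise consistency term that ties the fully-connected features of a noise-corrupted scene to those of its clean counterpart, which is one plausible reading of the constraint described above.

        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class JointAntiNoiseLoss(nn.Module):
            """Joint loss = softmax classification loss + weight * anti-noise term.

            The anti-noise term penalizes the distance between features of a
            noisy input and its clean counterpart (an assumed form of the
            constraint, chosen here for illustration).
            """
            def __init__(self, weight=0.5):
                super().__init__()
                self.weight = weight

            def forward(self, logits_noisy, feat_noisy, feat_clean, labels):
                cls_loss = F.cross_entropy(logits_noisy, labels)  # softmax classification loss
                anti_noise = F.mse_loss(feat_noisy, feat_clean)   # anti-noise consistency
                return cls_loss + self.weight * anti_noise

        # Toy usage with random tensors standing in for network outputs.
        criterion = JointAntiNoiseLoss(weight=0.5)
        logits = torch.randn(8, 21, requires_grad=True)       # 8 samples, 21 scene classes
        feat_noisy = torch.randn(8, 256, requires_grad=True)  # features of noisy patches
        feat_clean = torch.randn(8, 256)                      # features of clean patches
        labels = torch.randint(0, 21, (8,))
        loss = criterion(logits, feat_noisy, feat_clean, labels)
        loss.backward()  # in a real training loop this would update the network
        print(float(loss))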

  11. Using selected scenes from Brazilian films to teach about substance use disorders, within medical education

    Directory of Open Access Journals (Sweden)

    João Mauricio Castaldelli-Maia

    Full Text Available CONTEXT AND OBJECTIVES: Themes like alcohol and drug abuse, relationship difficulties, psychoses, autism and personality dissociation disorders have been widely used in films. Psychiatry and psychiatric conditions in various cultural settings are increasingly taught using films. Many articles on cinema and psychiatry have been published, but none have presented any methodology on how to select material. Here, the authors look at the portrayal of abusive use of alcohol and drugs during the Brazilian cinema revival period (1994 to 2008). DESIGN AND SETTING: Qualitative study at two universities in the state of São Paulo. METHODS: Scenes were selected from films available at rental stores and were analyzed using a specifically designed protocol. We assessed how realistic these scenes were and their applicability for teaching. One author selected 70 scenes from 50 films (graded for realism and teaching applicability > 8). These were then rated by another two judges. Rating differences among the three judges were assessed using nonparametric tests (P < 0.05). Scenes graded > 8 were defined as "quality scenes". RESULTS: Thirty-nine scenes from 27 films were identified as "quality scenes". Alcohol, cannabis, cocaine, hallucinogens and inhalants were included in these. Signs and symptoms of intoxication, abusive/harmful use and dependence were shown. CONCLUSIONS: We have produced rich teaching material for discussing psychopathology relating to alcohol and drug use that can be used both at undergraduate and at postgraduate level. Moreover, it could be seen that certain drug use behavioral patterns are deeply rooted in some Brazilian films and groups.

  12. Near-Space TOPSAR Large-Scene Full-Aperture Imaging Scheme Based on Two-Step Processing

    Directory of Open Access Journals (Sweden)

    Qianghui Zhang

    2016-07-01

    Full Text Available Free of the constraints of orbital mechanics, weather conditions and minimum antenna area, synthetic aperture radar (SAR) carried on a near-space platform is more suitable for sustained large-scene imaging than its spaceborne and airborne counterparts. Terrain observation by progressive scans (TOPS), which is a novel wide-swath imaging mode that allows the beam of the SAR to scan along the azimuth, can reduce the time of echo acquisition for a large scene. Thus, near-space TOPS-mode SAR (NS-TOPSAR) provides a new opportunity for sustained large-scene imaging. An efficient full-aperture imaging scheme for NS-TOPSAR is proposed in this paper. In this scheme, two-step processing (TSP) is first adopted to eliminate the Doppler aliasing of the echo. Then, the data is focused in the two-dimensional frequency domain (FD) based on Stolt interpolation. Finally, a modified TSP (MTSP) is performed to remove the azimuth aliasing. Simulations are presented to demonstrate the validity of the proposed imaging scheme for near-space large-scene imaging applications.

  13. Scene data fusion: Real-time standoff volumetric gamma-ray imaging

    Energy Technology Data Exchange (ETDEWEB)

    Barnowski, Ross [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Haefner, Andrew; Mihailescu, Lucian [Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States); Vetter, Kai [Department of Nuclear Engineering, UC Berkeley, 4155 Etcheverry Hall, MC 1730, Berkeley, CA 94720, United States of America (United States); Lawrence Berkeley National Lab - Applied Nuclear Physics, 1 Cyclotron Road, Berkeley, CA 94720, United States of America (United States)

    2015-11-11

    An approach to gamma-ray imaging has been developed that enables near real-time volumetric (3D) imaging of unknown environments, thus improving the utility of gamma-ray imaging for source-search and radiation mapping applications. The approach, herein dubbed scene data fusion (SDF), is based on integrating mobile radiation imagers with real-time tracking and scene reconstruction algorithms to enable a mobile mode of operation and 3D localization of gamma-ray sources. A 3D model of the scene, provided in real-time by a simultaneous localization and mapping (SLAM) algorithm, is incorporated into the image reconstruction, reducing the reconstruction time and improving imaging performance. The SDF concept is demonstrated in this work with a Microsoft Kinect RGB-D sensor, a real-time SLAM solver, and a cart-based Compton imaging platform comprised of two 3D position-sensitive high purity germanium (HPGe) detectors. An iterative algorithm based on Compton kinematics is used to reconstruct the gamma-ray source distribution in all three spatial dimensions. SDF advances the real-world applicability of gamma-ray imaging for many search, mapping, and verification scenarios by improving the tractability of the gamma-ray image reconstruction and providing context for the 3D localization of gamma-ray sources within the environment in real-time.

  14. Integration and segregation in auditory scene analysis

    Science.gov (United States)

    Sussman, Elyse S.

    2005-03-01

    Assessment of the neural correlates of auditory scene analysis, using an index of sound change detection that does not require the listener to attend to the sounds [a component of event-related brain potentials called the mismatch negativity (MMN)], has previously demonstrated that segregation processes can occur without attention focused on the sounds and that within-stream contextual factors influence how sound elements are integrated and represented in auditory memory. The current study investigated the relationship between the segregation and integration processes when they were called upon to function together. The pattern of MMN results showed that the integration of sound elements within a sound stream occurred after the segregation of sounds into independent streams and, further, that the individual streams were subject to contextual effects. These results are consistent with a view of auditory processing that suggests that the auditory scene is rapidly organized into distinct streams and the integration of sequential elements into perceptual units takes place on the already formed streams. This would allow for the flexibility required to identify changing within-stream sound patterns, needed to appreciate music or comprehend speech.

  15. Registration of eye reflection and scene images using an aspherical eye model.

    Science.gov (United States)

    Nakazawa, Atsushi; Nitschke, Christian; Nishida, Toyoaki

    2016-11-01

    This paper introduces an image registration algorithm between an eye reflection and a scene image. Although there are currently a large number of image registration algorithms, this task remains difficult due to nonlinear distortions at the eye surface and large amounts of noise, such as iris texture, eyelids, eyelashes, and their shadows. To overcome this issue, we developed an image registration method combining an aspherical eye model that simulates nonlinear distortions considering eye geometry and a two-step iterative registration strategy that obtains dense correspondence of the feature points to achieve accurate image registrations for the entire image region. We obtained a database of eye reflection and scene images featuring four subjects in indoor and outdoor scenes and compared the registration performance with different asphericity conditions. Results showed that the proposed approach can perform accurate registration with an average accuracy of 1.05 deg by using the aspherical cornea model. This work is relevant for eye image analysis in general, enabling novel applications and scenarios.

  16. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes.

    Science.gov (United States)

    Pasqualotto, Achille; Finucane, Ciara M; Newell, Fiona N

    2013-09-01

    We investigated the effects of indirect, ambient visual information on haptic spatial memory. Using touch only, participants first learned an array of objects arranged in a scene and were subsequently tested on their recognition of that scene which was always hidden from view. During haptic scene exploration, participants could either see the surrounding room or were blindfolded. We found a benefit in haptic memory performance only when ambient visual information was available in the early stages of the task but not when participants were initially blindfolded. Specifically, when ambient visual information was available a benefit on performance was found in a subsequent block of trials during which the participant was blindfolded (Experiment 1), and persisted over a delay of one week (Experiment 2). However, we found that the benefit for ambient visual information did not transfer to a novel environment (Experiment 3). In Experiment 4 we further investigated the nature of the visual information that improved haptic memory and found that geometric information about a surrounding (virtual) room rather than isolated object landmarks, facilitated haptic scene memory. Our results suggest that vision improves haptic memory for scenes by providing an environment-centred, allocentric reference frame for representing object location through touch. Copyright © 2013 Elsevier B.V. All rights reserved.

  17. Panoramic Search: The Interaction of Memory and Vision in Search through a Familiar Scene

    Science.gov (United States)

    Oliva, Aude; Wolfe, Jeremy M.; Arsenio, Helga C.

    2004-01-01

    How do observers search through familiar scenes? A novel panoramic search method is used to study the interaction of memory and vision in natural search behavior. In panoramic search, observers see part of an unchanging scene larger than their current field of view. A target object can be visible, present in the display but hidden from view, or…

  18. Combination of Morphological Operations with Structure based Partitioning and grouping for Text String detection from Natural Scenes

    OpenAIRE

    Vyankatesh V. Rampurkar; Gyankamal J. Chhajed

    2014-01-01

    Text information in natural scene images serves as an important clue for many image-based applications such as scene perception, content-based image retrieval, assistive direction-finding and automatic geocoding. Nowadays, different approaches such as contour-based, image binarization and enhancement-based, gradient-based and colour-reduction-based techniques can be used for text detection in natural scenes. In this paper the combination of morphological operations with structure based part...

  19. Validation of a single summary score for the Prolapse/Incontinence Sexual Questionnaire-IUGA revised (PISQ-IR).

    Science.gov (United States)

    Constantine, Melissa L; Pauls, Rachel N; Rogers, Rebecca R; Rockwood, Todd H

    2017-12-01

    The Prolapse/Incontinence Sexual Questionnaire-International Urogynecology Association (IUGA) Revised (PISQ-IR) measures sexual function in women with pelvic floor disorders (PFDs) yet is unwieldy, with six individual subscale scores for sexually active women and four for women who are not. We hypothesized that a valid and responsive summary score could be created for the PISQ-IR. Item response data from participating women who completed a revised version of the PISQ-IR at three clinical sites were used to generate item weights using magnitude estimation (ME) and Q-sort (Q) approaches. Item weights were applied to data from the original PISQ-IR validation to generate summary scores. Correlation and factor analysis methods were used to evaluate the validity and responsiveness of the summary scores. Weighted and nonweighted summary scores for the sexually active PISQ-IR demonstrated good criterion validity with condition-specific measures: Incontinence Severity Index = 0.12, 0.11, 0.11; Pelvic Floor Distress Inventory-20 = 0.39, 0.39, 0.12; Epidemiology of Prolapse and Incontinence Questionnaire-Q35 = 0.26, 0.25, 0.40; Female Sexual Functioning Index subscale total score = 0.72, 0.75, 0.72 for nonweighted, ME, and Q summary scores, respectively. Responsiveness evaluation showed that weighted and nonweighted summary scores detected moderate effect sizes (Cohen's d > 0.5). Weighted items for women who were not sexually active demonstrated significant floor effects and did not meet criterion validity. A PISQ-IR summary score for use with sexually active women, nonweighted or calculated with ME or Q item weights, is a valid and reliable measure for clinical use. The summary scores provide value for assessing the clinical treatment of pelvic floor disorders.

  20. Successful synthesis and thermal stability of immiscible metal Au-Rh, Au-Ir andAu-Ir-Rh nanoalloys

    Science.gov (United States)

    Shubin, Yury; Plyusnin, Pavel; Sharafutdinov, Marat; Makotchenko, Evgenia; Korenev, Sergey

    2017-05-01

    We successfully prepared face-centred cubic nanoalloys in systems of Au-Ir, Au-Rh and Au-Ir-Rh, with large bulk miscibility gaps, in one-run reactions under thermal decomposition of specially synthesised single-source precursors, namely, [AuEn2][Ir(NO2)6], [AuEn2][Ir(NO2)6]x[Rh(NO2)6]1-x and [AuEn2][Rh(NO2)6]. The precursors employed contain all desired metals ‘mixed’ at the atomic level, thus providing significant advantages for obtaining alloys. The observations using high-resolution transmission electron microscopy show that the nanoalloy structures are composed of well-dispersed aggregates of crystalline domains with a mean size of 5 ± 3 nm. Energy dispersive x-ray spectroscopy and x-ray powder diffraction (XRD) measurements confirm the formation of AuIr, AuRh, AuIr0.75Rh0.25, AuIr0.50Rh0.50 and AuIr0.25Rh0.75 metastable solid solutions. In situ high-temperature synchrotron XRD (HTXRD) was used to study the formation mechanism of nanoalloys. The observed transformations are described by the ‘conversion chemistry’ mechanism characterised by the primary development of particles comprising atoms of only one type, followed by a chemical reaction resulting in the final formation of a nanoalloy. The obtained metastable nanoalloys exhibit essential thermal stability. Exposure to 180 °C for 30 h does not cause any dealloying process.

  1. Characteristics of Ir/Au transition edge sensor

    International Nuclear Information System (INIS)

    Kunieda, Yuichi; Ohno, Masashi; Nakazawa, Masaharu; Takahashi, Hiroyuki; Fukuda, Daiji; Ohkubo, Masataka

    2004-01-01

    A new type of microcalorimeter has been developed using a transition edge sensor (TES) and an electro-thermal feedback (ETF) method to achieve higher energy resolution and a higher count rate. We are developing superconducting Ir-based TES microcalorimeters. To improve thermal conductivity and achieve higher energy resolution with an Ir-TES, we fabricated an Ir/Au bilayer TES by depositing gold on Ir and, by microscopic observation of the Ir/Au-TES, investigated how intermediate states between the superconducting and normal states at the transition edge influence the signal response. (T. Tanaka)

  2. Video Scene Parsing with Predictive Feature Learning

    OpenAIRE

    Jin, Xiaojie; Li, Xin; Xiao, Huaxin; Shen, Xiaohui; Lin, Zhe; Yang, Jimei; Chen, Yunpeng; Dong, Jian; Liu, Luoqi; Jie, Zequn; Feng, Jiashi; Yan, Shuicheng

    2016-01-01

    In this work, we address the challenging video scene parsing problem by developing effective representation learning methods given limited parsing annotations. In particular, we contribute two novel methods that constitute a unified parsing framework. (1) Predictive feature learning from nearly unlimited unlabeled video data. Different from existing methods learning features from single frame parsing, we learn spatiotemporal discriminative features by enforcing a parsing network to ...

  3. How context information and target information guide the eyes from the first epoch of search in real-world scenes.

    Science.gov (United States)

    Spotorno, Sara; Malcolm, George L; Tatler, Benjamin W

    2014-02-11

    This study investigated how the visual system utilizes context and task information during the different phases of a visual search task. The specificity of the target template (the picture or the name of the target) and the plausibility of target position in real-world scenes were manipulated orthogonally. Our findings showed that both target template information and guidance of spatial context are utilized to guide eye movements from the beginning of scene inspection. In both search initiation and subsequent scene scanning, the availability of a specific visual template was particularly useful when the spatial context of the scene was misleading and the availability of a reliable scene context facilitated search mainly when the template was abstract. Target verification was affected principally by the level of detail of target template, and was quicker in the case of a picture cue. The results indicate that the visual system can utilize target template guidance and context guidance flexibly from the beginning of scene inspection, depending upon the amount and the quality of the available information supplied by either of these high-level sources. This allows for optimization of oculomotor behavior throughout the different phases of search within a real-world scene.

  4. Conflicts among younger- and older-grade schoolchildren and the characteristics of their resolution (original title in Lithuanian: Jaunesnių ir vyresnių klasių mokinių konfliktų ir jų sprendimų ypatumai)

    OpenAIRE

    Stočkutė, Jovita

    2012-01-01

    The object of the study is the conflicts of younger- and older-grade schoolchildren and the characteristics of their resolution. The aim of the study is to analyse the conflicts of younger- and older-grade schoolchildren and the characteristics of their resolution. Hypotheses: we assume that (1) older-grade schoolchildren are more inclined to conflict during lessons than younger-grade schoolchildren, and (2) older-grade schoolchildren use a wider variety of conflict-resolution strategies than younger-grade schoolchildren. The tasks of the study: 1. To reveal the younger...

  5. The lifesaving potential of specialized on-scene medical support for urban tactical operations.

    Science.gov (United States)

    Metzger, Jeffery C; Eastman, Alexander L; Benitez, Fernando L; Pepe, Paul E

    2009-01-01

    Since the 1980s, the specialized field of tactical medicine has evolved with growing support from numerous law-enforcement and medical organizations. On-scene backup from tactical emergency medical support (TEMS) providers has not only permitted more immediate advanced medical aid to injured officers, victims, bystanders, and suspects, but also allows for rapid after-incident medical screening or minor treatments that can obviate an unnecessary transport to an emergency department. The purpose of this report is to document one very explicit benefit of TEMS deployment, namely, a situation in which a police officer's life was saved by the routine on-scene presence of specialized TEMS physicians. In this specific case, a police officer was shot in the anterior neck during a law-enforcement operation and became moribund with massive hemorrhage and compromised airway. Two TEMS physicians, who had been integrated into the tactical law-enforcement team, were on scene, controlled the hemorrhage, and provided a surgical airway. By the time of arrival at the hospital, the patient had begun purposeful movements and, within 12 hours, was alert and oriented. Considering the rapid decline in the patient's condition, it was later deemed by quality assurance reviewers that the on-scene presence of these TEMS providers was lifesaving.

  6. Iridium Interfacial Stack - IrIS

    Science.gov (United States)

    Spry, David

    2012-01-01

    Iridium Interfacial Stack (IrIS) is the sputter deposition of high-purity tantalum silicide (TaSi2-400 nm)/platinum (Pt-200 nm)/iridium (Ir-200 nm)/platinum (Pt-200 nm) in an ultra-high vacuum system followed by a 600 C anneal in nitrogen for 30 minutes. IrIS simultaneously acts as both a bond metal and a diffusion barrier. This bondable metallization that also acts as a diffusion barrier can prevent oxygen from air and gold from the wire-bond from infiltrating silicon carbide (SiC) monolithically integrated circuits (ICs) operating above 500 C in air for over 1,000 hours. This TaSi2/Pt/Ir/Pt metallization is easily bonded for electrical connection to off-chip circuitry and does not require extra anneals or masking steps. There are two ways that IrIS can be used in SiC ICs for applications above 500 C: it can be put directly on a SiC ohmic contact metal, such as Ti, or be used as a bond metal residing on top of an interconnect metal. For simplicity, only the use as a bond metal is discussed. The layer thickness ratio of TaSi2 to the first Pt layer deposited thereon should be 2:1. This will allow Si from the TaSi2 to react with the Pt to form Pt2Si during the 600 C anneal carried out after all layers have been deposited. The Ir layer does not readily form a silicide at 600 C, and thereby prevents the Si from migrating into the top-most Pt layer during future anneals and high-temperature IC operation. The second (i.e., top-most) deposited Pt layer needs to be about 200 nm thick to enable easy wire bonding. The thickness of 200 nm for Ir was chosen for initial experiments; further optimization of the Ir layer thickness may be possible via further experimentation. Ir itself is not easily wire-bonded because of its hardness and much higher melting point than Pt. Below the iridium layer, the TaSi2 and Pt react and form the desired Pt2Si during the post-deposition anneal, while the Pt above the iridium layer remains pure, as desired to facilitate easy and strong wire-bonding to the Si

  7. OH/IR stars in the Galaxy

    International Nuclear Information System (INIS)

    Baud, B.

    1978-01-01

    Radio astronomical observations leading to the discovery of 71 OH/IR sources are described in this thesis. These OH/IR sources are characterized by their double peaked OH emission profile at a wavelength of 18 cm and by their strong infrared (IR) emission. An analysis of the distribution and radial velocities of a number of previously known and new OH/IR sources was performed. The parameter ΔV (the velocity separation between the two emission peaks of the 18 cm line profile) was found to be a good criterion for a population classification with respect to stellar age.

  8. First-principles study on cubic pyrochlore iridates Y2Ir2O7 and Pr2Ir2O7

    International Nuclear Information System (INIS)

    Ishii, Fumiyuki; Mizuta, Yo Pierre; Kato, Takehiro; Ozaki, Taisuke; Weng Hongming; Onoda, Shigeki

    2015-01-01

    Fully relativistic first-principles electronic structure calculations based on a noncollinear local spin density approximation (LSDA) are performed for the pyrochlore iridates Y2Ir2O7 and Pr2Ir2O7. The all-in, all-out antiferromagnetic (AF) order is stabilized by the on-site Coulomb repulsion U > Uc in the LSDA+U scheme, with Uc ∼ 1.1 eV and 1.3 eV for Y2Ir2O7 and Pr2Ir2O7, respectively. AF semimetals with and without Weyl points and then a topologically trivial AF insulator successively appear with further increasing U. For U = 1.3 eV, Y2Ir2O7 is a topologically trivial narrow-gap AF insulator having an ordered local magnetic moment of ∼0.5 μB/Ir, while Pr2Ir2O7 is barely a paramagnetic semimetal with electron and hole concentrations of 0.016/Ir, in overall agreement with experiments. With decreasing oxygen position parameter x, describing the trigonal compression of the IrO6 octahedra, Pr2Ir2O7 is driven through a non-Fermi-liquid semimetal having only an isolated Fermi point of Γ8+, showing a quadratic band touching, to a Z2 topological insulator. (author)

  9. The effects of scene characteristics, resolution, and compression on the ability to recognize objects in video

    Science.gov (United States)

    Dumke, Joel; Ford, Carolyn G.; Stange, Irena W.

    2011-03-01

    Public safety practitioners increasingly use video for object recognition tasks. These end users need guidance regarding how to identify the level of video quality necessary for their application. The quality of video used in public safety applications must be evaluated in terms of its usability for specific tasks performed by the end user. The Public Safety Communication Research (PSCR) project performed a subjective test as one of the first in a series to explore visual intelligibility in video: a user's ability to recognize an object in a video stream given various conditions. The test sought to measure the effects on visual intelligibility of three scene parameters (target size, scene motion, scene lighting), several compression rates, and two resolutions (VGA (640x480) and CIF (352x288)). Seven similarly sized objects were used as targets in nine sets of near-identical source scenes, where each set was created using a different combination of the parameters under study. Viewers were asked to identify the objects via multiple choice questions. Objective measurements were performed on each of the scenes, and the ability of the measurement to predict visual intelligibility was studied.

  10. The detectability of cracks using sonic IR

    Science.gov (United States)

    Morbidini, Marco; Cawley, Peter

    2009-05-01

    This paper proposes a methodology to study the detectability of fatigue cracks in metals using sonic IR (also known as thermosonics). The method relies on the validation of simple finite-element thermal models of the cracks and specimens in which the thermal loads have been defined by means of a priori measurement of the additional damping introduced in the specimens by each crack. This estimate of crack damping is used in conjunction with a local measurement of the vibration strain during ultrasonic excitation to retrieve the power released at the crack; these functions are then input to the thermal model of the specimens to find the resulting temperature rises (sonic IR signals). The method was validated on mild steel beams with two-dimensional cracks obtained in the low-cycle fatigue regime as well as nickel-based superalloy beams with three-dimensional "thumbnail" cracks generated in the high-cycle fatigue regime. The equivalent 40 kHz strain necessary to obtain a desired temperature rise was calculated for cracks in the nickel superalloy set, and the detectability of cracks as a function of length in the range of 1-5 mm was discussed.

  11. Radioluminescence dating: the IR emission of feldspar

    International Nuclear Information System (INIS)

    Schilles, Thomas.; Habermann, Jan

    2000-01-01

    A new luminescence reader for radioluminescence (RL) measurements is presented. The system allows detection of RL emissions in the near infrared region (IR). Basic bleaching properties of the IR-RL emission of feldspars are investigated. Sunlight-bleaching experiments as a test for sensitivity changes are presented. IR-bleaching experiments were carried out to obtain information about the underlying physical processes of the IR-RL emission

  12. Representation of Gravity-Aligned Scene Structure in Ventral Pathway Visual Cortex.

    Science.gov (United States)

    Vaziri, Siavash; Connor, Charles E

    2016-03-21

    The ventral visual pathway in humans and non-human primates is known to represent object information, including shape and identity [1]. Here, we show the ventral pathway also represents scene structure aligned with the gravitational reference frame in which objects move and interact. We analyzed shape tuning of recently described macaque monkey ventral pathway neurons that prefer scene-like stimuli to objects [2]. Individual neurons did not respond to a single shape class, but to a variety of scene elements that are typically aligned with gravity: large planes in the orientation range of ground surfaces under natural viewing conditions, planes in the orientation range of ceilings, and extended convex and concave edges in the orientation range of wall/floor/ceiling junctions. For a given neuron, these elements tended to share a common alignment in eye-centered coordinates. Thus, each neuron integrated information about multiple gravity-aligned structures as they would be seen from a specific eye and head orientation. This eclectic coding strategy provides only ambiguous information about individual structures but explicit information about the environmental reference frame and the orientation of gravity in egocentric coordinates. In the ventral pathway, this could support perceiving and/or predicting physical events involving objects subject to gravity, recognizing object attributes like animacy based on movement not caused by gravity, and/or stabilizing perception of the world against changes in head orientation [3-5]. Our results, like the recent discovery of object weight representation [6], imply that the ventral pathway is involved not just in recognition, but also in physical understanding of objects and scenes. Copyright © 2016 Elsevier Ltd. All rights reserved.

  13. The interaction of sermon and essay writing in the essays of Julius Sasnauskas and Giedrė Kazlauskaitė (original title in Lithuanian: Pamokslo ir eseistikos sąveika Juliaus Sasnausko ir Giedrės Kazlauskaitės eseistikoje)

    OpenAIRE

    Skirmantienė, Daiva

    2010-01-01

    The interaction between theological literature and literary theology helps to understand the semantic and ideational field of the work of two younger-generation writers, the priest and preacher Julius Sasnauskas and the lay writer Giedrė Kazlauskaitė. The search for theological meanings in their texts reflects the contemporary person's effort, through literature that voices the concerns of its time, to find a way towards certain Christian truths, to reflect on one's own faith, and to contemplate the history of salvation. The authors' work...

  14. Characterization of a novel miniaturized burst-mode infrared laser system for IR-MALDESI mass spectrometry imaging.

    Science.gov (United States)

    Ekelöf, Måns; Manni, Jeffrey; Nazari, Milad; Bokhart, Mark; Muddiman, David C

    2018-03-01

    Laser systems are widely used in mass spectrometry as sample probes and ionization sources. Mid-infrared lasers are particularly suitable for analysis of high water content samples such as animal and plant tissues, using water as a resonantly excited sacrificial matrix. Commercially available mid-IR lasers have historically been bulky and expensive due to cooling requirements. This work presents a novel air-cooled miniature mid-IR laser with adjustable burst-mode output and details an evaluation of its performance for mass spectrometry imaging. The miniature laser was found capable of generating sufficient energy for complete ablation of animal tissue in the context of an IR-MALDESI experiment with exogenously added ice matrix, yielding several hundred confident metabolite identifications. Graphical abstract The use of a novel miniature 2.94 μm burst-mode laser in IR-MALDESI allows for rapid and sensitive mass spectrometry imaging of a whole mouse.

  15. Rapid gist perception of meaningful real-life scenes: Exploring individual and gender differences in multiple categorization tasks

    Science.gov (United States)

    Vanmarcke, Steven; Wagemans, Johan

    2015-01-01

    In everyday life, we are generally able to dynamically understand and adapt to socially (ir)relevant encounters, and to make appropriate decisions about these. All of this requires an impressive ability to directly filter and obtain the most informative aspects of a complex visual scene. Such rapid gist perception can be assessed in multiple ways. In the ultrafast categorization paradigm developed by Simon Thorpe et al. (1996), participants get a clear categorization task in advance and succeed at detecting the target object of interest (animal) almost perfectly (even with 20 ms exposures). Since this pioneering work, follow-up studies consistently reported population-level reaction time differences on different categorization tasks, indicating a superordinate advantage (animal versus dog) and effects of perceptual similarity (animals versus vehicles) and object category size (natural versus animal versus dog). In this study, we replicated and extended these separate findings by using a systematic collection of different categorization tasks (varying in presentation time, task demands, and stimuli) and focusing on individual differences in terms of, e.g., gender and intelligence. In addition to replicating the main findings from the literature, we find subtle, yet consistent gender differences (women faster than men). PMID:26034569

  16. Technical Manual for the SAM Biomass Power Generation Model

    Energy Technology Data Exchange (ETDEWEB)

    Jorgenson, J.; Gilman, P.; Dobos, A.

    2011-09-01

    This technical manual provides context for the implementation of the biomass electric power generation performance model in the National Renewable Energy Laboratory's (NREL's) System Advisor Model (SAM). Additionally, the report details the engineering and scientific principles behind the underlying calculations in the model. The framework established in this manual is designed to give users a complete understanding of behind-the-scenes calculations and the results generated.

  17. Assembling a game development scene? Uncovering Finland’s largest demo party

    Directory of Open Access Journals (Sweden)

    Heikki Tyni

    2014-03-01

    Full Text Available The study takes a look at Assembly, a large-scale LAN and demo party founded in 1992 and organized annually in Helsinki, Finland. Assembly is used as a case study to explore the relationship between computer hobbyism – including gaming, the demoscene and other related activities – and professional game development. Drawing on expert interviews, a visitor questionnaire and news coverage, we ask what kind of functions Assembly has played for the scene in general, and for the formation and fostering of the Finnish game industry in particular. The conceptual contribution of the paper is constructed around the interrelated concepts of scene, technicity and gaming capital.

  18. Impulsive IR-multiphoton dissociation of acrolein: observation of non-statistical product vibrational excitation in CO ( v=1-12) by time resolved IR fluorescence spectroscopy

    Science.gov (United States)

    Chowdhury, P. K.

    2000-10-01

    On IR-multiphoton excitation, vibrationally highly excited acrolein molecules undergo concerted dissociation generating CO and ethylene. The vibrationally excited products, CO and ethylene, are detected immediately following the CO2 laser pulse by observing IR fluorescence at 4.7 and 3.2 μm, respectively. The nascent CO is formed with significant vibrational excitation, with a Boltzmann population distribution for the v = 1-12 levels corresponding to Tv = 12,950 ± 50 K. The average vibrational energy in the product CO is found to be 26 kcal/mol, in contrast to its statistical share of 5 kcal/mol available from the product energy distribution. The nascent vibrationally excited ethylene either dissociates by absorbing further infrared laser photons from the tail of the CO2 laser pulse or relaxes by collisional deactivation. The ethylene IR-fluorescence excitation spectrum showed structure in the quasi-continuum, with a facile resonance at 10.53 μm corresponding to the 10P(14) CO2 laser line, which explains the higher acetylene yield observed at higher pressure. A hydrogen atom transfer mechanism followed by an impulsive C-C bond break in the acrolein transition state may be responsible for such a non-statistical product energy distribution.

  19. Multiplexing of spatial modes in the mid-IR region

    Science.gov (United States)

    Gailele, Lucas; Maweza, Loyiso; Dudley, Angela; Ndagano, Bienvenu; Rosales-Guzman, Carmelo; Forbes, Andrew

    2017-02-01

    Traditional optical communication systems optimize multiplexing in polarization and wavelength, both transmitted in fiber and in free space, to attain high-bandwidth data communication. Yet despite these technologies, we are expected to reach a bandwidth ceiling in the near future. Communication using orbital angular momentum (OAM) carrying modes offers an infinite-dimensional set of states, providing a means to increase link capacity by multiplexing spatially overlapping modes in both the azimuthal and radial degrees of freedom. OAM modes are multiplexed and de-multiplexed by the use of spatial light modulators (SLMs). Complex amplitude modulation is applied to the laser beam's phase and amplitude to generate Laguerre-Gaussian (LG) modes. Modal decomposition is employed to detect these modes, exploiting their orthogonality as they propagate in space. We demonstrate data transfer by sending images as a proof-of-concept in a lab-based scheme. We demonstrate the creation and detection of OAM modes in the mid-IR region as a precursor to a mid-IR free-space communication link.
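
    As a rough illustration of how an SLM phase pattern for an OAM-carrying beam might be generated (a minimal sketch, not the authors' complex-amplitude-modulation code), the following Python/NumPy snippet builds the helical phase term exp(i*l*phi) of an LG mode with azimuthal index l; the radial structure and amplitude shaping of the full LG mode are deliberately omitted.

        import numpy as np

        def oam_phase_mask(n_pixels=512, l=3):
            """Return a phase-only mask (radians, wrapped to [0, 2*pi)) that
            imprints an orbital angular momentum charge l onto a beam.
            """
            y, x = np.mgrid[-1:1:n_pixels * 1j, -1:1:n_pixels * 1j]
            phi = np.arctan2(y, x)             # azimuthal coordinate
            return np.mod(l * phi, 2 * np.pi)  # helical phase exp(i*l*phi)

        mask = oam_phase_mask(512, l=3)
        print(mask.shape, float(mask.min()), float(mask.max()))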

  20. β-Isocyanoalanine as an IR probe: comparison of vibrational dynamics between isonitrile and nitrile-derivatized IR probes.

    Science.gov (United States)

    Maj, Michał; Ahn, Changwoo; Kossowska, Dorota; Park, Kwanghee; Kwak, Kyungwon; Han, Hogyu; Cho, Minhaeng

    2015-05-07

    An infrared (IR) probe based on isonitrile (NC)-derivatized alanine 1 was synthesized and the vibrational properties of its NC stretching mode were investigated using FTIR and femtosecond IR pump-probe spectroscopy. It is found that the NC stretching mode is very sensitive to the hydrogen-bonding ability of solvent molecules. Moreover, its transition dipole strength is larger than that of nitrile (CN) in nitrile-derivatized IR probe 2. The vibrational lifetime of the NC stretching mode is found to be 5.5 ± 0.2 ps in both D2O and DMF solvents, which is several times longer than that of the azido (N3) stretching mode in azido-derivatized IR probe 3. Altogether these properties suggest that the NC group can be a very promising sensing moiety of IR probes for studying the solvation structure and dynamics of biomolecules.

  1. Image policy, subjectivation and argument scenes

    Directory of Open Access Journals (Sweden)

    Ângela Cristina Salgueiro Marques

    2014-12-01

    Full Text Available This paper is aimed at discussing, with a focus on Jacques Rancière, how an image policy can be noticed in the creative production of scenes of dissent from which the political agent emerges, appears and constitutes himself in a process of subjectivation. The political and critical power of the image is linked to survival acts: operations and attempts that make it possible to resist the captures, silences and excesses committed by media discourses, by social institutions and by the State.

  2. John Lennon, autograph hound: The fan-musician community in Hamburg's early rock-and-roll scene, 1960–65

    Directory of Open Access Journals (Sweden)

    Julia Sneeringer

    2011-03-01

    Full Text Available This article explores the Beat music scene in Hamburg, West Germany, in the early 1960s. This scene became famous for its role in incubating the Beatles, who played over 250 nights there in 1960–62, but this article focuses on the prominent role of fans in this scene. Here fans were welcomed by bands and club owners as cocreators of a scene that offered respite from the prevailing conformism of West Germany during the Economic Miracle. This scene, born at the confluence of commercial and subcultural impulses, was also instrumental in transforming rock and roll from a working-class niche product to a cross-class lingua franca for youth. It was also a key element in West Germany's broader processes of democratization during the 1960s, opening up social space in which the meanings of authority, respectability, and democracy itself could be questioned and reworked.

  3. Enhancement of Stereo Imagery by Artificial Texture Projection Generated Using a LIDAR

    Science.gov (United States)

    Veitch-Michaelis, Joshua; Muller, Jan-Peter; Walton, David; Storey, Jonathan; Foster, Michael; Crutchley, Benjamin

    2016-06-01

    Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogenous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal to noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.
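
    To make the role of projected texture concrete, here is a minimal OpenCV sketch (illustrative only; the file names and parameters are hypothetical, and this is not the authors' pipeline) that computes block-matching disparity maps for a texture-poor scene with and without artificial texture, so the fraction of successfully matched pixels can be compared.

        import cv2
        import numpy as np

        def disparity(left_gray, right_gray, num_disp=64, block=15):
            """Classic block-matching disparity; returns disparities in pixels."""
            matcher = cv2.StereoBM_create(numDisparities=num_disp, blockSize=block)
            return matcher.compute(left_gray, right_gray).astype(np.float32) / 16.0

        # Hypothetical file names: rectified stereo pairs of the same scene,
        # captured with ambient light only and with the projected dot pattern on.
        pairs = {
            "ambient": ("left_ambient.png", "right_ambient.png"),
            "textured": ("left_textured.png", "right_textured.png"),
        }
        for name, (lf, rf) in pairs.items():
            left = cv2.imread(lf, cv2.IMREAD_GRAYSCALE)
            right = cv2.imread(rf, cv2.IMREAD_GRAYSCALE)
            if left is None or right is None:
                continue  # skip if the sample images are not present
            d = disparity(left, right)
            valid = np.count_nonzero(d > 0) / d.size
            print(f"{name}: {valid:.1%} of pixels received a valid disparity")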

  4. ENHANCEMENT OF STEREO IMAGERY BY ARTIFICIAL TEXTURE PROJECTION GENERATED USING A LIDAR

    Directory of Open Access Journals (Sweden)

    J. Veitch-Michaelis

    2016-06-01

    Full Text Available Passive stereo imaging is capable of producing dense 3D data, but image matching algorithms generally perform poorly on images with large regions of homogenous texture due to ambiguous match costs. Stereo systems can be augmented with an additional light source that can project some form of unique texture onto surfaces in the scene. Methods include structured light, laser projection through diffractive optical elements, data projectors and laser speckle. Pattern projection using lasers has the advantage of producing images with a high signal to noise ratio. We have investigated the use of a scanning visible-beam LIDAR to simultaneously provide enhanced texture within the scene and to provide additional opportunities for data fusion in unmatched regions. The use of a LIDAR rather than a laser alone allows us to generate highly accurate ground truth data sets by scanning the scene at high resolution. This is necessary for evaluating different pattern projection schemes. Results from LIDAR generated random dots are presented and compared to other texture projection techniques. Finally, we investigate the use of image texture analysis to intelligently project texture where it is required while exploiting the texture available in the ambient light image.

  5. Automatic structural scene digitalization.

    Science.gov (United States)

    Tang, Rui; Wang, Yuhan; Cosker, Darren; Li, Wenbin

    2017-01-01

    In this paper, we present an automatic system for the analysis and labeling of structural scenes, i.e., floor plan drawings in Computer-Aided Design (CAD) format. The proposed system applies a fusion strategy to detect and recognize various components of CAD floor plans, such as walls, doors, windows and other ambiguous assets. Technically, a general rule-based filter parsing method is first adopted to extract effective information from the original floor plan. Then, an image-processing-based recovery method is employed to correct the information extracted in the first step. Our proposed method is fully automatic and real-time. Such an analysis system provides high accuracy and has also been evaluated on a public website that, on average, records more than ten thousand effective uses per day and reaches a relatively high satisfaction rate.

  6. A STEP TOWARDS DYNAMIC SCENE ANALYSIS WITH ACTIVE MULTI-VIEW RANGE IMAGING SYSTEMS

    Directory of Open Access Journals (Sweden)

    M. Weinmann

    2012-07-01

    Full Text Available Obtaining an appropriate 3D description of the local environment remains a challenging task in photogrammetric research. As terrestrial laser scanners (TLSs) perform a highly accurate, but time-dependent spatial scanning of the local environment, they are only suited for capturing static scenes. In contrast, new types of active sensors provide the possibility of simultaneously capturing range and intensity information by images with a single measurement, and the high frame rate also allows for capturing dynamic scenes. However, due to the limited field of view, one observation is not sufficient to obtain a full scene coverage and therefore, typically, multiple observations are collected from different locations. This can be achieved by either placing several fixed sensors at different known locations or by using a moving sensor. In the latter case, the relation between different observations has to be estimated by using information extracted from the captured data and then, a limited field of view may lead to problems if there are too many moving objects within it. Hence, a moving sensor platform with multiple and coupled sensor devices offers the advantages of an extended field of view which results in a stabilized pose estimation, an improved registration of the recorded point clouds and an improved reconstruction of the scene. In this paper, a new experimental setup for investigating the potentials of such multi-view range imaging systems is presented which consists of a moving cable car equipped with two synchronized range imaging devices. The presented setup allows for monitoring in low altitudes and it is suitable for getting dynamic observations which might arise from moving cars or from moving pedestrians. Relying on both 3D geometry and 2D imagery, a reliable and fully automatic approach for co-registration of captured point cloud data is presented which is essential for a high quality of all subsequent tasks. The approach involves using

  7. Joint IAEA/NEA IRS guidelines

    International Nuclear Information System (INIS)

    1997-01-01

    The Incident Reporting System (IRS) is an international system jointly operated by the International Atomic Energy Agency (IAEA) and the Nuclear Energy Agency of the Organization for Economic Cooperation and Development (OECD/NEA). The fundamental objective of the IRS is to contribute to improving the safety of commercial nuclear power plants (NPPs) which are operated worldwide. This objective can be achieved by providing timely and detailed information on both technical and human factors related to events of safety significance which occur at these plants. The purpose of these guidelines, which supersede the previous IAEA Safety Series No. 93 (Part II) and the NEA IRS guidelines, is to describe the system and to give users the necessary background and guidance to enable them to produce IRS reports meeting a high standard of quality while retaining the high efficiency of the system expected by all Member States operating nuclear power plants.

  8. Memory-guided attention during active viewing of edited dynamic scenes.

    Science.gov (United States)

    Valuch, Christian; König, Peter; Ansorge, Ulrich

    2017-01-01

    Films, TV shows, and other edited dynamic scenes contain many cuts, which are abrupt transitions from one video shot to the next. Cuts occur within or between scenes, and often join together visually and semantically related shots. Here, we tested the degree to which memory for the visual features of the precut shot facilitates shifting attention to the postcut shot. We manipulated visual similarity across cuts, and measured how this affected covert attention (Experiment 1) and overt attention (Experiments 2 and 3). In Experiments 1 and 2, participants actively viewed a target movie that randomly switched locations with a second, distractor movie at the time of the cuts. In both experiments, participants were able to deploy attention more rapidly and accurately to the target movie's continuation when visual similarity across the cut was high than when it was low. Experiment 3 tested whether this could be explained by stimulus-driven (bottom-up) priming by feature similarity, using one clip at screen center that was followed by two alternative continuations to the left and right. Here, even the highest similarity across cuts did not capture attention. We conclude that following cuts of high visual similarity, memory-guided attention facilitates the deployment of attention, but this effect depends (top-down) on the viewer's active matching of scene content across cuts.

  9. Genetics of variation in HOMA-IR and cardiovascular risk factors in Mexican-Americans.

    Science.gov (United States)

    Voruganti, V Saroja; Lopez-Alvarenga, Juan C; Nath, Subrata D; Rainwater, David L; Bauer, Richard; Cole, Shelley A; Maccluer, Jean W; Blangero, John; Comuzzie, Anthony G

    2008-03-01

    Insulin resistance is a major biochemical defect underlying the pathogenesis of cardiovascular disease (CVD). Mexican-Americans are known to have an unfavorable cardiovascular profile. Thus, the aim of this study was to investigate the genetic effect on variation in HOMA-IR and to evaluate its genetic correlations with other phenotypes related to risk of CVD in Mexican-Americans. The homeostatic model assessment method (HOMA-IR) is one of several approaches used to measure insulin resistance and was used here to generate a quantitative phenotype for genetic analysis. For 644 adults who had participated in the San Antonio Family Heart Study (SAFHS), estimates of genetic contribution were computed using a variance components method implemented in SOLAR. Traits that exhibited significant heritabilities were body mass index (BMI) (h2 = 0.43), waist circumference (h2 = 0.48), systolic blood pressure (h2 = 0.30), diastolic blood pressure (h2 = 0.21), pulse pressure (h2 = 0.32), triglycerides (h2 = 0.51), LDL cholesterol (h2 = 0.31), HDL cholesterol (h2 = 0.24), C-reactive protein (h2 = 0.17), and HOMA-IR (h2 = 0.33). A genome-wide scan for HOMA-IR revealed significant evidence of linkage on chromosome 12q24 (close to PAH, phenylalanine hydroxylase; LOD = 3.01). Significant genetic correlations were found between HOMA-IR and BMI (rhoG = 0.36), waist circumference (rhoG = 0.47), pulse pressure (rhoG = 0.39), and HDL cholesterol (rhoG = -0.18). Identification of significant linkage for HOMA-IR on chromosome 12q replicates previous family-based studies reporting linkage of phenotypes associated with type 2 diabetes in the same chromosomal region. Significant genetic correlations between HOMA-IR and phenotypes related to CVD risk factors suggest that a common set of genes influences the regulation of these phenotypes.
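
    For reference, the HOMA-IR phenotype used above is conventionally computed from fasting measurements as (glucose [mmol/L] x insulin [uU/mL]) / 22.5; the study's own assay details are not given here, so the snippet below is just the standard definition.

```python
# Standard HOMA-IR definition (not study-specific): fasting glucose in mmol/L
# times fasting insulin in uU/mL, divided by 22.5.
def homa_ir(fasting_glucose_mmol_l: float, fasting_insulin_uU_ml: float) -> float:
    return fasting_glucose_mmol_l * fasting_insulin_uU_ml / 22.5

print(round(homa_ir(5.0, 10.0), 2))   # 2.22 for a typical fasting sample
```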

  10. Was That Levity or Livor Mortis? Crime Scene Investigators' Perspectives on Humor and Work

    Science.gov (United States)

    Vivona, Brian D.

    2012-01-01

    Humor is common and purposeful in most work settings. Although researchers have examined humor and joking behavior in various work settings, minimal research has been done on humor applications in the field of crime scene investigation. The crime scene investigator encounters death, trauma, and tragedy in a more intimate manner than any other…

  11. Heterostructures of lanthanum manganese oxide and multiferroic bismuth ferrite

    Directory of Open Access Journals (Sweden)

    Bonifacas VENGALIS

    2011-11-01

    Full Text Available In recent years, considerable progress has been made in the fabrication of new electronic devices by growing, investigating and applying thin-film structures composed of various multicomponent functional oxides. This group of oxides includes the superconducting cuprates, the manganese oxides (manganites), which exhibit the magnetoresistance effect, and other ferromagnetic, ferroelectric and multiferroic oxides. Manganites (general formula Ln1-xAxMnO3, where Ln = La, Nd, ..., and A is a divalent cation such as Ba, Sr or Ca) receive much attention because of their interesting electrical properties and their suitability for various spintronic devices. Multiferroics (ferroelectric ferromagnets) exhibit the magnetoelectric effect, which offers the unique possibility of controlling the electrical and magnetic properties of a material with electric and magnetic fields. Bismuth ferrite BiFeO3 (BFO), which has a rhombohedrally distorted perovskite structure, is currently one of the most intensively studied compounds of this class. Organic semiconductors also open up many new possibilities for electronics; their advantages are the great diversity of organic compounds and a comparatively simple and inexpensive thin-film fabrication technology. In addition, organic semiconductors exhibit unusually long spin relaxation times, so in the future they may be used to build new spintronic devices. This article reviews recent studies of the above-mentioned materials carried out by the authors and their colleagues. The main focus is on the growth of thin films and heterostructures of the magnetoresistive lanthanum manganese oxides (manganites) and the multiferroic compound BiFeO3 (BFO), on the formation of interfaces between these oxides, conducting SrTiO3 and the organic semiconductor Alq3, and on the electrical properties of the heterostructures. Thin La2/3A1/3MnO3 (A = Ca, Sr, Ba, Ce) films with thickness d

  12. Sex differences in the brain response to affective scenes with or without humans.

    Science.gov (United States)

    Proverbio, Alice Mado; Adorni, Roberta; Zani, Alberto; Trestianu, Laura

    2009-10-01

    Recent findings have demonstrated that women might be more reactive than men to viewing painful stimuli (vicarious response to pain), and therefore more empathic [Han, S., Fan, Y., & Mao, L. (2008). Gender difference in empathy for pain: An electrophysiological investigation. Brain Research, 1196, 85-93]. We investigated whether the two sexes differed in their cerebral responses to affective pictures portraying humans in different positive or negative contexts compared to natural or urban scenarios. 440 IAPS slides were presented to 24 Italian students (12 women and 12 men). Half the pictures displayed humans while the remaining scenes lacked visible persons. ERPs were recorded from 128 electrodes and swLORETA (standardized weighted Low-Resolution Electromagnetic Tomography) source reconstruction was performed. Occipital P115 was greater in response to persons than to scenes and was affected by the emotional valence of the human pictures. This suggests that processing of biologically relevant stimuli is prioritized. Orbitofrontal N2 was greater in response to positive than negative human pictures in women but not in men, and not to scenes. A late positivity (LP) to suffering humans far exceeded the response to negative scenes in women but not in men. In both sexes, the contrast suffering-minus-happy humans revealed a difference in the activation of the occipito/temporal, right occipital (BA19), bilateral parahippocampal, left dorsal prefrontal cortex (DPFC) and left amygdala. However, increased right amygdala and right frontal area activities were observed only in women. The humans-minus-scenes contrast revealed a difference in the activation of the middle occipital gyrus (MOG) in men, and of the left inferior parietal (BA40), left superior temporal gyrus (STG, BA38) and right cingulate (BA31) in women (270-290 ms). These data indicate a sex-related difference in the brain response to humans, possibly supporting human empathy.

  13. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Science.gov (United States)

    Kim, Sungho; Song, Woo-Jin; Kim, So-Hyun

    2016-01-01

    Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection-based decision fusion on a synthetic database generated
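
    The fusion step can be sketched as follows, under the assumption that per-candidate SAR and IR features have already been extracted and registered; this is a generic Adaboost-based fusion with synthetic placeholder data, not the paper's implementation.

```python
# Generic sketch of SAR/IR fusion with Adaboost: concatenate the per-candidate
# features from both sensors and train a boosted classifier to make the final
# target/clutter decision. All data here is random placeholder material.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
n = 200
sar_feats = rng.normal(size=(n, 4))        # hypothetical SAR candidate features
ir_feats = rng.normal(size=(n, 4))         # hypothetical IR candidate features
labels = rng.integers(0, 2, size=n)        # 1 = target, 0 = clutter

fused = np.hstack([sar_feats, ir_feats])   # concatenate both sensors per candidate
clf = AdaBoostClassifier(n_estimators=50).fit(fused, labels)
print(clf.predict(fused[:5]))
```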

  14. Robust Ground Target Detection by SAR and IR Sensor Fusion Using Adaboost-Based Feature Selection

    Directory of Open Access Journals (Sweden)

    Sungho Kim

    2016-07-01

    Full Text Available Long-range ground targets are difficult to detect in a noisy, cluttered environment using either synthetic aperture radar (SAR) images or infrared (IR) images. SAR-based detectors can provide a high detection rate, but with a high false alarm rate due to background scatter noise. IR-based approaches can detect hot targets but are affected strongly by the weather conditions. This paper proposes a novel target detection method based on decision-level SAR and IR fusion using an Adaboost-based machine learning scheme to achieve a high detection rate and a low false alarm rate. The proposed method consists of an individual detection, registration, and fusion architecture. This paper presents a single framework for SAR and IR target detection using modified Boolean map visual theory (modBMVT) and feature-selection-based fusion. Previous methods applied different algorithms to detect SAR and IR targets because of the different physical image characteristics: a method that is optimized for IR target detection produces unsuccessful results in SAR target detection. This study examined the image characteristics and proposed a unified SAR and IR target detection method by inserting a median local average filter (MLAF, pre-filter) and an asymmetric morphological closing filter (AMCF, post-filter) into the BMVT. The original BMVT was optimized to detect small infrared targets. The proposed modBMVT can remove the thermal and scatter noise by the MLAF and detect extended targets by attaching the AMCF after the BMVT. Heterogeneous SAR and IR images were registered automatically using the proposed RANdom SAmple Region Consensus (RANSARC)-based homography optimization after a brute-force correspondence search using the detected target centers and regions. The final targets were detected by feature-selection-based sensor fusion using Adaboost. The proposed method showed good SAR and IR target detection performance through feature-selection-based decision fusion on a synthetic

  15. An Indoor Scene Recognition-Based 3D Registration Mechanism for Real-Time AR-GIS Visualization in Mobile Applications

    Directory of Open Access Journals (Sweden)

    Wei Ma

    2018-03-01

    Full Text Available Mobile Augmented Reality (MAR) systems are becoming ideal platforms for visualization, permitting users to better comprehend and interact with spatial information. This technological development has, in turn, prompted efforts to enhance mechanisms for registering virtual objects in real-world contexts. Most existing AR 3D registration techniques lack the scene recognition capabilities needed to accurately describe the positioning of virtual objects in scenes representing reality. Moreover, the application of such registration methods in indoor AR-GIS systems is further impeded by the limited capacity of these systems to detect the geometry and semantic information in indoor environments. In this paper, we propose a novel method for fusing virtual objects and indoor scenes, based on indoor scene recognition technology. To accomplish scene fusion in AR-GIS, we first detect key points in reference images. Then, we perform interior layout extraction using a Fully Connected Networks (FCN) algorithm to acquire layout coordinate points for the tracking targets. We detect and recognize the target scene in a video frame image to track targets and estimate the camera pose. In this method, virtual 3D objects are fused precisely to the real scene, according to the camera pose and the previously extracted layout coordinate points. Our results demonstrate that this approach enables accurate fusion of virtual objects with representations of real-world indoor environments. Based on this fusion technique, users can better grasp virtual three-dimensional representations on an AR-GIS platform.
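
    A minimal sketch of the tracking/registration step such a system depends on is given below: keypoints matched between a reference image and a live frame yield a homography that anchors virtual content to the tracked target. It assumes placeholder image files and is not the paper's FCN-based pipeline.

```python
# Sketch only: ORB keypoint matching plus RANSAC homography between a reference
# image and a video frame, the basic 2D registration step for anchoring
# virtual content to a tracked planar target. Image file names are placeholders.
import cv2
import numpy as np

ref = cv2.imread("reference.png", cv2.IMREAD_GRAYSCALE)
frame = cv2.imread("frame.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(1000)
kp1, des1 = orb.detectAndCompute(ref, None)
kp2, des2 = orb.detectAndCompute(frame, None)

matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
matches = sorted(matches, key=lambda m: m.distance)[:100]

src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)   # maps reference -> frame
print(H)
```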

  16. Angular difference feature extraction for urban scene classification using ZY-3 multi-angle high-resolution satellite imagery

    Science.gov (United States)

    Huang, Xin; Chen, Huijun; Gong, Jianya

    2018-01-01

    Spaceborne multi-angle images with high resolution are capable of simultaneously providing spatial details and three-dimensional (3D) information to support detailed and accurate classification of complex urban scenes. In recent years, satellite-derived digital surface models (DSMs) have been increasingly utilized to provide height information to complement spectral properties for urban classification. However, in this approach the multi-angle information is not effectively exploited, mainly because of the errors and difficulties of multi-view image matching and the inaccuracy of the generated DSM over complex and dense urban scenes. Therefore, it is still a challenging task to effectively exploit the available angular information from high-resolution multi-angle images. In this paper, we investigate the potential for classifying urban scenes based on local angular properties characterized from high-resolution ZY-3 multi-view images. Specifically, three categories of angular difference features (ADFs) are proposed to describe the angular information at three levels (i.e., pixel, feature, and label levels): (1) ADF-pixel: the angular information is directly derived by pixel comparison between the multi-angle images; (2) ADF-feature: the angular differences are described in the feature domains by comparing the differences between the multi-angle spatial features (e.g., morphological attribute profiles (APs)); (3) ADF-label: label-level angular features are proposed based on a group of urban primitives (e.g., buildings and shadows), in order to describe the specific angular information related to the types of primitive classes. In addition, we utilize spatial-contextual information to refine the multi-level ADF features using superpixel segmentation, for the purpose of alleviating the effects of salt-and-pepper noise and representing the main angular characteristics within a local area. The experiments on ZY-3 multi-angle images confirm that the proposed
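
    The simplest of the three feature levels, ADF-pixel, amounts to comparing co-registered views pixel by pixel; the toy sketch below assumes two already co-registered single-band views and is only meant to make that idea concrete.

```python
# Toy ADF-pixel sketch: per-pixel absolute difference between two co-registered
# viewing angles. Elevated objects change appearance with view direction, so
# they stand out in the difference map. Arrays below are random placeholders.
import numpy as np

nadir = np.random.rand(256, 256).astype(np.float32)      # placeholder nadir view
forward = np.random.rand(256, 256).astype(np.float32)    # placeholder forward view

adf_pixel = np.abs(nadir - forward)                      # pixel-level angular difference
print(adf_pixel.mean())
```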

  17. Context modulates attention to social scenes in toddlers with autism

    Science.gov (United States)

    Chawarska, Katarzyna; Macari, Suzanne; Shic, Frederick

    2013-01-01

    Background In typical development, the unfolding of social and communicative skills hinges upon the ability to allocate and sustain attention towards people, a skill present moments after birth. Deficits in social attention have been well documented in autism, though the underlying mechanisms are poorly understood. Methods In order to parse the factors that are responsible for limited social attention in toddlers with autism, we manipulated the context in which a person appeared in their visual field with regard to the presence of salient social (child-directed speech and eye contact) and nonsocial (distractor toys) cues for attention. Participants included 13- to 25-month-old toddlers with autism (AUT; n=54), developmental delay (DD; n=22), and typical development (TD; n=48). Their visual responses were recorded with an eye-tracker. Results In conditions devoid of eye contact and speech, the distribution of attention between key features of the social scene in toddlers with autism was comparable to that in DD and TD controls. However, when explicit dyadic cues were introduced, toddlers with autism showed decreased attention to the entire scene and, when they looked at the scene, they spent less time looking at the speaker’s face and monitoring her lip movements than the control groups. In toddlers with autism, decreased time spent exploring the entire scene was associated with increased symptom severity and lower nonverbal functioning; atypical language profiles were associated with decreased monitoring of the speaker’s face and her mouth. Conclusions While in certain contexts toddlers with autism attend to people and objects in a typical manner, they show decreased attentional response to dyadic cues for attention. Given that mechanisms supporting responsivity to dyadic cues are present shortly after birth and are highly consequential for development of social cognition and communication, these findings have important implications for the understanding of the

  18. Far-infrared pedestrian detection for advanced driver assistance systems using scene context

    Science.gov (United States)

    Wang, Guohua; Liu, Qiong; Wu, Qingyao

    2016-04-01

    Pedestrian detection is one of the most critical but challenging components in advanced driver assistance systems. Far-infrared (FIR) images are well suited for pedestrian detection even in a dark environment. However, most current detection approaches focus only on the pedestrian patterns themselves, so robust, real-time detection cannot be well achieved. We propose a fast FIR pedestrian detection approach, called MAP-HOGLBP-T, that explicitly exploits the scene context for the driver assistance system. In MAP-HOGLBP-T, three algorithms are developed to exploit scene contextual information from roads, vehicles, and background objects of high homogeneity, and we employ a Bayesian approach to build a classifier learner which respects the scene contextual information. We also develop a multiframe approval scheme to enhance the detection performance based on the spatiotemporal continuity of pedestrians. Our empirical study on real-world datasets has demonstrated the efficiency and effectiveness of the proposed method. The performance is shown to be better than that of state-of-the-art low-level feature-based approaches.
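
    As a rough sketch of the appearance-based core of such a detector (the scene-context and multiframe-approval stages are omitted), the example below trains a linear classifier on HOG features of FIR patches; the patch data is a random placeholder.

```python
# Sketch of the HOG + linear-classifier core of an FIR pedestrian detector.
# A real system would add LBP features, scene context and temporal filtering.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
patches = rng.random((40, 128, 64))        # placeholder 128x64 FIR training patches
labels = rng.integers(0, 2, size=40)       # 1 = pedestrian, 0 = background

feats = np.array([hog(p, orientations=9, pixels_per_cell=(8, 8),
                      cells_per_block=(2, 2)) for p in patches])
clf = LinearSVC().fit(feats, labels)
print(clf.decision_function(feats[:3]))    # higher score = more pedestrian-like
```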

  19. Training Small Networks for Scene Classification of Remote Sensing Images via Knowledge Distillation

    Directory of Open Access Journals (Sweden)

    Guanzhou Chen

    2018-05-01

    Full Text Available Scene classification, aiming to identify the land-cover categories of remotely sensed image patches, is now a fundamental task in the remote sensing image analysis field. Deep-learning-model-based algorithms are widely applied in scene classification and achieve remarkable performance, but these high-level methods are computationally expensive and time-consuming. Consequently, in this paper, we introduce a knowledge distillation framework, currently a mainstream model compression method, into remote sensing scene classification to improve the performance of smaller and shallower network models. Our knowledge distillation training method makes the high-temperature softmax output of a small and shallow student model match that of a large and deep teacher model. In our experiments, we evaluate the knowledge distillation training method for remote sensing scene classification on four public datasets: the AID dataset, UCMerced dataset, NWPU-RESISC dataset, and EuroSAT dataset. Results show that our proposed training method was effective and increased overall accuracy (by 3% in the AID experiments, 5% in the UCMerced experiments, and 1% in the NWPU-RESISC and EuroSAT experiments) for small and shallow models. We further explored the performance of the student model on small and unbalanced datasets. Our findings indicate that knowledge distillation can improve the performance of small network models on datasets with lower spatial resolution images, numerous categories, as well as fewer training samples.
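
    The training objective referred to above is the standard knowledge-distillation loss, which can be sketched as follows; the temperature, weighting and toy tensors are illustrative, and the actual teacher/student architectures are defined elsewhere in the paper.

```python
# Standard knowledge-distillation loss (soft teacher targets + hard labels).
# Tensors below are toy placeholders for a batch of 8 samples, 10 scene classes.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.5):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                                 # rescale so gradients match the hard loss
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1.0 - alpha) * hard

student_logits = torch.randn(8, 10, requires_grad=True)
teacher_logits = torch.randn(8, 10)
targets = torch.randint(0, 10, (8,))
print(distillation_loss(student_logits, teacher_logits, targets))
```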

  20. Fundamental remote sensing science research program. Part 1: Scene radiation and atmospheric effects characterization project

    Science.gov (United States)

    Murphy, R. E.; Deering, D. W.

    1984-01-01

    Brief articles summarizing the status of research in the scene radiation and atmospheric effect characterization (SRAEC) project are presented. Research conducted within the SRAEC program is focused on the development of empirical characterizations and mathematical process models which relate the electromagnetic energy reflected or emitted from a scene to the biophysical parameters of interest.