WorldWideScience

Sample records for provide additional visualizations

  1. Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture

    Science.gov (United States)

    Turner, Alan E.; Crow, Vernon L.; Payne, Deborah A.; Hetzler, Elizabeth G.; Cook, Kristin A.; Cowley, Wendy E.

    2015-06-30

    Data visualization methods, data visualization devices, data visualization apparatuses, and articles of manufacture are described according to some aspects. In one aspect, a data visualization method includes accessing a plurality of initial documents at a first moment in time, first processing the initial documents providing processed initial documents, first identifying a plurality of first associations of the initial documents using the processed initial documents, generating a first visualization depicting the first associations, accessing a plurality of additional documents at a second moment in time after the first moment in time, second processing the additional documents providing processed additional documents, second identifying a plurality of second associations of the additional documents and at least some of the initial documents, wherein the second identifying comprises identifying using the processed initial documents and the processed additional documents, and generating a second visualization depicting the second associations.
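
    The method reads as a two-stage pipeline: process the documents available at a first time, derive and visualize their associations, then fold in documents arriving later and re-derive associations across both sets. The Python sketch below is only an illustrative stand-in under stated assumptions (the example documents, the tokenization used as "processing", and the shared-term count used as the association measure are all hypothetical), not the patented method.

```python
# Minimal sketch of the two-stage workflow described above (hypothetical
# processing and association measures; not the patented method).
from itertools import combinations

def process(docs):
    """Stand-in for 'processing': tokenize each document into a lowercase term set."""
    return {doc_id: set(text.lower().split()) for doc_id, text in docs.items()}

def associations(processed, min_shared=2):
    """Stand-in association: document pairs sharing at least `min_shared` terms."""
    assoc = {}
    for a, b in combinations(sorted(processed), 2):
        shared = len(processed[a] & processed[b])
        if shared >= min_shared:
            assoc[(a, b)] = shared
    return assoc

def visualize(assoc, label):
    """Stand-in for 'generating a visualization': print a weighted edge list."""
    print(label)
    for (a, b), weight in sorted(assoc.items(), key=lambda kv: -kv[1]):
        print(f"  {a} -- {b} (shared terms: {weight})")

# First moment in time: initial documents are processed and visualized.
initial = {"d1": "data visualization methods and devices",
           "d2": "visualization devices and articles of manufacture"}
processed = process(initial)
visualize(associations(processed), "t1 associations")

# Second moment in time: additional documents arrive; associations are
# re-derived using the already-processed initial documents plus the new ones.
additional = {"d3": "methods for visualization of additional documents"}
processed.update(process(additional))
visualize(associations(processed), "t2 associations (initial + additional)")
```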

  2. Kinesthetic Imagery Provides Additive Benefits to Internal Visual Imagery on Slalom Task Performance.

    Science.gov (United States)

    Callow, Nichola; Jiang, Dan; Roberts, Ross; Edwards, Martin G

    2017-02-01

    Recent brain imaging research demonstrates that the use of internal visual imagery (IVI) or kinesthetic imagery (KIN) activates common and distinct brain areas. In this paper, we argue that combining the imagery modalities (IVI and KIN) will lead to a richer cognitive representation (with more brain areas activated), and that this will produce better slalom-based motor performance than using IVI alone. To examine this assertion, we randomly allocated 56 participants to one of three groups: IVI, IVI and KIN, or a math control group. Participants performed a slalom-based driving task in a driving simulator, with average lap time used as a measure of performance. Results revealed that the IVI and KIN group achieved significantly quicker lap times than the IVI and control groups. The discussion offers a theoretical account of why the combination of imagery modalities might facilitate performance, with links made to the cognitive neuroscience literature and applied practice.

  3. Does visual impairment lead to additional disability in adults with intellectual disabilities?

    Science.gov (United States)

    Evenhuis, H M; Sjoukes, L; Koot, H M; Kooijman, A C

    2009-01-01

    This study addresses the question to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID). In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with observant-based questionnaires, prior to expert assessment of visual function. With linear regression analysis, the percentage of variance explained by levels of visual function was calculated for the total population and per ID level. A total of 107/269 participants were visually impaired or blind (WHO criteria). On top of the decrease associated with ID, visual impairment significantly decreased daily living skills, communication and language, and recognition/communication. Visual impairment did not cause more self-absorbed and withdrawn behaviour or anxiety. Peculiar looking habits correlated with visual impairment and not with ID. In the groups with moderate and severe ID this effect seems stronger than in the group with profound ID. Although ID alone impairs daily functioning, visual impairment diminishes daily functioning even more. Timely detection and treatment or rehabilitation of visual impairment may positively influence daily functioning, language development, initiative and persistence, social skills, communication skills and insecure movement.

  4. Photoreceptor change and visual outcome after idiopathic epiretinal membrane removal with or without additional internal limiting membrane peeling.

    Science.gov (United States)

    Ahn, Seong Joon; Ahn, Jeeyun; Woo, Se Joon; Park, Kyu Hyung

    2014-01-01

    To compare the postoperative photoreceptor status and visual outcome after epiretinal membrane removal with or without additional internal limiting membrane (ILM) peeling. Medical records of 40 eyes from 37 patients undergoing epiretinal membrane removal with residual ILM peeling (additional ILM peeling group) and 69 eyes from 65 patients undergoing epiretinal membrane removal without additional ILM peeling (no additional peeling group) were reviewed. The lengths of defects in the cone outer segment tips, inner segment/outer segment junction, and external limiting membrane lines were measured using spectral domain optical coherence tomography images of the fovea before and at 1, 3, 6, and 12 months after surgery. Cone outer segment tips and inner segment/outer segment junction line defects were most severe at 1 month postoperatively and gradually recovered by 12 months. The cone outer segment tips line defect in the additional ILM peeling group was significantly greater than that in the no additional peeling group at 1 month postoperatively (P = 0.006), and best-corrected visual acuity was significantly worse in that group at the same time point (P = 0.001). There were no significant differences between the two groups in defect size or best-corrected visual acuity at subsequent visits, or in recurrence rates. Patients who received epiretinal membrane surgery without additional ILM peeling showed better visual and anatomical outcomes than those with additional ILM peeling at 1 month postoperatively. However, surgical outcomes were comparable between the two groups thereafter. In terms of visual outcome and photoreceptor integrity, additional ILM peeling may not be an essential procedure.

  5. Does visual impairment lead to additional disability in adults with intellectual disabilities?

    NARCIS (Netherlands)

    Sjoukes, L.; Koot, H. M.; Kooijman, A. C.; Evenhuis, H.

    This study addresses the question to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID). In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with observant-based

  6. Does visual impairment lead to additional disability in adults with intellectual disabilities?

    NARCIS (Netherlands)

    Evenhuis, H.M.; Sjoukes, L.; Koot, H.M.; Kooijman, A.C.

    2009-01-01

    Background: This study addresses the question to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID). Method: In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with

  7. A Review of Research on the Literacy of Students with Visual Impairments and Additional Disabilities

    Science.gov (United States)

    Parker, Amy T.; Pogrund, Rona L.

    2009-01-01

    Research on the development of literacy in children with visual impairments and additional disabilities is minimal even though these children make up approximately 65% of the population of children with visual impairments. This article reports on emerging themes that were explored after a review of the literature revealed nine literacy studies…

  8. Energy Data Visualization Requires Additional Approaches to Continue to be Relevant in a World with Greater Low-Carbon Generation

    International Nuclear Information System (INIS)

    Grant Wilson, I. A.

    2016-01-01

    The hypothesis described in this article proposes that energy visualization diagrams commonly used need additional changes to continue to be relevant in a world with greater low-carbon generation. The diagrams that display national energy data are influenced by the properties of the type of energy being displayed, which in most cases has historically meant fossil fuels, nuclear fuels, or hydro. As many energy systems throughout the world increase their use of electricity from wind- or solar-based renewables, a more granular display of energy data in the time domain is required. This article also introduces the shared axes energy diagram that provides a simple and powerful way to compare the scale and seasonality of the demands and supplies of an energy system. This aims to complement, rather than replace existing diagrams, and has an additional benefit of promoting a whole systems approach to energy systems, as differing energy vectors, such as natural gas, transport fuels, and electricity, can all be displayed together. This, in particular, is useful to both policy makers and to industry, to build a visual foundation for a whole systems narrative, which provides a basis for discussion of the synergies and opportunities across and between different energy vectors and demands. The diagram’s ability to wrap a sense of scale around a whole energy system in a simple way is thought to explain its growing popularity.
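
    The shared-axes idea can be sketched in a few lines: plot every energy vector against one shared time axis and one shared power axis with a common zero baseline, so relative scale and seasonality become directly comparable. The Python/matplotlib sketch below uses entirely synthetic demand data; the vector names, magnitudes and seasonal shapes are assumptions for illustration, not figures from the article.

```python
# Illustrative shared-axes energy diagram with synthetic data (not article data).
import numpy as np
import matplotlib.pyplot as plt

days = np.arange(365)
# Hypothetical daily demands in GWh: gas with a strong winter peak, electricity
# with milder seasonality, transport fuels nearly flat.
gas = 800 + 500 * np.cos(2 * np.pi * days / 365)
electricity = 600 + 150 * np.cos(2 * np.pi * days / 365)
transport = 500 + 30 * np.sin(2 * np.pi * days / 365)

fig, ax = plt.subplots(figsize=(8, 4))
for name, series in [("Natural gas", gas),
                     ("Electricity", electricity),
                     ("Transport fuels", transport)]:
    ax.plot(days, series, label=name)

ax.set_xlabel("Day of year")
ax.set_ylabel("Daily energy demand (GWh)")  # one shared axis for all vectors
ax.set_ylim(bottom=0)                       # shared zero baseline keeps scale honest
ax.legend()
ax.set_title("Shared-axes comparison of energy vectors (synthetic data)")
plt.tight_layout()
plt.show()
```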

  9. Energy Data Visualization Requires Additional Approaches to Continue to be Relevant in a World with Greater Low-Carbon Generation

    Energy Technology Data Exchange (ETDEWEB)

    Grant Wilson, I. A., E-mail: grant.wilson@sheffield.ac.uk [Environmental and Energy Engineering Group, Department of Chemical and Biological Engineering, The University of Sheffield, Sheffield (United Kingdom)

    2016-08-31

    The hypothesis described in this article proposes that energy visualization diagrams commonly used need additional changes to continue to be relevant in a world with greater low-carbon generation. The diagrams that display national energy data are influenced by the properties of the type of energy being displayed, which in most cases has historically meant fossil fuels, nuclear fuels, or hydro. As many energy systems throughout the world increase their use of electricity from wind- or solar-based renewables, a more granular display of energy data in the time domain is required. This article also introduces the shared axes energy diagram that provides a simple and powerful way to compare the scale and seasonality of the demands and supplies of an energy system. This aims to complement, rather than replace existing diagrams, and has an additional benefit of promoting a whole systems approach to energy systems, as differing energy vectors, such as natural gas, transport fuels, and electricity, can all be displayed together. This, in particular, is useful to both policy makers and to industry, to build a visual foundation for a whole systems narrative, which provides a basis for discussion of the synergies and opportunities across and between different energy vectors and demands. The diagram’s ability to wrap a sense of scale around a whole energy system in a simple way is thought to explain its growing popularity.

  10. An interdisciplinary visual team in an acute and sub-acute stroke unit: Providing assessment and early rehabilitation.

    Science.gov (United States)

    Norup, Anne; Guldberg, Anne-Mette; Friis, Claus Radmer; Deurell, Eva Maria; Forchhammer, Hysse Birgitte

    2016-07-15

    To describe the work of an interdisciplinary visual team in a stroke unit providing early identification and assessment of patients with visual symptoms, and secondly to investigate frequency, type of visual deficits after stroke and self-evaluated impact on everyday life after stroke. For a period of three months, all stroke patients with visual or visuo-attentional deficits were registered, and data concerning etiology, severity and localization of the stroke and initial visual symptoms were registered. One month after discharge patients were contacted for follow-up. Of 349 acute stroke admissions, 84 (24.1%) had visual or visuo-attentional deficits initially. Of these 84 patients, informed consent was obtained from 22 patients with a mean age of 67.7 years (SD 10.1), and the majority was female (59.1%). Based on the initial neurological examination, 45.4% had some kind of visual field defect, 27.2% had some kind of oculomotor nerve palsy, and about 31.8% had some kind of inattention or visual neglect. The patients were contacted for a phone-based follow-up one month after discharge, where 85.7% reported changes in their vision since their stroke. In this consecutive sample, a quarter of all stroke patients had visual or visuo-attentional deficits initially. This emphasizes how professionals should have increased awareness of the existence of such deficits after stroke in order to provide the necessary interdisciplinary assessment and rehabilitation.

  11. JNC's experience of complementary accesses provided by the additional protocol

    International Nuclear Information System (INIS)

    Miura, Yasushi

    2001-01-01

    JNC (Japan Nuclear Cycle Development Institute) examined problems in implementing the Additional Protocol to the Japan/IAEA Safeguards Agreement, together with the Government of Japan and the International Atomic Energy Agency, through trials performed at the Oarai Engineering Center before the Protocol entered into force. On December 16th 1999 the Additional Protocol entered into force, and last January JNC provided the first JNC site information to STA; our Government then provided the information for all of Japan to the IAEA last June. In January of this year, we sent the updated additional information to MEXT (Ministry of Education, Culture, Sports, Science and Technology). The first Complementary Access in Japan, which was also the first for JNC, was carried out at the JNC Ningyo-Toge Environmental Engineering Center at the end of last November. Since then, over about one year, we have experienced more than 10 Complementary Accesses, especially at the Tokai works and Ningyo-Toge. JNC's experience of these Complementary Accesses will be introduced. (author)

  12. Visual Multipoles And The Assessment Of Visual Sensitivity To Displayed Images

    Science.gov (United States)

    Klein, Stanley A.

    1989-08-01

    The contrast sensitivity function (CSF) is widely used to specify the sensitivity of the visual system. Each point of the CSF specifies the amount of contrast needed to detect a sinusoidal grating of a given spatial frequency. This paper describes a set of five mathematically related visual patterns, called "multipoles," that should replace the CSF for measuring visual performance. The five patterns (ramp, edge, line, dipole and quadrupole) are localized in space rather than being spread out as sinusoidal gratings. The multipole sensitivity of the visual system provides an alternative characterization that complements the CSF in addition to offering several advantages. This paper provides an overview of the properties and uses of the multipole stimuli. This paper is largely a summary of several unpublished manuscripts with excerpts from them. Derivations and full references are omitted here. Please write me if you would like the full manuscripts.
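
    One way the five patterns can be related mathematically is by successive spatial integration and differentiation of a localized "line" profile. The Python sketch below illustrates that relationship using a Gaussian line; the Gaussian choice, the spatial scale, and the normalization are assumptions for illustration and may differ from Klein's exact stimulus definitions.

```python
# Generate 1-D luminance profiles for the five multipole-like patterns by
# integrating/differentiating a Gaussian "line" (illustrative assumption only).
import numpy as np

x = np.linspace(-2.0, 2.0, 2001)        # hypothetical spatial axis, deg of visual angle
dx = x[1] - x[0]
sigma = 0.1                             # spatial spread of the line, deg (assumed)

line = np.exp(-x**2 / (2 * sigma**2))   # localized "line" profile
edge = np.cumsum(line) * dx             # integral of the line   -> edge
ramp = np.cumsum(edge) * dx             # integral of the edge   -> ramp
dipole = np.gradient(line, dx)          # derivative of the line   -> dipole
quadrupole = np.gradient(dipole, dx)    # derivative of the dipole -> quadrupole

profiles = {"ramp": ramp, "edge": edge, "line": line,
            "dipole": dipole, "quadrupole": quadrupole}
for name, profile in profiles.items():
    scaled = profile / np.abs(profile).max()   # unit peak contrast for comparison
    print(f"{name:>10}: peak-to-peak contrast = {np.ptp(scaled):.2f}")
```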

  13. Mental Imagery and Visual Working Memory

    OpenAIRE

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance ...

  14. Does visual impairment lead to additional disability in adults with intellectual disabilities? A cross-sectional study

    NARCIS (Netherlands)

    Evenhuis, H.M.; Sjoukes, L.; Koot, H.M.; Kooijman, A.C.

    2009-01-01

    Background: This study addresses the question to what extent visual impairment leads to additional disability in adults with intellectual disabilities (ID). Method: In a multi-centre cross-sectional study of 269 adults with mild to profound ID, social and behavioural functioning was assessed with

  15. Build platform that provides mechanical engagement with additive manufacturing prints

    Science.gov (United States)

    Elliott, Amelia M.

    2018-03-06

    A build platform and methods of fabricating an article with such a platform in an extrusion-type additive manufacturing machine are provided. A platform body (202) includes features (204) that extend outward from the body (202). The features (204) define protrusive areas (206) and recessive areas (208) that cooperate to mechanically engage the extruded material that forms the initial layers (220) of an article when the article is being fabricated by a nozzle (12) of the additive manufacturing machine (10).

  16. Interactions between visual working memory and visual attention

    NARCIS (Netherlands)

    Olivers, C.N.L.

    2008-01-01

    Visual attention is the collection of mechanisms by which relevant visual information is selected, and irrelevant visual information is ignored. Visual working memory is the mechanism by which relevant visual information is retained, and irrelevant information is suppressed. In addition to this

  17. Sonification and haptic feedback in addition to visual feedback enhances complex motor task learning.

    Science.gov (United States)

    Sigrist, Roland; Rauter, Georg; Marchal-Crespo, Laura; Riener, Robert; Wolf, Peter

    2015-03-01

    Concurrent augmented feedback has been shown to be less effective for learning simple motor tasks than for complex tasks. However, as mostly artificial tasks have been investigated, transfer of results to tasks in sports and rehabilitation remains unknown. Therefore, in this study, the effect of different concurrent feedback was evaluated in trunk-arm rowing. It was then investigated whether multimodal audiovisual and visuohaptic feedback are more effective for learning than visual feedback only. Naïve subjects (N = 24) trained in three groups on a highly realistic virtual reality-based rowing simulator. In the visual feedback group, the subject's oar was superimposed to the target oar, which continuously became more transparent when the deviation between the oars decreased. Moreover, a trace of the subject's trajectory emerged if deviations exceeded a threshold. The audiovisual feedback group trained with oar movement sonification in addition to visual feedback to facilitate learning of the velocity profile. In the visuohaptic group, the oar movement was inhibited by path deviation-dependent braking forces to enhance learning of spatial aspects. All groups significantly decreased the spatial error (tendency in visual group) and velocity error from baseline to the retention tests. Audiovisual feedback fostered learning of the velocity profile significantly more than visuohaptic feedback. The study revealed that well-designed concurrent feedback fosters complex task learning, especially if the advantages of different modalities are exploited. Further studies should analyze the impact of within-feedback design parameters and the transferability of the results to other tasks in sports and rehabilitation.

  18. The colorful brain: Visualization of EEG background patterns

    NARCIS (Netherlands)

    van Putten, Michel Johannes Antonius Maria

    2008-01-01

    This article presents a method to transform routine clinical EEG recordings to an alternative visual domain. The method is intended to support the classic visual interpretation of the EEG background pattern and to facilitate communication about relevant EEG characteristics. In addition, it provides

  19. Creativity, visualization abilities, and visual cognitive style.

    Science.gov (United States)

    Kozhevnikov, Maria; Kozhevnikov, Michael; Yu, Chen Jiao; Blazhenkova, Olesya

    2013-06-01

    Despite the recent evidence for a multi-component nature of both visual imagery and creativity, there have been no systematic studies on how the different dimensions of creativity and imagery might interrelate. The main goal of this study was to investigate the relationship between different dimensions of creativity (artistic and scientific) and dimensions of visualization abilities and styles (object and spatial). In addition, we compared the contributions of object and spatial visualization abilities versus corresponding styles to scientific and artistic dimensions of creativity. Twenty-four undergraduate students (12 females) were recruited for the first study, and 75 additional participants (36 females) were recruited for an additional experiment. Participants were administered a number of object and spatial visualization abilities and style assessments as well as a number of artistic and scientific creativity tests. The results show that object visualization relates to artistic creativity and spatial visualization relates to scientific creativity, while both are distinct from verbal creativity. Furthermore, our findings demonstrate that style predicts corresponding dimension of creativity even after removing shared variance between style and visualization ability. The results suggest that styles might be a more ecologically valid construct in predicting real-life creative behaviour, such as performance in different professional domains. © 2013 The British Psychological Society.

  20. Perceived visual informativeness (PVI): construct and scale development to assess visual information in printed materials.

    Science.gov (United States)

    King, Andy J; Jensen, Jakob D; Davis, LaShara A; Carcioppolo, Nick

    2014-01-01

    There is a paucity of research on the visual images used in health communication messages and campaign materials. Even though many studies suggest further investigation of these visual messages and their features, few studies provide specific constructs or assessment tools for evaluating the characteristics of visual messages in health communication contexts. The authors conducted 2 studies to validate a measure of perceived visual informativeness (PVI), a message construct assessing visual messages presenting statistical or indexical information. In Study 1, a 7-item scale was created that demonstrated good internal reliability (α = .91), as well as convergent and divergent validity with related message constructs such as perceived message quality, perceived informativeness, and perceived attractiveness. PVI also converged with a preference for visual learning but was unrelated to a person's actual vision ability. In addition, PVI exhibited concurrent validity with a number of important constructs including perceived message effectiveness, decisional satisfaction, and three key public health theory behavior predictors: perceived benefits, perceived barriers, and self-efficacy. Study 2 provided more evidence that PVI is an internally reliable measure and demonstrates that PVI is a modifiable message feature that can be tested in future experimental work. PVI provides an initial step to assist in the evaluation and testing of visual messages in campaign and intervention materials promoting informed decision making and behavior change.

  1. Visualizing dipole radiation

    International Nuclear Information System (INIS)

    Girwidz, Raimund V

    2016-01-01

    The Hertzian dipole is fundamental to the understanding of dipole radiation. It provides basic insights into the genesis of electromagnetic waves and lays the groundwork for an understanding of half-wave antennae and other types. Equations for the electric and magnetic fields of such a dipole can be derived mathematically. However these are very abstract descriptions. Interpreting these equations and understanding travelling electromagnetic waves are highly limited in that sense. Visualizations can be a valuable supplement that vividly present properties of electromagnetic fields and their propagation. The computer simulation presented below provides additional instructive illustrations for university lectures on electrodynamics, broadening the experience well beyond what is possible with abstract equations. This paper refers to a multimedia program for PCs, tablets and smartphones, and introduces and discusses several animated illustrations. Special features of multiple representations and combined illustrations will be used to provide insight into spatial and temporal characteristics of field distributions—which also draw attention to the flow of energy. These visualizations offer additional information, including the relationships between different representations that promote deeper understanding. Finally, some aspects are also illustrated that often remain unclear in lectures. (paper)
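
    For orientation, the standard textbook far-field (radiation-zone) expressions for a Hertzian dipole with moment p(t) = p_0 cos(ωt) along the z-axis are reproduced below; these are the usual textbook results that such animations visualize, not equations taken from the article itself.

\[
E_\theta(r,\theta,t) \;=\; -\,\frac{\mu_0 \, p_0 \, \omega^2}{4\pi}\,\frac{\sin\theta}{r}\,
\cos\!\big[\omega\,(t - r/c)\big],
\qquad
B_\phi \;=\; \frac{E_\theta}{c},
\]
\[
\langle \mathbf{S} \rangle \;=\; \frac{\mu_0 \, p_0^2 \, \omega^4}{32\pi^2 c}\,
\frac{\sin^2\theta}{r^2}\,\hat{\mathbf{r}},
\qquad
P_{\mathrm{rad}} \;=\; \frac{\mu_0 \, p_0^2 \, \omega^4}{12\pi c}.
\]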

  2. Visualization of the NASA ICON mission in 3d

    Science.gov (United States)

    Mendez, R. A., Jr.; Immel, T. J.; Miller, N.

    2016-12-01

    The ICON Explorer mission (http://icon.ssl.berkeley.edu) will provide several data products for the atmosphere and ionosphere after its launch in 2017. This project will support the mission by investigating the capability of these tools for visualization of current and predicted observatory characteristics and data acquisition. Visualization of this mission can be accomplished using tools like Google Earth or CesiumJS, with additional assistance from Java or Python. Ideally we will bring this visualization into people's homes without the need for additional software. The path to launching a standalone website, building this environment, and assembling a full toolkit will be discussed. Eventually, the initial work could lead to downloadable visualization packages for mission demonstration or science visualization.
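
    As a sketch of the "no additional software" route, the short Python script below writes a KML ground track that Google Earth, or a CesiumJS viewer configured with a KML data source, can load directly in a browser or desktop tool. The orbit parameters, track shape, and output file name are hypothetical placeholders, not ICON ephemeris or mission tooling.

```python
# Write an illustrative orbit ground track as KML (placeholder values only).
import math

def ground_track(n_points=90, inclination_deg=27.0, altitude_km=575.0):
    """Very rough circular-orbit ground track; illustrative, not ICON ephemeris."""
    for i in range(n_points):
        phase = 2 * math.pi * i / n_points
        lat = inclination_deg * math.sin(phase)
        lon = (360.0 * i / n_points + 10.0 * phase) % 360.0 - 180.0
        yield lon, lat, altitude_km * 1000.0

coords = " ".join(f"{lon:.3f},{lat:.3f},{alt:.0f}" for lon, lat, alt in ground_track())
kml = f"""<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2">
  <Placemark>
    <name>Illustrative orbit track</name>
    <LineString>
      <altitudeMode>absolute</altitudeMode>
      <coordinates>{coords}</coordinates>
    </LineString>
  </Placemark>
</kml>"""

with open("track.kml", "w") as output:   # hypothetical output file name
    output.write(kml)
```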

  3. Exploring the potential of neurophysiological measures for user-adaptive visualization

    OpenAIRE

    Tak, S.; Brouwer, A.M.; Toet, A.; Erp, J.B.F. van

    2013-01-01

    User-adaptive visualization aims to adapt visualized information to the needs and characteristics of the individual user. Current approaches deploy user personality factors, user behavior and preferences, and visual scanning behavior to achieve this goal. We argue that neurophysiological data provide valuable additional input for user-adaptive visualization systems since they contain a wealth of objective information about user characteristics. The combination of neurophysiological data with ...

  4. Independent and additive repetition priming of motion direction and color in visual search.

    Science.gov (United States)

    Kristjánsson, Arni

    2009-03-01

    Priming of visual search for Gabor patch stimuli, varying in color and local drift direction, was investigated. The task relevance of each feature varied between the different experimental conditions compared. When the target-defining dimension was color, a large effect of color repetition was seen, as well as a smaller effect of the repetition of motion direction. The opposite priming pattern was seen when motion direction defined the target: this time the effect of motion direction repetition was larger than that of color repetition. Finally, when neither was task relevant, and the target-defining dimension was the spatial frequency of the Gabor patch, priming was seen for repetition of both color and motion direction, but the effects were smaller than in the previous two conditions. These results show that features do not necessarily have to be task relevant for priming to occur. There is little interaction between priming following repetition of color and motion; these two features show independent and additive priming effects, most likely reflecting that the two features are processed at separate processing sites in the nervous system, consistent with previous findings from neuropsychology and neurophysiology. The implications of the findings for theoretical accounts of priming in visual search are discussed.

  5. An Empirical Study on Using Visual Embellishments in Visualization.

    Science.gov (United States)

    Borgo, R; Abdul-Rahman, A; Mohamed, F; Grant, P W; Reppa, I; Floridi, L; Chen, Min

    2012-12-01

    In written and spoken communications, figures of speech (e.g., metaphors and synecdoche) are often used as an aid to help convey abstract or less tangible concepts. However, the benefits of using rhetorical illustrations or embellishments in visualization have so far been inconclusive. In this work, we report an empirical study to evaluate hypotheses that visual embellishments may aid memorization, visual search and concept comprehension. One major departure from related experiments in the literature is that we make use of a dual-task methodology in our experiment. This design offers an abstraction of typical situations where viewers do not have their full attention focused on visualization (e.g., in meetings and lectures). The secondary task introduces "divided attention", and makes the effects of visual embellishments more observable. In addition, it also serves as additional masking in memory-based trials. The results of this study show that visual embellishments can help participants better remember the information depicted in visualization. On the other hand, visual embellishments can have a negative impact on the speed of visual search. The results show a complex pattern as to the benefits of visual embellishments in helping participants grasp key concepts from visualization.

  6. Dynamic Data Visualization with Weave and Brain Choropleths.

    Directory of Open Access Journals (Sweden)

    Dianne Patterson

    Full Text Available This article introduces the neuroimaging community to the dynamic visualization workbench, Weave (https://www.oicweave.org/), and a set of enhancements to allow the visualization of brain maps. The enhancements comprise a set of brain choropleths and the ability to display these as stacked slices, accessible with a slider. For the first time, this allows the neuroimaging community to take advantage of the advanced tools already available for exploring geographic data. Our brain choropleths are modeled after widely used geographic maps but this mashup of brain choropleths with extant visualization software fills an important neuroinformatic niche. To date, most neuroinformatic tools have provided online databases and atlases of the brain, but not good ways to display the related data (e.g., behavioral, genetic, medical, etc.). The extension of the choropleth to brain maps allows us to leverage general-purpose visualization tools for concurrent exploration of brain images and related data. Related data can be represented as a variety of tables, charts and graphs that are dynamically linked to each other and to the brain choropleths. We demonstrate that the simplified region-based analyses that underlie choropleths can provide insights into neuroimaging data comparable to those achieved by using more conventional methods. In addition, the interactive interface facilitates additional insights by allowing the user to filter, compare, and drill down into the visual representations of the data. This enhanced data visualization capability is useful during the initial phases of data analysis and the resulting visualizations provide a compelling way to publish data as an online supplement to journal articles.

  7. Understanding Visual Novel As Artwork of Visual Communication Design

    OpenAIRE

    Dendi Pratama; Winny Gunarti; Taufiq Akbar

    2017-01-01

    Visual Novel is a kind of audiovisual game that offers visual strength through its narrative and visual characters. The developer community of the Visual Novel (VN) Project Indonesia indicates that only a limited number of local game developers produce Indonesian Visual Novels. In addition, Indonesian Visual Novel production has been strongly influenced by the style of anime and manga from Japan. Visual Novel is nevertheless part of the potential of creative-industry products. The study is to formulate the problem,...

  8. Understanding Visual Novel as Artwork of Visual Communication Design

    OpenAIRE

    Pratama, Dendi

    2017-01-01

    Visual Novel is a kind of audiovisual game that offers visual strength through its narrative and visual characters. The developer community of the Visual Novel (VN) Project Indonesia indicates that only a limited number of local game developers produce Indonesian Visual Novels. In addition, Indonesian Visual Novel production has been strongly influenced by the style of anime and manga from Japan. Visual Novel is nevertheless part of the potential of creative-industry products. The study is to formulate the problem,...

  9. "You Get to Be Yourself": Visual Arts Programs, Identity Construction and Learners of English as an Additional Language

    Science.gov (United States)

    Wielgosz, Meg; Molyneux, Paul

    2015-01-01

    Students learning English as an additional language (EAL) in Australian schools frequently struggle with the cultural and linguistic demands of the classroom while concurrently grappling with issues of identity and belonging. This article reports on an investigation of the role primary school visual arts programs, distinct programs with a…

  10. Development of Visual CINDER Code with Visual C#.NET

    International Nuclear Information System (INIS)

    Kim, Oyeon

    2016-01-01

    The CINDER code, CINDER'90 or CINDER2008, integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies between steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also show that the wrapper can make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionality for radiation shielding and prevention studies.

  11. Development of Visual CINDER Code with Visual C#.NET

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Oyeon [Institute for Modeling and Simulation Convergence, Daegu (Korea, Republic of)

    2016-10-15

    The CINDER code, CINDER'90 or CINDER2008, integrated with the Monte Carlo code MCNPX, is widely used to calculate the inventory of nuclides in irradiated materials. The MCNPX code provides decay processes to the particle transport scheme that traditionally only covered prompt processes. The integration schemes serve not only the reactor community (MCNPX burnup) but also the accelerator community (residual production information). The big benefit of providing these options lies in the easy cross comparison of the transmutation codes, since the calculations are based on exactly the same material, neutron flux and isotope production/destruction inputs. However, it is frustratingly cumbersome to use. In addition, multiple human interventions may increase the possibility of making errors. The number of significant digits in the input data varies between steps, which may cause big errors for highly nonlinear problems. Thus, it is worthwhile to find a new way to wrap all the codes and procedures in one consistent package which can provide ease of use. The Visual CINDER code development is underway with the Visual C#.NET framework. It provides a few benefits for atomic transmutation simulation with the CINDER code. A few interesting and useful properties of the Visual C#.NET framework are introduced. We also show that the wrapper can make the simulation accurate for highly nonlinear transmutation problems and also increase the possibility of directly combining the radiation transport code MCNPX with the CINDER code. Direct combination of CINDER with MCNPX in a wrapper will provide more functionality for radiation shielding and prevention studies.
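
    The wrapper concept can be sketched generically: run the transport step, hand its flux output to the transmutation step at full precision, and remove the manual copy/paste in between. The paper's wrapper is written in Visual C#.NET; the Python sketch below only illustrates that chaining pattern, and the executable names, arguments, and file names are placeholders, not real MCNPX or CINDER command lines.

```python
# Generic sketch of chaining a transport run and a transmutation run through one
# wrapper (all commands and file names are placeholders, not real MCNPX/CINDER usage).
import subprocess

TRANSPORT_CMD = ["transport_code_placeholder", "transport_input_placeholder"]              # hypothetical
TRANSMUTATION_CMD = ["transmutation_code_placeholder", "transmutation_input_placeholder"]  # hypothetical

def run_step(cmd):
    """Run one step and fail loudly, replacing a manual hand-off between codes."""
    print("running:", " ".join(cmd))
    subprocess.run(cmd, check=True)

def write_transmutation_input(fluxes, path="transmutation_input_placeholder"):
    """Write group fluxes at fixed, generous precision; truncating significant
    digits between codes is one of the error sources a wrapper can remove."""
    with open(path, "w") as handle:
        for group, value in enumerate(fluxes):
            handle.write(f"{group:4d} {value:.12e}\n")

if __name__ == "__main__":
    run_step(TRANSPORT_CMD)                    # step 1: particle transport
    fluxes = [1.23e14, 4.56e13, 7.89e12]       # placeholder for parsed flux output
    write_transmutation_input(fluxes)          # step 2: consistent, full-precision hand-off
    run_step(TRANSMUTATION_CMD)                # step 3: transmutation/inventory calculation
```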

  12. Perceptual learning in children with visual impairment improves near visual acuity.

    Science.gov (United States)

    Huurneman, Bianca; Boonstra, F Nienke; Cox, Ralf F A; van Rens, Ger; Cillessen, Antonius H N

    2013-09-17

    This study investigated whether visual perceptual learning can improve near visual acuity and reduce foveal crowding effects in four- to nine-year-old children with visual impairment. Participants were 45 children with visual impairment and 29 children with normal vision. Children with visual impairment were divided into three groups: a magnifier group (n = 12), a crowded perceptual learning group (n = 18), and an uncrowded perceptual learning group (n = 15). Children with normal vision were also divided into three groups, but were measured only at baseline. Dependent variables were single near visual acuity (NVA), crowded NVA, LH line 50% crowding NVA, number of trials, accuracy, performance time, number of small errors, and number of large errors. Children with visual impairment trained for six weeks, twice per week, for 30 minutes per session (12 training sessions). After training, children showed significant improvement of NVA in addition to specific improvements on the training task. The crowded perceptual learning group showed the largest acuity improvement (1.7 logMAR lines on the crowded chart), and children in the crowded perceptual learning group showed improvements on all NVA charts. Children with visual impairment benefit from perceptual training. While task-specific improvements were observed in all training groups, transfer to crowded NVA was largest in the crowded perceptual learning group. To our knowledge, this is the first study to provide evidence for the improvement of NVA by perceptual learning in children with visual impairment. (http://www.trialregister.nl number, NTR2537.)

  13. Development of driver’s assistant system of additional visual information of blind areas for Gazelle Next

    Science.gov (United States)

    Makarov, V.; Korelin, O.; Koblyakov, D.; Kostin, S.; Komandirov, A.

    2018-02-01

    The article is devoted to the development of an Advanced Driver Assistance System (ADAS) for the GAZelle NEXT car. The project aims to develop a visual information system for the driver integrated into the windshield pillars. The developed system implements the following functions: assistance in maneuvering and parking; recognition of road signs; warning the driver about the possibility of a frontal collision; monitoring of "blind" zones; "transparent" vision through the windshield pillars, widening the field of view behind them; visual and audible information about the traffic situation; lane keeping and lane departure monitoring; monitoring of the driver's condition; a navigation system; and an all-round (surround) view. The arrangement and operation of the sensors of the developed visual information system are described, aspects of implementing the system on a prototype vehicle are considered, and possible changes to the interior and dashboard of the car are given. The intended results of the implementation are improved informing of the driver about the surrounding environment and the development of an ergonomic interior for this system within the new functional cabin of the GAZelle NEXT vehicle.

  14. Functional magnetic resonance imaging by visual stimulation

    International Nuclear Information System (INIS)

    Nishimura, Yukiko; Negoro, Kiyoshi; Morimatsu, Mitsunori; Hashida, Masahiro

    1996-01-01

    We evaluated functional magnetic resonance images obtained in 8 healthy subjects in response to visual stimulation using a conventional clinical magnetic resonance imaging system with multi-slice spin-echo echo planar imaging. Activation in the visual cortex was clearly demonstrated by the multi-slice experiment with a task-related change in signal intensity. In addition to the primary visual cortex, other areas were also activated by a complicated visual task. Multi-slice spin-echo echo planar imaging offers high temporal resolution and allows the three-dimensional analysis of brain function. Functional magnetic resonance imaging provides a useful noninvasive method of mapping brain function. (author)

  15. Audio-visual onset differences are used to determine syllable identity for ambiguous audio-visual stimulus pairs.

    Science.gov (United States)

    Ten Oever, Sanne; Sack, Alexander T; Wheat, Katherine L; Bien, Nina; van Atteveldt, Nienke

    2013-01-01

    Content and temporal cues have been shown to interact during audio-visual (AV) speech identification. Typically, the most reliable unimodal cue is used more strongly to identify specific speech features; however, visual cues are only used if the AV stimuli are presented within a certain temporal window of integration (TWI). This suggests that temporal cues denote whether unimodal stimuli belong together, that is, whether they should be integrated. It is not known whether temporal cues also provide information about the identity of a syllable. Since spoken syllables have naturally varying AV onset asynchronies, we hypothesize that for suboptimal AV cues presented within the TWI, information about the natural AV onset differences can aid in speech identification. To test this, we presented low-intensity auditory syllables concurrently with visual speech signals, and varied the stimulus onset asynchronies (SOA) of the AV pair, while participants were instructed to identify the auditory syllables. We revealed that specific speech features (e.g., voicing) were identified by relying primarily on one modality (e.g., auditory). Additionally, we showed a wide window in which visual information influenced auditory perception, that seemed even wider for congruent stimulus pairs. Finally, we found a specific response pattern across the SOA range for syllables that were not reliably identified by the unimodal cues, which we explained as the result of the use of natural onset differences between AV speech signals. This indicates that temporal cues not only provide information about the temporal integration of AV stimuli, but additionally convey information about the identity of AV pairs. These results provide a detailed behavioral basis for further neuro-imaging and stimulation studies to unravel the neurofunctional mechanisms of the audio-visual-temporal interplay within speech perception.

  16. Fine-scale features on bioreplicated decoys of the emerald ash borer provide necessary visual verisimilitude

    Science.gov (United States)

    Domingue, Michael J.; Pulsifer, Drew P.; Narkhede, Mahesh S.; Engel, Leland G.; Martín-Palma, Raúl J.; Kumar, Jayant; Baker, Thomas C.; Lakhtakia, Akhlesh

    2014-03-01

    The emerald ash borer (EAB), Agrilus planipennis, is an invasive tree-killing pest in North America. Like other buprestid beetles, it has an iridescent coloring, produced by a periodically layered cuticle whose reflectance peaks at 540 nm wavelength. The males perform a visually mediated ritualistic mating flight directly onto females poised on sunlit leaves. We attempted to evoke this behavior using artificial visual decoys of three types. To fabricate decoys of the first type, a polymer sheet coated with a Bragg-stack reflector was loosely stamped by a bioreplicating die. For decoys of the second type, a polymer sheet coated with a Bragg-stack reflector was heavily stamped by the same die and then painted green. Every decoy of these two types had an underlying black absorber layer. Decoys of the third type were produced by a rapid prototyping machine and painted green. Fine-scale features were absent on the third type. Experiments were performed in an American ash forest infested with EAB, and a European oak forest home to a similar pest, the two-spotted oak borer (TSOB), Agrilus biguttatus. When pinned to leaves, dead EAB females, dead TSOB females, and bioreplicated decoys of both types often evoked the complete ritualized flight behavior. Males also initiated approaches to the rapidly prototyped decoy, but would divert elsewhere without making contact. The attraction of the bioreplicated decoys was also demonstrated by providing a high dc voltage across the decoys that stunned and killed approaching beetles. Thus, true bioreplication with fine-scale features is necessary to fully evoke ritualized visual responses in insects, and provides an opportunity for developing insect-trapping technologies.

  17. Addition of visual noise boosts evoked potential-based brain-computer interface.

    Science.gov (United States)

    Xie, Jun; Xu, Guanghua; Wang, Jing; Zhang, Sicong; Zhang, Feng; Li, Yeping; Han, Chengcheng; Li, Lili

    2014-05-14

    Although noise has a proven beneficial role in brain function, there have been no attempts to exploit the stochastic resonance effect in neural engineering applications, especially in research on brain-computer interfaces (BCIs). In our study, a steady-state motion visual evoked potential (SSMVEP)-based BCI with periodic visual stimulation plus moderate spatiotemporal noise achieved better offline and online performance due to enhancement of periodic components in brain responses, accompanied by suppression of high harmonics. Offline results exhibited a bell-shaped, resonance-like profile, and 7-36% online performance improvements were achieved when identical visual noise was adopted for different stimulation frequencies. Using neural encoding modeling, these phenomena can be explained as noise-induced input-output synchronization in human sensory systems, which commonly possess a low-pass property. Our work demonstrates that noise can boost BCIs in addressing human needs.
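
    The bell-shaped, resonance-like offline behavior reported here is the classic signature of stochastic resonance. The Python sketch below is a toy demonstration of that mechanism only: a subthreshold sinusoid passed through a hard threshold with added Gaussian noise, where output power at the drive frequency first rises and then falls as the noise level grows. The sample rate, drive frequency, amplitude, and threshold are arbitrary assumptions, not the paper's SSMVEP stimulation or EEG model.

```python
# Toy stochastic-resonance demonstration (arbitrary parameters, not the paper's model).
import numpy as np

rng = np.random.default_rng(0)
fs, f0, duration = 500.0, 10.0, 20.0            # sample rate (Hz), drive (Hz), seconds
t = np.arange(0, duration, 1 / fs)
drive = 0.4 * np.sin(2 * np.pi * f0 * t)        # weak periodic input, below threshold
threshold = 1.0                                 # hard detection threshold

def output_power_at_drive(noise_sd, n_trials=20):
    """Average output spectral power at the drive frequency for a given noise level."""
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    bin_f0 = np.argmin(np.abs(freqs - f0))
    powers = []
    for _ in range(n_trials):
        x = drive + rng.normal(0.0, noise_sd, t.size)
        y = (x > threshold).astype(float)       # threshold "neuron" output
        spectrum = np.abs(np.fft.rfft(y - y.mean()))**2
        powers.append(spectrum[bin_f0])
    return float(np.mean(powers))

for noise_sd in (0.1, 0.3, 0.6, 1.0, 1.5, 2.5):
    print(f"noise sd = {noise_sd:4.1f} -> power at {f0:.0f} Hz: "
          f"{output_power_at_drive(noise_sd):12.1f}")
```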

  18. Evidence-Based Communication Practices for Children with Visual Impairments and Additional Disabilities: An Examination of Single-Subject Design Studies

    Science.gov (United States)

    Parker, Amy T.; Grimmett, Eric S.; Summers, Sharon

    2008-01-01

    This review examines practices for building effective communication strategies for children with visual impairments, including those with additional disabilities, that have been tested by single-subject design methodology. The authors found 30 studies that met the search criteria and grouped intervention strategies to align any evidence of the…

  19. Mental Imagery and Visual Working Memory

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory - but not iconic visual memory - can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage. PMID:22195024

  20. Mental imagery and visual working memory.

    Directory of Open Access Journals (Sweden)

    Rebecca Keogh

    Full Text Available Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  1. Mental imagery and visual working memory.

    Science.gov (United States)

    Keogh, Rebecca; Pearson, Joel

    2011-01-01

    Visual working memory provides an essential link between past and future events. Despite recent efforts, capacity limits, their genesis and the underlying neural structures of visual working memory remain unclear. Here we show that performance in visual working memory--but not iconic visual memory--can be predicted by the strength of mental imagery as assessed with binocular rivalry in a given individual. In addition, for individuals with strong imagery, modulating the background luminance diminished performance on visual working memory and imagery tasks, but not working memory for number strings. This suggests that luminance signals were disrupting sensory-based imagery mechanisms and not a general working memory system. Individuals with poor imagery still performed above chance in the visual working memory task, but their performance was not affected by the background luminance, suggesting a dichotomy in strategies for visual working memory: individuals with strong mental imagery rely on sensory-based imagery to support mnemonic performance, while those with poor imagery rely on different strategies. These findings could help reconcile current controversy regarding the mechanism and location of visual mnemonic storage.

  2. Assignment about providing of substitute haptic interface for visually disabled persons

    OpenAIRE

    浅川, 貴史

    2013-01-01

    [Abstract] This paper describes issues concerning a haptic interface. We propose a music baton system for visually disabled persons. The system consists of an acceleration sensor, a radio module, and a haptic interface device. We carried out an experiment comparing the visual and the haptic interfaces. The results highlight issues with the rise time of the motor and with pre-motion. In the paper, we propose a new method of voltage cont...

  3. Visual Communications for Heterogeneous Networks/Visually Optimized Scalable Image Compression. Final Report for September 1, 1995 - February 28, 2002

    Energy Technology Data Exchange (ETDEWEB)

    Hemami, S. S.

    2003-06-03

    The authors developed image and video compression algorithms that provide scalability, reconstructibility, and network adaptivity, and developed compression and quantization strategies that are visually optimal at all bit rates. The goal of this research is to enable reliable "universal access" to visual communications over the National Information Infrastructure (NII). All users, regardless of their individual network connection bandwidths, qualities-of-service, or terminal capabilities, should have the ability to access still images, video clips, and multimedia information services, and to use interactive visual communications services. To do so requires special capabilities for image and video compression algorithms: scalability, reconstructibility, and network adaptivity. Scalability allows an information service to provide visual information at many rates, without requiring additional compression or storage after the stream has been compressed the first time. Reconstructibility allows reliable visual communications over an imperfect network. Network adaptivity permits real-time modification of compression parameters to adjust to changing network conditions. Furthermore, to optimize the efficiency of the compression algorithms, they should be visually optimal, where each bit expended reduces the visual distortion. Visual optimality is achieved through first extensive experimentation to quantify human sensitivity to supra-threshold compression artifacts and then incorporation of these experimental results into quantization strategies and compression algorithms.

  4. Promoting Visualization Skills through Deconstruction Using Physical Models and a Visualization Activity Intervention

    Science.gov (United States)

    Schiltz, Holly Kristine

    Visualization skills are important in learning chemistry, as these skills have been shown to correlate to high ability in problem solving. Students' understanding of visual information and their problem-solving processes may only ever be accessed indirectly: verbalization, gestures, drawings, etc. In this research, deconstruction of complex visual concepts was aligned with the promotion of students' verbalization of visualized ideas to teach students to solve complex visual tasks independently. All instructional tools and teaching methods were developed in accordance with the principles of the theoretical framework, the Modeling Theory of Learning: deconstruction of visual representations into model components, comparisons to reality, and recognition of students' problem-solving strategies. Three physical model systems were designed to provide students with visual and tangible representations of chemical concepts. The Permanent Reflection Plane Demonstration provided visual indicators that students used to support or invalidate the presence of a reflection plane. The 3-D Coordinate Axis system provided an environment that allowed students to visualize and physically enact symmetry operations in a relevant molecular context. The Proper Rotation Axis system was designed to provide a physical and visual frame of reference to showcase multiple symmetry elements that students must identify in a molecular model. Focus groups of students taking Inorganic chemistry working with the physical model systems demonstrated difficulty documenting and verbalizing processes and descriptions of visual concepts. Frequently asked student questions were classified, but students also interacted with visual information through gestures and model manipulations. In an effort to characterize how much students used visualization during lecture or recitation, we developed observation rubrics to gather information about students' visualization artifacts and examined the effect instructors

  5. Visualization rhetoric: framing effects in narrative visualization.

    Science.gov (United States)

    Hullman, Jessica; Diakopoulos, Nicholas

    2011-12-01

    Narrative visualizations combine conventions of communicative and exploratory information visualization to convey an intended story. We demonstrate visualization rhetoric as an analytical framework for understanding how design techniques that prioritize particular interpretations in visualizations that "tell a story" can significantly affect end-user interpretation. We draw a parallel between narrative visualization interpretation and evidence from framing studies in political messaging, decision-making, and literary studies. Devices for understanding the rhetorical nature of narrative information visualizations are presented, informed by the rigorous application of concepts from critical theory, semiotics, journalism, and political theory. We draw attention to how design tactics represent additions or omissions of information at various levels-the data, visual representation, textual annotations, and interactivity-and how visualizations denote and connote phenomena with reference to unstated viewing conventions and codes. Classes of rhetorical techniques identified via a systematic analysis of recent narrative visualizations are presented, and characterized according to their rhetorical contribution to the visualization. We describe how designers and researchers can benefit from the potentially positive aspects of visualization rhetoric in designing engaging, layered narrative visualizations and how our framework can shed light on how a visualization design prioritizes specific interpretations. We identify areas where future inquiry into visualization rhetoric can improve understanding of visualization interpretation. © 2011 IEEE

  6. CMS tracker visualization tools

    Energy Technology Data Exchange (ETDEWEB)

    Mennea, M.S. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Osborne, I. [Northeastern University, 360 Huntington Avenue, Boston, MA 02115 (United States); Regano, A. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy); Zito, G. [Dipartimento Interateneo di Fisica ' Michelangelo Merlin' e INFN sezione di Bari, Via Amendola 173 - 70126 Bari (Italy)]. E-mail: giuseppe.zito@ba.infn.it

    2005-08-21

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  7. CMS tracker visualization tools

    CERN Document Server

    Zito, G; Osborne, I; Regano, A

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  8. CMS tracker visualization tools

    International Nuclear Information System (INIS)

    Mennea, M.S.; Osborne, I.; Regano, A.; Zito, G.

    2005-01-01

    This document will review the design considerations, implementations and performance of the CMS Tracker Visualization tools. In view of the great complexity of this sub-detector (more than 50 million channels organized in 16540 modules, each of these being a complete detector), the standard CMS visualization tools (IGUANA and IGUANACMS), which provide basic 3D capabilities and integration within the CMS framework, respectively, have been complemented with additional 2D graphics objects. Based on the experience acquired using this software to debug and understand both hardware and software during the construction phase, we propose possible future improvements to cope with online monitoring and event analysis during data taking.

  9. The development of organized visual search

    Science.gov (United States)

    Woods, Adam J.; Goksun, Tilbe; Chatterjee, Anjan; Zelonis, Sarah; Mehta, Anika; Smith, Sabrina E.

    2013-01-01

    Visual search plays an important role in guiding behavior. Children have more difficulty performing conjunction search tasks than adults. The present research evaluates whether developmental differences in children's ability to organize serial visual search (i.e., search organization skills) contribute to performance limitations in a typical conjunction search task. We evaluated 134 children between the ages of 2 and 17 on separate tasks measuring search for targets defined by a conjunction of features or by distinct features. Our results demonstrated that children organize their visual search better as they get older. As children's skills at organizing visual search improve, they become more accurate at locating targets defined by a conjunction of features amongst distractors, but not targets defined by distinct features. Developmental limitations in children's abilities to organize their visual search of the environment are an important component of poor conjunction search in young children. In addition, our findings provide preliminary evidence that, as with other visuospatial tasks, exposure to reading may influence children's spatial orientation to the visual environment when performing a visual search. PMID:23584560

  10. Understanding Visual Novel As Artwork of Visual Communication Design

    Directory of Open Access Journals (Sweden)

    Dendi Pratama

    2017-10-01

    The Visual Novel is a kind of audiovisual game that offers visual strength through narrative and visual characters. The developer community of Visual Novel (VN Project Indonesia) indicates that only a limited number of local game developers produce Indonesian Visual Novels. In addition, Indonesian Visual Novel production is strongly influenced by the style of anime and manga from Japan, even though the Visual Novel is part of a potentially valuable set of creative-industry products. The study formulates the problem of how to understand the Visual Novel as an artwork of visual communication design, especially among students. This research is a case study conducted on visual communication design students at the University Indraprasta PGRI Jakarta. The results showed low levels of knowledge, understanding, and experience of the Visual Novel game, all below 50%. Qualitative and quantitative methods combined with a structural semiotic approach are used to describe the design elements and sign structures of the Visual Novel. This research can serve as a scientific reference to further introduce and encourage an understanding of the Visual Novel as an artwork of visual communication design. In addition, the results may add to public knowledge and encourage the development of Visual Novel artworks that reflect the culture of Indonesia.

  11. Teach yourself visually PowerPoint 2013

    CERN Document Server

    Wood, William

    2013-01-01

    A straightforward, visual approach to learning the new PowerPoint 2013! PowerPoint 2013 boasts updated features and new possibilities; this highly visual tutorial provides step-by-step instructions to help you learn all the capabilities of PowerPoint 2013. It covers the basics, as well as all the exciting new changes and additions, in a series of easy-to-follow, full-color, two-page tutorials. Learn how to create slides, dress them up using templates and graphics, add sound and animation, and more. This book is the ideal "show me, don't tell me" guide to PowerPoint 2013.

  12. Contralateral delay activity provides a neural measure of the number of representations in visual working memory.

    Science.gov (United States)

    Ikkai, Akiko; McCollough, Andrew W; Vogel, Edward K

    2010-04-01

    Visual working memory (VWM) helps to temporarily represent information from the visual environment and is severely limited in capacity. Recent work has linked various forms of neural activity to the ongoing representations in VWM. One piece of evidence comes from human event-related potential studies, which find a sustained contralateral negativity during the retention period of VWM tasks. This contralateral delay activity (CDA) has previously been shown to increase in amplitude as the number of memory items increases, up to the individual's working memory capacity limit. However, significant alternative hypotheses remain regarding the true nature of this activity. Here we test whether the CDA is modulated by the perceptual requirements of the memory items as well as whether it is determined by the number of locations that are being attended within the display. Our results provide evidence against these two alternative accounts and instead strongly support the interpretation that this activity reflects the current number of objects that are being represented in VWM.
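    The CDA described above is conventionally quantified as the mean contralateral-minus-ipsilateral voltage over posterior electrodes during the retention interval. The following sketch illustrates only that arithmetic on synthetic arrays; the sampling rate, retention window, and signal shapes are assumptions for illustration, not values from the study.

```python
# Minimal sketch: computing a CDA-style difference wave from synthetic ERP data.
# All numbers (sampling rate, retention window, array contents) are assumptions.
import numpy as np

fs = 250                       # samples per second (assumed)
t = np.arange(0, 1.0, 1 / fs)  # 1 s epoch starting at memory-array onset

rng = np.random.default_rng(0)
contra = -1.5e-6 * (t > 0.3) + rng.normal(0, 0.5e-6, t.size)  # sustained negativity
ipsi = 0.0 * t + rng.normal(0, 0.5e-6, t.size)

cda_wave = contra - ipsi                 # contralateral minus ipsilateral
window = (t >= 0.4) & (t <= 0.9)         # assumed retention interval
cda_amplitude = cda_wave[window].mean()  # one summary value per set size

print(f"mean CDA amplitude in window: {cda_amplitude * 1e6:.2f} microvolts")
```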

  13. Auditory recognition memory is inferior to visual recognition memory.

    Science.gov (United States)

    Cohen, Michael A; Horowitz, Todd S; Wolfe, Jeremy M

    2009-04-07

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, however, auditory memory proved to be systematically inferior to visual memory. This suggests that there exists either a fundamental difference between auditory and visual stimuli, or, more plausibly, an asymmetry between auditory and visual processing.
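    Old/new recognition performance of the kind tested above is commonly summarized with the signal-detection sensitivity measure d'. The snippet below is purely illustrative (the hit and false-alarm counts are invented, not data from the study) and shows the standard calculation with a log-linear correction.

```python
# Illustrative only: computing d' (sensitivity) for an old/new recognition test.
# The counts below are hypothetical, not data from the study.
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity, with a log-linear correction so that
    rates of exactly 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate)

# Hypothetical pattern: visual recognition better than auditory recognition.
print("visual   d':", round(d_prime(46, 4, 5, 45), 2))
print("auditory d':", round(d_prime(35, 15, 12, 38), 2))
```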

  14. Audio-visual speech timing sensitivity is enhanced in cluttered conditions.

    Directory of Open Access Journals (Sweden)

    Warrick Roseboom

    2011-04-01

    Events encoded in separate sensory modalities, such as audition and vision, can seem to be synchronous across a relatively broad range of physical timing differences. This may suggest that the precision of audio-visual timing judgments is inherently poor. Here we show that this is not necessarily true. We contrast timing sensitivity for isolated streams of audio and visual speech, and for streams of audio and visual speech accompanied by additional, temporally offset, visual speech streams. We find that the precision with which synchronous streams of audio and visual speech are identified is enhanced by the presence of additional streams of asynchronous visual speech. Our data suggest that timing perception is shaped by selective grouping processes, which can result in enhanced precision in temporally cluttered environments. The imprecision suggested by previous studies might therefore be a consequence of examining isolated pairs of audio and visual events. We argue that when an isolated pair of cross-modal events is presented, they tend to group perceptually and to seem synchronous as a consequence. We have revealed greater precision by providing multiple visual signals, possibly allowing a single auditory speech stream to group selectively with the most synchronous visual candidate. The grouping processes we have identified might be important in daily life, such as when we attempt to follow a conversation in a crowded room.

  15. Data on the effect of conductive hearing loss on auditory and visual cortex activity revealed by intrinsic signal imaging.

    Science.gov (United States)

    Teichert, Manuel; Bolz, Jürgen

    2017-10-01

    This data article provides additional data related to the research article entitled "Simultaneous intrinsic signal imaging of auditory and visual cortex reveals profound effects of acute hearing loss on visual processing" (Teichert and Bolz, 2017) [1]. The primary auditory and visual cortex (A1 and V1) of adult male C57BL/6J mice (P120-P240) were mapped simultaneously using intrinsic signal imaging (Kalatsky and Stryker, 2003) [2]. A1 and V1 activity evoked by combined auditory and visual stimulation was measured before and after conductive hearing loss (CHL) induced by bilateral malleus removal. We provide data showing that A1 responsiveness evoked by sounds of different sound pressure levels (SPL) decreased after CHL, whereas visually evoked V1 activity increased after this intervention. In addition, we provide imaging data on the percentage increase in V1 activity after CHL relative to pre-CHL levels.

  16. Is one enough? The case for non-additive influences of visual features on crossmodal Stroop interference

    Directory of Open Access Journals (Sweden)

    Lawrence Gregory Appelbaum

    2013-10-01

    When different perceptual signals arising from the same physical entity are integrated, they form a more reliable sensory estimate. When such repetitive sensory signals are pitted against other competing stimuli, such as in a Stroop task, this redundancy may lead to stronger processing that biases behavior towards reporting the redundant stimuli. This bias would therefore be expected to evoke greater incongruency effects than if these stimuli did not contain redundant sensory features. In the present paper we report that this is not the case for a set of three crossmodal, auditory-visual Stroop tasks. In these tasks participants attended to, and reported, either the visual or the auditory stimulus (in separate blocks) while ignoring the other, unattended modality. The visual component of these stimuli could be purely semantic (words), purely perceptual (colors), or the combination of both. Based on previous work showing enhanced crossmodal integration and visual search gains for redundantly coded stimuli, we had expected that, relative to single features, redundant visual features would both induce greater visual-distracter incongruency effects for attended auditory targets and be less influenced by auditory distracters for attended visual targets. Overall, reaction times were faster for visual targets and were dominated by behavioral facilitation for the cross-modal interactions (relative to interference), but showed surprisingly little influence of visual feature redundancy. Post hoc analyses revealed modest and trending evidence for possible increases in behavioral interference for redundant visual distracters on auditory targets; however, these effects were substantially smaller than anticipated and were not accompanied by a redundancy effect for behavioral facilitation or for attended visual targets.

  17. [Nursing Experience of Using Mirror Visual Feedback for a Schizophrenia Patient With Visual Hallucinations].

    Science.gov (United States)

    Lan, Shu-Ling; Chen, Yu-Chi; Chang, Hsiu-Ju

    2018-06-01

    The aim of this paper was to describe the nursing application of mirror visual feedback in a patient suffering from long-term visual hallucinations. The intervention period was from May 15th to October 19th, 2015. Using the five facets of psychiatric nursing assessment, several health problems were observed, including disturbed sensory perceptions (prominent visual hallucinations) and poor self-care (e.g. limited abilities to self-bathe and put on clothing). Furthermore, "caregiver role strain" due to the related intense care burden was noted. After building up a therapeutic interpersonal relationship, techniques based on brain plasticity and mirror visual feedback were applied using multiple nursing care methods in order to help the patient suppress her visual hallucinations by enhancing a different visual stimulus. We also taught her how to cope with visual hallucinations in a proper manner. The frequency and content of visual hallucinations were recorded to evaluate the effects of management. The therapeutic plan was formulated together with the patient in order to boost her self-confidence, and a behavior contract was implemented in order to improve her personal hygiene. In addition, psychoeducation on disease-related topics was provided to the patient's family, and they were encouraged to attend relevant therapeutic activities. As a result, her family became less passive and negative and more engaged in and positive about her future. The crisis of "caregiver role strain" was successfully resolved. The current experience is hoped to serve as a model for enhancing communication and cooperation between family and staff in similar medical settings.

  18. Generalized Framework and Algorithms for Illustrative Visualization of Time-Varying Data on Unstructured Meshes

    Energy Technology Data Exchange (ETDEWEB)

    Alexander S. Rattner; Donna Post Guillen; Alark Joshi

    2012-12-01

    Photo- and physically-realistic techniques are often insufficient for visualization of simulation results, especially for 3D and time-varying datasets. Substantial research efforts have been dedicated to the development of non-photorealistic and illustration-inspired visualization techniques for compact and intuitive presentation of such complex datasets. While these efforts have yielded valuable visualization results, a great deal of work has been duplicated across studies, as individual research groups often develop purpose-built platforms. Additionally, interoperability between illustrative visualization software is limited due to the specialized processing and rendering architectures employed in different studies. In this investigation, a generalized framework for illustrative visualization is proposed and implemented in marmotViz, a ParaView plugin, enabling its use on a variety of computing platforms with various data file formats and mesh geometries. Detailed descriptions of the region-of-interest identification and feature-tracking algorithms incorporated into this tool are provided. Additionally, implementations of multiple illustrative effect algorithms are presented to demonstrate the use and flexibility of this framework. By providing a framework and useful underlying functionality, the marmotViz tool can act as a springboard for future research in the field of illustrative visualization.
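    The region-of-interest identification and feature-tracking steps mentioned above can be illustrated in a much simplified form: threshold a scalar field, label connected components, and match features across timesteps by nearest centroid. The sketch below is a generic toy version of those two ideas, not the marmotViz algorithms.

```python
# Simplified illustration of region-of-interest extraction and feature tracking:
# threshold a scalar field, label connected regions, then match regions across
# two timesteps by nearest centroid. Not the marmotViz implementation.
import numpy as np
from scipy import ndimage

def extract_features(field, threshold):
    """Return a list of (label, centroid) for connected regions above threshold."""
    labeled, n = ndimage.label(field > threshold)
    centroids = ndimage.center_of_mass(field, labeled, range(1, n + 1))
    return list(zip(range(1, n + 1), centroids))

def track(features_t0, features_t1):
    """Greedy nearest-centroid matching between consecutive timesteps."""
    matches = []
    for lab0, c0 in features_t0:
        if not features_t1:
            break
        lab1, _c1 = min(features_t1, key=lambda f: np.linalg.norm(np.subtract(f[1], c0)))
        matches.append((lab0, lab1))
    return matches

rng = np.random.default_rng(1)
t0 = ndimage.gaussian_filter(rng.random((64, 64)), 4)
t1 = np.roll(t0, shift=3, axis=1)            # same features, slightly advected
f0 = extract_features(t0, t0.mean() + t0.std())
f1 = extract_features(t1, t1.mean() + t1.std())
print("matched features:", track(f0, f1))
```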

  19. Comparing visualization techniques for learning second language prosody

    DEFF Research Database (Denmark)

    Niebuhr, Oliver; Alm, Maria Helena; Schümchen, Nathalie

    2017-01-01

    We tested the usability of prosody visualization techniques for second language (L2) learners. Eighteen Danish learners realized target sentences in German based on different visualization techniques. The sentence realizations were annotated by means of the phonological Kiel Intonation Model and then analyzed in terms of (a) prosodic-pattern consistency and (b) correctness of the prosodic patterns. In addition, the participants rated the usability of the visualization techniques. The results from the phonological analysis converged with the usability ratings in showing that iconic techniques, in particular the stylized "hat pattern" visualization, performed better than symbolic techniques, and that marking prosodic information beyond intonation can be more confusing than instructive. In discussing our findings, we also provide a description of the new Danish-German learner corpus we created: DANGER.

  20. From Visual Exploration to Storytelling and Back Again.

    Science.gov (United States)

    Gratzl, S; Lex, A; Gehlenborg, N; Cosgrove, N; Streit, M

    2016-06-01

    The primary goal of visual data exploration tools is to enable the discovery of new insights. To justify and reproduce insights, the discovery process needs to be documented and communicated. A common approach to documenting and presenting findings is to capture visualizations as images or videos. Images, however, are insufficient for telling the story of a visual discovery, as they lack full provenance information and context. Videos are difficult to produce and edit, particularly due to the non-linear nature of the exploratory process. Most importantly, however, neither approach provides the opportunity to return to any point in the exploration in order to review the state of the visualization in detail or to conduct additional analyses. In this paper we present CLUE (Capture, Label, Understand, Explain), a model that tightly integrates data exploration and presentation of discoveries. Based on provenance data captured during the exploration process, users can extract key steps, add annotations, and author "Vistories", visual stories based on the history of the exploration. These Vistories can be shared for others to view, but also to retrace and extend the original analysis. We discuss how the CLUE approach can be integrated into visualization tools and provide a prototype implementation. Finally, we demonstrate the general applicability of the model in two usage scenarios: a Gapminder-inspired visualization to explore public health data and an example from molecular biology that illustrates how Vistories could be used in scientific journals.
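    An approach like CLUE rests on recording each exploration state so that any point can later be revisited, annotated, and assembled into a story. The data structure below is a minimal, hypothetical sketch of that idea in Python; it is not the CLUE implementation, and all names are made up.

```python
# Hypothetical sketch of provenance capture in the spirit of CLUE:
# every exploration action stores a state node that can later be
# annotated and assembled into a Vistory-like sequence.
from dataclasses import dataclass, field
from typing import Any, Dict, List, Optional

@dataclass
class StateNode:
    state_id: int
    parent: Optional[int]
    action: str                      # e.g. "filter", "zoom", "select"
    params: Dict[str, Any]
    annotation: str = ""

@dataclass
class ProvenanceGraph:
    nodes: List[StateNode] = field(default_factory=list)

    def capture(self, parent: Optional[int], action: str, **params) -> int:
        node = StateNode(len(self.nodes), parent, action, params)
        self.nodes.append(node)
        return node.state_id

    def annotate(self, state_id: int, text: str) -> None:
        self.nodes[state_id].annotation = text

    def story(self, key_states: List[int]) -> List[StateNode]:
        """Extract selected states, in order, as a shareable narrative."""
        return [self.nodes[i] for i in key_states]

graph = ProvenanceGraph()
root = graph.capture(None, "load", dataset="public_health.csv")
s1 = graph.capture(root, "filter", year=2010)
graph.annotate(s1, "Life expectancy rises sharply after 2010.")
print([(n.action, n.annotation) for n in graph.story([root, s1])])
```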

  1. Differential effects of visual feedback on subjective visual vertical accuracy and precision.

    Directory of Open Access Journals (Sweden)

    Daniel Bjasch

    The brain constructs an internal estimate of the gravitational vertical by integrating multiple sensory signals. In darkness, systematic head-roll dependent errors in verticality estimates, as measured by the subjective visual vertical (SVV), occur. We hypothesized that visual feedback after each trial results in increased accuracy, as physiological adjustment errors (A-/E-effect) are likely based on central computational mechanisms, and investigated whether such improvements were related to adaptational shifts of perceived vertical or to a higher cognitive strategy. We asked 12 healthy human subjects to adjust a luminous arrow to vertical in various head-roll positions (0 to 120 deg right-ear down, in 15 deg steps). After each adjustment visual feedback was provided (lights on, display of the previous adjustment and of an earth-vertical cross). Control trials consisted of SVV adjustments without feedback. At head-roll angles with the largest A-effect (90, 105, and 120 deg), errors were reduced significantly, whereas precision was not significantly (p > 0.05) influenced. In seven subjects an additional session with two consecutive blocks (first with, then without visual feedback) was completed at 90, 105 and 120 deg head-roll. In these positions the error reduction produced by the preceding visual-feedback block remained significant over the consecutive 18-24 min post-feedback block, i.e., errors were still significantly (p < 0.002) different from the control trials. Eleven out of 12 subjects reported having consciously added a bias to their perceived vertical, based on visual feedback, in order to minimize errors. We conclude that improvements of SVV accuracy by visual feedback, which remained effective after removal of feedback for ≥18 min, resulted from a cognitive strategy rather than from adapting the internal estimate of the gravitational vertical. The mechanisms behind the SVV therefore remained stable, which is also supported by the fact that SVV precision - depending mostly on otolith input - was not affected by visual

  2. Learning Reverse Engineering and Simulation with Design Visualization

    Science.gov (United States)

    Hemsworth, Paul J.

    2018-01-01

    The Design Visualization (DV) group supports work at the Kennedy Space Center by utilizing metrology data with Computer-Aided Design (CAD) models and simulations to provide accurate visual representations that aid in decision-making. The capability to measure and simulate objects in real time helps to predict and avoid potential problems before they become expensive in addition to facilitating the planning of operations. I had the opportunity to work on existing and new models and simulations in support of DV and NASA’s Exploration Ground Systems (EGS).

  3. Modeling, analysis, and visualization of anisotropy

    CERN Document Server

    Özarslan, Evren; Hotz, Ingrid

    2017-01-01

    This book focuses on the modeling, processing and visualization of anisotropy, irrespective of the context in which it emerges, using state-of-the-art mathematical tools. As such, it differs substantially from conventional reference works, which are centered on a particular application. It covers the following topics: (i) the geometric structure of tensors, (ii) statistical methods for tensor field processing, (iii) challenges in mapping neural connectivity and structural mechanics, (iv) processing of uncertainty, and (v) visualizing higher-order representations. In addition to original research contributions, it provides insightful reviews. This multidisciplinary book is the sixth in a series that aims to foster scientific exchange between communities employing tensors and other higher-order representations of directionally dependent data. A significant number of the chapters were co-authored by the participants of the workshop titled Multidisciplinary Approaches to Multivalued Data: Modeling, Visualization,...

  4. Magnetic stimulation of the dorsolateral prefrontal cortex dissociates fragile visual short-term memory from visual working memory.

    Science.gov (United States)

    Sligte, Ilja G; Wokke, Martijn E; Tesselaar, Johannes P; Scholte, H Steven; Lamme, Victor A F

    2011-05-01

    To guide our behavior in successful ways, we often need to rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). While VSTM is usually broken down into iconic memory (brief and high-capacity store) and visual working memory (sustained, yet limited-capacity store), recent studies have suggested the existence of an additional and intermediate form of VSTM that depends on activity in extrastriate cortex. In previous work, we have shown that this fragile form of VSTM can be dissociated from iconic memory. In the present study, we provide evidence that fragile VSTM is different from visual working memory as magnetic stimulation of the right dorsolateral prefrontal cortex (DLPFC) disrupts visual working memory, while leaving fragile VSTM intact. In addition, we observed that people with high DLPFC activity had superior working memory capacity compared to people with low DLPFC activity, and only people with high DLPFC activity really showed a reduction in working memory capacity in response to magnetic stimulation. Altogether, this study shows that VSTM consists of three stages that have clearly different characteristics and rely on different neural structures. On the methodological side, we show that it is possible to predict individual susceptibility to magnetic stimulation based on functional MRI activity. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.

  5. Audio-Visual and Meaningful Semantic Context Enhancements in Older and Younger Adults.

    Directory of Open Access Journals (Sweden)

    Kirsten E Smayda

    Speech perception is critical to everyday life. Oftentimes noise can degrade a speech signal; however, because of the cues available to the listener, such as visual and semantic cues, noise rarely prevents conversations from continuing. The interaction of visual and semantic cues in aiding speech perception has been studied in young adults, but the extent to which these two cues interact for older adults has not been studied. To investigate the effect of visual and semantic cues on speech perception in older and younger adults, we recruited forty-five young adults (ages 18-35) and thirty-three older adults (ages 60-90) to participate in a speech perception task. Participants were presented with semantically meaningful and anomalous sentences in audio-only and audio-visual conditions. We hypothesized that young adults would outperform older adults across SNRs, modalities, and semantic contexts. In addition, we hypothesized that both young and older adults would receive a greater benefit from a semantically meaningful context in the audio-visual relative to the audio-only modality. We predicted that young adults would receive greater visual benefit in semantically meaningful contexts relative to anomalous contexts. However, we predicted that older adults could receive a greater visual benefit in either semantically meaningful or anomalous contexts. Results suggested that in the most supportive context, that is, semantically meaningful sentences presented in the audio-visual modality, older adults performed similarly to young adults. In addition, both groups received the same amount of visual and meaningful benefit. Lastly, across groups, a semantically meaningful context provided more benefit in the audio-visual modality relative to the audio-only modality, and the presence of visual cues provided more benefit in semantically meaningful contexts relative to anomalous contexts. These results suggest that older adults can perceive speech as well as younger

  6. A survey on the knowledge and attitudes of anaesthesia providers in the United States of America, United Kingdom and Singapore on visual experiences during cataract surgery.

    Science.gov (United States)

    Tan, C S H; Kumar, C M; Fanning, G L; Lai, Y C; Au Eong, K G

    2006-04-01

    To assess the knowledge, beliefs and attitudes of anaesthesia providers on the patients' possible intraoperative visual experiences during cataract surgery under local anaesthesia. Anaesthesia providers from the Ophthalmic Anaesthesia Society (USA); British Ophthalmic Anaesthesia Society (UK); Alexandra Hospital, National University Hospital, Tan Tock Seng Hospital, Singapore General Hospital and Changi General Hospital (Singapore) were surveyed using a structured questionnaire. A total of 146 anaesthesiologists (81.6%), 10 ophthalmologists (5.6%) and 23 nurse anaesthetists (12.8%) responded to the survey. Most respondents believed that patients would experience light perception and many also felt that patients might encounter other visual sensations such as movements, flashes, colours, surgical instruments, hands/fingers and the surgeon during the surgery. A significantly higher proportion of anaesthesia providers with previous experience of monitoring patients under topical anaesthesia believed that patients might experience the various visual sensations compared to those who have not previously monitored. For both topical and regional anaesthesia, anaesthesia providers who routinely counsel their patients are (1) more likely to believe that preoperative counselling helps or (2) were previously told by patients that they could see intraoperatively and/or that they were frightened by their visual sensations. These findings were statistically significant. The majority of anaesthesia providers in the USA, UK and Singapore are aware that patients may experience a variety of visual sensations during cataract surgery under regional or topical anaesthesia. Those who have previously managed patients undergoing cataract surgery under topical anaesthesia are more likely to believe this compared to those who have not.

  7. Which visual functions depend on intermediate visual regions? Insights from a case of developmental visual form agnosia.

    Science.gov (United States)

    Gilaie-Dotan, Sharon

    2016-03-01

    A key question in visual neuroscience is the causal link between specific brain areas and perceptual functions; which regions are necessary for which visual functions? While the contribution of primary visual cortex and high-level visual regions to visual perception has been extensively investigated, the contribution of intermediate visual areas (e.g. V2/V3) to visual processes remains unclear. Here I review more than 20 visual functions (early, mid, and high-level) of LG, a developmental visual agnosic and prosopagnosic young adult, whose intermediate visual regions function in a significantly abnormal fashion as revealed through extensive fMRI and ERP investigations. While, expectedly, some of LG's visual functions are significantly impaired, some of his visual functions are surprisingly normal (e.g. stereopsis, color, reading, biological motion). During the eight-year testing period described here, LG trained on a perceptual learning paradigm that was successful in improving some but not all of his visual functions. Following LG's visual performance and taking into account additional findings in the field, I propose a framework for how different visual areas contribute to different visual functions, with an emphasis on intermediate visual regions. Thus, although rewiring and plasticity in the brain can occur during development to overcome and compensate for hindering developmental factors, LG's case seems to indicate that some visual functions are much less dependent on strict hierarchical flow than others, and can develop normally in spite of abnormal mid-level visual areas, thereby probably being less dependent on intermediate visual regions. Copyright © 2015 Elsevier Ltd. All rights reserved.

  8. Geoscience data visualization and analysis using GeoMapApp

    Science.gov (United States)

    Ferrini, Vicki; Carbotte, Suzanne; Ryan, William; Chan, Samantha

    2013-04-01

    Increased availability of geoscience data resources has resulted in new opportunities for developing visualization and analysis tools that not only promote data integration and synthesis, but also facilitate quantitative cross-disciplinary access to data. Interdisciplinary investigations, in particular, frequently require visualizations and quantitative access to specialized data resources across disciplines, which has historically required specialist knowledge of data formats and software tools. GeoMapApp (www.geomapapp.org) is a free online data visualization and analysis tool that provides direct quantitative access to a wide variety of geoscience data for a broad international interdisciplinary user community. While GeoMapApp provides access to online data resources, it can also be packaged to work offline through the deployment of a small portable hard drive. This mode of operation can be particularly useful during field programs, providing functionality and direct access to data when a network connection is not possible. Hundreds of data sets from a variety of repositories are directly accessible in GeoMapApp, without the need for the user to understand the specifics of file formats or data reduction procedures. Available data include global and regional gridded data and images, as well as tabular and vector datasets. In addition to basic visualization and data discovery functionality, users are provided with simple tools for creating customized maps and visualizations and for quantitatively interrogating data. Specialized data portals with advanced functionality are also provided for power users to further analyze data resources and access underlying component datasets. Users may import and analyze their own geospatial datasets by loading local versions of geospatial data and can access content made available through Web Feature Services (WFS) and Web Map Services (WMS). Once data are loaded in GeoMapApp, a variety of options are provided to export data and/or 2D/3D
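    Access to WMS layers of the kind GeoMapApp can consume follows the standard OGC GetMap request. The sketch below shows such a request; only the request parameters follow the WMS 1.3.0 standard, while the endpoint URL and layer name are placeholders, not real GeoMapApp services.

```python
# Sketch of a standard OGC WMS 1.3.0 GetMap request of the kind a tool like
# GeoMapApp can consume. The endpoint and layer name are placeholders.
import requests

WMS_ENDPOINT = "https://example.org/wms"   # hypothetical service URL
params = {
    "service": "WMS",
    "version": "1.3.0",
    "request": "GetMap",
    "layers": "global_topography",         # hypothetical layer name
    "styles": "",
    "crs": "EPSG:4326",
    "bbox": "-60,-180,60,180",             # minLat,minLon,maxLat,maxLon for EPSG:4326
    "width": 1024,
    "height": 512,
    "format": "image/png",
}

response = requests.get(WMS_ENDPOINT, params=params, timeout=30)
if response.ok and response.headers.get("Content-Type", "").startswith("image/"):
    with open("topography.png", "wb") as fh:
        fh.write(response.content)
```

    Real services advertise their available layers and supported coordinate systems through the companion GetCapabilities request, which is the usual first step before issuing GetMap.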

  9. HI-VISUAL: A language supporting visual interaction in programming

    International Nuclear Information System (INIS)

    Monden, N.; Yoshino, Y.; Hirakawa, M.; Tanaka, M.; Ichikawa, T.

    1984-01-01

    This paper presents a language named HI-VISUAL which supports visual interaction in programming. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL are extensively discussed. HI-VISUAL also shows system extendability, providing the possibility of organizing a high-level application system as an integration of several existing subsystems, and will serve in developing systems in various fields of application, supporting simple and efficient interactions between programmer and computer. In this paper, the authors have presented a language named HI-VISUAL. Following a brief description of the language concept, the icon semantics and language primitives characterizing HI-VISUAL were extensively discussed.

  10. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    Science.gov (United States)

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake by estimating plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease, who were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals, for a total of 21 days (7 consecutive days during each of the 3 months), yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered for analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. The successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved. Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
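    Agreement between two raters scoring the same meals, of the kind reported above, is often summarized as raw percent agreement together with a chance-corrected statistic such as Cohen's kappa. The snippet below shows that calculation on invented intake categories; it is an illustration of the general idea, not the study's analysis (which reported Cronbach's α).

```python
# Illustrative calculation of inter-method agreement (percent agreement and
# Cohen's kappa) on made-up intake categories; not the study's data.
from collections import Counter

def percent_agreement(x, y):
    return sum(a == b for a, b in zip(x, y)) / len(x)

def cohens_kappa(x, y):
    n = len(x)
    po = percent_agreement(x, y)                    # observed agreement
    cx, cy = Counter(x), Counter(y)
    pe = sum(cx[c] * cy[c] for c in set(x) | set(y)) / (n * n)  # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical intake categories (fraction of the served portion eaten) for 12
# meals, rated by visual estimation (nurses) and actual measurement (dietitians).
visual = ["0%", "25%", "50%", "100%", "100%", "75%", "50%", "25%", "100%", "75%", "50%", "0%"]
actual = ["0%", "25%", "75%", "100%", "100%", "75%", "50%", "50%", "100%", "75%", "50%", "0%"]

print("percent agreement:", round(percent_agreement(visual, actual), 2))
print("Cohen's kappa:", round(cohens_kappa(visual, actual), 2))
```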

  11. The Decreasing Prevalence of Nonrefractive Visual Impairment in Older Europeans

    DEFF Research Database (Denmark)

    Delcourt, Cécile; Le Goff, Mélanie; von Hanno, Therese

    2018-01-01

    TOPIC: To estimate the prevalence of nonrefractive visual impairment and blindness in European persons 55 years of age and older. CLINICAL RELEVANCE: Few visual impairment and blindness prevalence estimates are available for the European population. In addition, many of the data collected in European population-based studies currently are unpublished and have not been included in previous estimates. METHODS: Fourteen European population-based studies participating in the European Eye Epidemiology Consortium (n = 70 723) were included. Each study provided nonrefractive visual impairment and blindness prevalence estimates stratified by age (10-year strata) and gender. Nonrefractive visual impairment and blindness were defined as best-corrected visual acuity worse than 20/60 and 20/400 in the better eye, respectively. Using random effects meta-analysis, prevalence rates were estimated according

  12. STRING 3: An Advanced Groundwater Flow Visualization Tool

    Science.gov (United States)

    Schröder, Simon; Michel, Isabel; Biedert, Tim; Gräfe, Marius; Seidel, Torsten; König, Christoph

    2016-04-01

    The visualization of 3D groundwater flow is a challenging task. Previous versions of our software STRING [1] solely focused on intuitive visualization of complex flow scenarios for non-professional audiences. STRING, developed by Fraunhofer ITWM (Kaiserslautern, Germany) and delta h Ingenieurgesellschaft mbH (Witten, Germany), provides the necessary means for visualization of both 2D and 3D data on planar and curved surfaces. In this contribution we discuss how to extend this approach to a full 3D tool and the challenges this entails, in continuation of Michel et al. [2]. This elevates STRING from a post-production to an exploration tool for experts. In STRING, moving pathlets provide an intuition of the velocity and direction of both steady-state and transient flows. The visualization concept is based on the Lagrangian view of the flow. To capture every detail of the flow, an advanced method for intelligent, time-dependent seeding is used, building on the Finite Pointset Method (FPM) developed by Fraunhofer ITWM. Lifting our visualization approach from 2D into 3D brings many new challenges. With the implementation of a seeding strategy for 3D, one of the major problems has already been solved (see Schröder et al. [3]). As pathlets only provide an overview of the velocity field, other means are required for the visualization of additional flow properties. We suggest the use of Direct Volume Rendering and isosurfaces for scalar features. In this regard, we developed an efficient approach for combining volume rendering through raytracing with regular OpenGL geometries. This is achieved through the use of Depth Peeling or A-Buffers for the rendering of transparent geometries. Animation of pathlets requires a strict boundary of the simulation domain; hence, STRING needs to extract the boundary, even from unstructured data, if it is not provided. In 3D we additionally need a good visualization of the boundary itself. For this, the silhouette based on the angle of
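    The moving pathlets described above are essentially Lagrangian particle advection: seed points are repeatedly displaced by the local velocity, and their recent positions are drawn as short trails. The minimal 2D sketch below illustrates that integration step; the velocity field, step size, and seed layout are arbitrary choices for illustration, not STRING's implementation.

```python
# Minimal 2D Lagrangian pathlet advection, illustrating the idea behind moving
# pathlets: seeds are advected through a velocity field and their last few
# positions form a short trail. Field, step size and seed layout are arbitrary.
import numpy as np

def velocity(p):
    """Analytic divergence-free test field (a simple vortex)."""
    x, y = p[..., 0], p[..., 1]
    return np.stack([-y, x], axis=-1)

def advect(seeds, dt=0.05, steps=40, trail=8):
    """Explicit Euler integration; returns the last `trail` positions per seed."""
    positions = [seeds.copy()]
    p = seeds.copy()
    for _ in range(steps):
        p = p + dt * velocity(p)
        positions.append(p.copy())
    return np.stack(positions[-trail:], axis=1)   # shape (n_seeds, trail, 2)

seeds = np.random.default_rng(2).uniform(-1, 1, size=(100, 2))
trails = advect(seeds)
print("pathlet trail array:", trails.shape)
```

    A renderer would then draw each trail with fading opacity toward its oldest position, which is what produces the visual impression of motion and speed.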

  13. WebVis: a hierarchical web homepage visualizer

    Science.gov (United States)

    Renteria, Jose C.; Lodha, Suresh K.

    2000-02-01

    WebVis, the Hierarchical Web Home Page Visualizer, is a tool for managing home web pages. The user can access this tool via the WWW and obtain a hierarchical visualization of their home web pages. WebVis is a real-time interactive tool that supports many different queries on the statistics of internal files, such as size, age, and type. In addition, statistics on embedded information such as VRML files, Java applets, images and sound files can be extracted and queried. Results of these queries are visualized using the color, shape and size of different nodes of the hierarchy. The visualization assists the user in a variety of tasks, such as quickly finding outdated information or locating large files. WebVis is one solution to the growing web space maintenance problem. The implementation of WebVis uses Perl and Java. Perl pattern matching and file handling routines are used to collect and process web space linkage information and web document information. Java uses the collected information to produce a visualization of the web space. Java also provides WebVis with real-time interactivity while running off the WWW. Some WebVis examples of home web page visualization are presented.
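    The statistics WebVis visualizes (file sizes, ages, and types across a hierarchical web space) can be gathered with a simple directory walk. The sketch below is a generic Python equivalent of that collection step over a local copy of a site; the original tool used Perl and live WWW access, so this is only an illustration of the idea.

```python
# Generic sketch of the collection step behind a tool like WebVis: walk a local
# copy of a web space and record per-file size, age and type per directory.
import os
import time
from collections import defaultdict

def collect_stats(root):
    stats = defaultdict(list)
    now = time.time()
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            info = os.stat(os.path.join(dirpath, name))
            stats[dirpath].append({
                "name": name,
                "size_bytes": info.st_size,
                "age_days": (now - info.st_mtime) / 86400.0,
                "type": os.path.splitext(name)[1].lstrip(".").lower() or "none",
            })
    return stats

for directory, files in collect_stats(".").items():
    oldest = max(files, key=lambda f: f["age_days"])
    print(f"{directory}: {len(files)} files, oldest is {oldest['name']} "
          f"({oldest['age_days']:.0f} days)")
```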

  14. Hierarchical Sets: Analyzing Pangenome Structure through Scalable Set Visualizations

    DEFF Research Database (Denmark)

    Pedersen, Thomas Lin

    2017-01-01

    of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https...
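    The underlying idea of clustering genomes by the similarity of their gene sets can be illustrated with a generic toy example. The sketch below (in Python rather than R, and not the hierarchicalSets algorithm itself) builds a Jaccard distance matrix over invented gene sets and clusters it hierarchically.

```python
# Generic illustration (not the hierarchicalSets algorithm, and in Python rather
# than R): cluster genomes hierarchically by the similarity of their gene sets.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

# Hypothetical toy pangenome: each genome is a set of gene-family identifiers.
genomes = {
    "genome_A": {"g1", "g2", "g3", "g4"},
    "genome_B": {"g1", "g2", "g3", "g5"},
    "genome_C": {"g1", "g2", "g6"},
    "genome_D": {"g1", "g6", "g7"},
}

names = list(genomes)
n = len(names)
dist = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        a, b = genomes[names[i]], genomes[names[j]]
        jaccard = len(a & b) / len(a | b)        # shared gene families
        dist[i, j] = dist[j, i] = 1.0 - jaccard

tree = linkage(squareform(dist), method="average")
print(dendrogram(tree, labels=names, no_plot=True)["ivl"])  # leaf order
```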

  15. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.; Albers, D.; Walker, R.; Jusufi, I.; Hansen, C. D.; Roberts, J. C.

    2011-01-01

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.

  16. Visual comparison for information visualization

    KAUST Repository

    Gleicher, M.

    2011-09-07

    Data analysis often involves the comparison of complex objects. With the ever increasing amounts and complexity of data, the demand for systems to help with these comparisons is also growing. Increasingly, information visualization tools support such comparisons explicitly, beyond simply allowing a viewer to examine each object individually. In this paper, we argue that the design of information visualizations of complex objects can, and should, be studied in general, that is independently of what those objects are. As a first step in developing this general understanding of comparison, we propose a general taxonomy of visual designs for comparison that groups designs into three basic categories, which can be combined. To clarify the taxonomy and validate its completeness, we provide a survey of work in information visualization related to comparison. Although we find a great diversity of systems and approaches, we see that all designs are assembled from the building blocks of juxtaposition, superposition and explicit encodings. This initial exploration shows the power of our model, and suggests future challenges in developing a general understanding of comparative visualization and facilitating the development of more comparative visualization tools. © The Author(s) 2011.
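    The three building blocks named in the taxonomy above, juxtaposition, superposition, and explicit encoding, can be illustrated with two small series. The matplotlib sketch below uses arbitrary data purely to show the three layouts.

```python
# Illustration of the three comparison building blocks from the taxonomy:
# juxtaposition (side by side), superposition (overlaid), and explicit
# encoding (plotting the difference directly). The data are arbitrary.
import matplotlib.pyplot as plt
import numpy as np

x = np.arange(10)
a = np.sin(x / 2.0)
b = np.sin(x / 2.0) + 0.3 * np.cos(x)

fig, axes = plt.subplots(1, 4, figsize=(12, 3))

axes[0].plot(x, a)                       # juxtaposition: each object in its own view,
axes[0].set_title("Juxtaposition: A")    # views placed side by side
axes[1].plot(x, b)
axes[1].set_title("Juxtaposition: B")

axes[2].plot(x, a, label="A")            # superposition: both objects share one view
axes[2].plot(x, b, label="B")
axes[2].legend()
axes[2].set_title("Superposition")

axes[3].bar(x, b - a)                    # explicit encoding: the difference itself
axes[3].set_title("Explicit encoding")

fig.tight_layout()
fig.savefig("comparison_designs.png")
```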

  17. Diagnosing cerebral visual impairment in children with good visual acuity.

    Science.gov (United States)

    van Genderen, Maria; Dekker, Marjoke; Pilon, Florine; Bals, Irmgard

    2012-06-01

    To identify elements that could facilitate the diagnosis of cerebral visual impairment (CVI) in children with good visual acuity in the general ophthalmic clinic. We retrospectively investigated the clinical characteristics of 30 children with good visual acuity and CVI and compared them with those of 23 children who were referred with a suspicion of CVI, but proved to have a different diagnosis. Clinical characteristics included medical history, MRI findings, visual acuity, crowding ratio (CR), visual field assessment, and the results of ophthalmologic and orthoptic examination. We also evaluated the additional value of a short CVI questionnaire. Eighty-three percent of the children with an abnormal medical history (mainly prematurity and perinatal hypoxia) had CVI, in contrast with none of the children with a normal medical history. Cerebral palsy, visual field defects, and partial optic atrophy only occurred in the CVI group. Forty-one percent of the children with CVI had a CR ≥2.0, which may be related to dorsal stream dysfunction. All children with CVI, but also 91% of the children without CVI, gave ≥3 affirmative answers on the CVI questionnaire. An abnormal pre- or perinatal medical history is the most important risk factor for CVI in children, and therefore in deciding which children should be referred for further multidisciplinary assessment. Additional symptoms of cerebral damage, i.e., cerebral palsy, visual field defects, partial optic atrophy, and a CR ≥2, may support the diagnosis. CVI questionnaires should not be used for screening purposes as they yield too many false positives.

  18. Out of mind, but not out of sight: intentional control of visual memory.

    Science.gov (United States)

    Yotsumoto, Yuko; Sekuler, Robert

    2006-06-01

    Does visual information enjoy automatic, obligatory entry into memory, or, after such information has been seen, can it still be actively excluded? To characterize the process by which visual information could be excluded from memory, we used Sternberg's (1966, 1975) recognition paradigm, measuring visual episodic memory for compound grating stimuli. Because recognition declines as additional study items enter memory, episodic recognition performance provides a sensitive index of memory's contents. Three experiments showed that an item occupying a fixed serial position in a series of study items could be intentionally excluded from memory. In addition, exclusion does not depend on low-level information, such as the stimulus's spatial location, orientation, or spatial frequency, and does not depend on the precise timing of irrelevant information, which suggests that the exclusion process is triggered by some event during a trial. The results, interpreted within the framework of a summed similarity model for visual recognition, suggest that exclusion operates after considerable visual processing of the to-be-excluded item.

  19. Relationship between Vision and Visual Perception in Hong Kong Preschoolers.

    Science.gov (United States)

    Ho, Wing-Cheung; Tang, Minny Mei-Miu; Fu, Ching-Wah; Leung, Ka-Yan; Pang, Peter Chi-Kong; Cheong, Allen Ming-Yan

    2015-05-01

    Although superior performance in visual motor and visual perceptual skills of preschool children has been documented in the Chinese population, a normative database is only available for the US population. This study aimed to determine normative values for visuomotor and visual perceptual tests for preschool children in the Hong Kong Chinese population and to investigate the effect of fundamental visual functions on visuomotor and visual perceptual skills. One hundred seventy-four children from six different kindergartens in Hong Kong were recruited. Distance visual acuity, near visual acuity, and stereopsis were tested, along with two measures of visual perception (VP): Visual-Motor Integration (VMI) and the Test of Visual-Perceptual Skills (TVPS). Raw VMI and TVPS scores were converted into standard/scaled scores. The impact of basic visual functions on VP (VMI and TVPS) was examined using multiple regression. Visual functions were generally good: only 9.2 and 4.6% of subjects had unilateral and bilateral reduced habitual vision, respectively (distance visual acuity in the better eye >0.3 logMAR [logarithm of the minimum angle of resolution]). Performance in the VMI and in the visual memory and spatial relationships subtests of the TVPS exceeded that reported for age-matched children from the United States. Multiple regression analysis provided evidence that age had the strongest predictive value for the VMI and VP skills. In addition, near visual acuity was weakly associated with performance in the VMI and the visual discrimination and spatial relationships subtests of the TVPS, accounting for a limited proportion of the intersubject variability. The superior performance in the VMI and in the visual memory and spatial relationships subtests of the TVPS may perhaps be attributed to greater exposure to such material during preschool home education. This study provided normative data for the VMI and four subtests of the TVPS for Hong Kong Chinese preschool children as a reference for future studies.

  20. The multiple sclerosis visual pathway cohort: understanding neurodegeneration in MS.

    Science.gov (United States)

    Martínez-Lapiscina, Elena H; Fraga-Pumar, Elena; Gabilondo, Iñigo; Martínez-Heras, Eloy; Torres-Torres, Ruben; Ortiz-Pérez, Santiago; Llufriu, Sara; Tercero, Ana; Andorra, Magi; Roca, Marc Figueras; Lampert, Erika; Zubizarreta, Irati; Saiz, Albert; Sanchez-Dalmau, Bernardo; Villoslada, Pablo

    2014-12-15

    Multiple Sclerosis (MS) is an immune-mediated disease of the Central Nervous System with two major underlying etiopathogenic processes: inflammation and neurodegeneration. The latter determines the prognosis of this disease. MS is the main cause of non-traumatic disability in middle-aged populations. The MS-VisualPath Cohort was set up to study the neurodegenerative component of MS using advanced imaging techniques by focusing on analysis of the visual pathway in a middle-aged MS population in Barcelona, Spain. We started the recruitment of patients in the early phase of MS in 2010 and it remains permanently open. All patients undergo a complete neurological and ophthalmological examination including measurements of physical and cognitive disability (Expanded Disability Status Scale, Multiple Sclerosis Functional Composite and neuropsychological tests), disease activity (relapses) and visual function testing (visual acuity, color vision and visual field). The MS-VisualPath protocol also assesses the presence of anxiety and depressive symptoms (Hospital Anxiety and Depression Scale), general quality of life (SF-36) and visual quality of life (25-Item National Eye Institute Visual Function Questionnaire with the 10-Item Neuro-Ophthalmic Supplement). In addition, the imaging protocol includes both retinal (Optical Coherence Tomography and Wide-Field Fundus Imaging) and brain imaging (Magnetic Resonance Imaging). Finally, multifocal Visual Evoked Potentials are used to perform neurophysiological assessment of the visual pathway. The analysis of the visual pathway with advanced imaging and electrophysiological tools, in parallel with clinical information, will provide significant new knowledge regarding neurodegeneration in MS and provide new clinical and imaging biomarkers to help monitor disease progression in these patients.

  1. Data visualization

    CERN Document Server

    Azzam, Tarek

    2013-01-01

    Do you communicate data and information to stakeholders? In Part 1, we introduce recent developments in the quantitative and qualitative data visualization field and provide a historical perspective on data visualization, its potential role in evaluation practice, and future directions. Part 2 delivers concrete suggestions for optimally using data visualization in evaluation, as well as suggestions for best practices in data visualization design. It focuses on specific quantitative and qualitative data visualization approaches that include data dashboards, graphic recording, and geographic information systems (GIS). Readers will get a step-by-step process for designing an effective data dashboard system for programs and organizations, and various suggestions to improve their utility.

  2. The visual communication of risk.

    Science.gov (United States)

    Lipkus, I M; Hollands, J G

    1999-01-01

    This paper 1) provides reasons why graphics should be effective aids to communicate risk; 2) reviews the use of visuals, especially graphical displays, to communicate risk; 3) discusses issues to consider when designing graphs to communicate risk; and 4) provides suggestions for future research. Key articles and materials were obtained from MEDLINE(R) and PsychInfo(R) databases, from reference article citations, and from discussion with experts in risk communication. Research has been devoted primarily to communicating risk magnitudes. Among the various graphical displays, the risk ladder appears to be a promising tool for communicating absolute and relative risks. Preliminary evidence suggests that people understand risk information presented in histograms and pie charts. Areas that need further attention include 1) applying theoretical models to the visual communication of risk, 2) testing which graphical displays can be applied best to different risk communication tasks (e.g., which graphs best convey absolute or relative risks), 3) communicating risk uncertainty, and 4) testing whether the lay public's perceptions and understanding of risk varies by graphical format and whether the addition of graphical displays improves comprehension substantially beyond numerical or narrative translations of risk and, if so, by how much. There is a need to ascertain the extent to which graphics and other visuals enhance the public's understanding of disease risk to facilitate decision-making and behavioral change processes. Nine suggestions are provided to help achieve these ends.

  3. The influence of attention, learning, and motivation on visual search.

    Science.gov (United States)

    Dodd, Michael D; Flowers, John H

    2012-01-01

    The 59th Annual Nebraska Symposium on Motivation (The Influence of Attention, Learning, and Motivation on Visual Search) took place April 7-8, 2011, on the University of Nebraska-Lincoln campus. The symposium brought together leading scholars who conduct research related to visual search at a variety of levels for a series of talks, poster presentations, panel discussions, and numerous additional opportunities for intellectual exchange. The Symposium was also streamed online for the first time in the history of the event, allowing individuals from around the world to view the presentations and submit questions. The present volume is intended both to commemorate the event itself and to allow our speakers additional opportunity to address issues and current research that have since arisen. Each of the speakers (and, in some cases, their graduate students and post docs) has provided a chapter which both summarizes and expands on their original presentations. In this chapter, we sought to a) provide additional context as to how the Symposium came to be, b) discuss why we thought that this was an ideal time to organize a visual search symposium, and c) briefly address recent trends and potential future directions in the field. We hope you find the volume both enjoyable and informative, and we thank the authors who have contributed a series of engaging chapters.

  4. Harnessing the web information ecosystem with wiki-based visualization dashboards.

    Science.gov (United States)

    McKeon, Matt

    2009-01-01

    We describe the design and deployment of Dashiki, a public website where users may collaboratively build visualization dashboards through a combination of a wiki-like syntax and interactive editors. Our goals are to extend existing research on social data analysis into presentation and organization of data from multiple sources, explore new metaphors for these activities, and participate more fully in the web's information ecology by providing tighter integration with real-time data. To support these goals, our design includes novel and low-barrier mechanisms for editing and layout of dashboard pages and visualizations, connection to data sources, and coordinating interaction between visualizations. In addition to describing these technologies, we provide a preliminary report on the public launch of a prototype based on this design, including a description of the activities of our users derived from observation and interviews.

  5. Information processing in the primate visual system - An integrated systems perspective

    Science.gov (United States)

    Van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  6. Information Processing in the Primate Visual System: An Integrated Systems Perspective

    Science.gov (United States)

    van Essen, David C.; Anderson, Charles H.; Felleman, Daniel J.

    1992-01-01

    The primate visual system contains dozens of distinct areas in the cerebral cortex and several major subcortical structures. These subdivisions are extensively interconnected in a distributed hierarchical network that contains several intertwined processing streams. A number of strategies are used for efficient information processing within this hierarchy. These include linear and nonlinear filtering, passage through information bottlenecks, and coordinated use of multiple types of information. In addition, dynamic regulation of information flow within and between visual areas may provide the computational flexibility needed for the visual system to perform a broad spectrum of tasks accurately and at high resolution.

  7. A Visual Analytics Approach for Station-Based Air Quality Data

    Directory of Open Access Journals (Sweden)

    Yi Du

    2016-12-01

    Full Text Available With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.
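
    The record above describes the system's coordinated views and its self-adaptive calendar-based controller without implementation detail. Purely as an illustration of the kind of calendar-based re-binning such a controller might perform, the following Python sketch (using pandas, with an assumed datetime-indexed pollutant series) picks the coarsest calendar granularity that keeps the trends view below a point budget; it is not the authors' code.

        import pandas as pd

        def adaptive_resample(series: pd.Series, max_points: int = 400) -> pd.Series:
            """Re-bin a station time series (DatetimeIndex assumed) so the trends
            view never draws more than ~max_points samples: keep the raw samples
            if they fit, otherwise fall back to daily, weekly, then monthly means."""
            if len(series) <= max_points:
                return series
            resampled = series
            for rule in ("D", "W", "MS"):          # daily, weekly, month-start bins
                resampled = series.resample(rule).mean()
                if len(resampled) <= max_points:
                    break
            return resampled

        # Hypothetical usage with an hourly PM2.5 series:
        # pm25 = pd.read_csv("station_001.csv", index_col="time", parse_dates=True)["pm25"]
        # trends_view_data = adaptive_resample(pm25)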

  8. A Visual Analytics Approach for Station-Based Air Quality Data.

    Science.gov (United States)

    Du, Yi; Ma, Cuixia; Wu, Chao; Xu, Xiaowei; Guo, Yike; Zhou, Yuanchun; Li, Jianhui

    2016-12-24

    With the deployment of multi-modality and large-scale sensor networks for monitoring air quality, we are now able to collect large and multi-dimensional spatio-temporal datasets. For these sensed data, we present a comprehensive visual analysis approach for air quality analysis. This approach integrates several visual methods, such as map-based views, calendar views, and trends views, to assist the analysis. Among those visual methods, map-based visual methods are used to display the locations of interest, and the calendar and the trends views are used to discover the linear and periodical patterns. The system also provides various interaction tools to combine the map-based visualization, trends view, calendar view and multi-dimensional view. In addition, we propose a self-adaptive calendar-based controller that can flexibly adapt the changes of data size and granularity in trends view. Such a visual analytics system would facilitate big-data analysis in real applications, especially for decision making support.

  9. ADDITIVE VALUE OF TRANSESOPHAGEAL ECHOCARDIOGRAPHY IN THE VISUALIZATION OF CARCINOID HEART-DISEASE

    NARCIS (Netherlands)

    VANVELDHUISEN, DJ; HAMER, JPM; ANDRIESSEN, MPHM; DEVRIES, EGE; LIE, KI

    A 65-yr-old woman with atypical complaints and a tricuspid insufficiency murmur underwent transthoracic echocardiography, which showed right-sided abnormalities, but did not allow clear visualization of the valves. Subsequent transoesophageal imaging, however, raised the suspicion of carcinoid heart disease.

  10. Visual cues and listening effort: individual variability.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2011-10-01

    To investigate the effect of visual cues on listening effort as well as whether predictive variables such as working memory capacity (WMC) and lipreading ability affect the magnitude of listening effort. Twenty participants with normal hearing were tested using a paired-associates recall task in 2 conditions (quiet and noise) and 2 presentation modalities (audio only [AO] and auditory-visual [AV]). Signal-to-noise ratios were adjusted to provide matched speech recognition across audio-only and AV noise conditions. Also measured were subjective perceptions of listening effort and 2 predictive variables: (a) lipreading ability and (b) WMC. Objective and subjective results indicated that listening effort increased in the presence of noise, but on average the addition of visual cues did not significantly affect the magnitude of listening effort. Although there was substantial individual variability, on average participants who were better lipreaders or had larger WMCs demonstrated reduced listening effort in noise in AV conditions. Overall, the results support the hypothesis that integrating auditory and visual cues requires cognitive resources in some participants. The data indicate that low lipreading ability or low WMC is associated with relatively effortful integration of auditory and visual information in noise.

  11. Using the Visualization Software Evaluation Rubric to explore six freely available visualization applications

    Directory of Open Access Journals (Sweden)

    Thea P. Atwood

    2018-01-01

    Full Text Available Objective: As a variety of visualization tools become available to librarians and researchers, it can be challenging to select a tool that is robust and flexible enough to provide the desired visualization outcomes for work or personal use. In this article, the authors provide guidance on several freely available tools, and offer a rubric for use in evaluating visualization tools. Methods: A rubric was generated to assist the authors in assessing the six selected freely available visualization tools. Each author analyzed three tools, and discussed the differences, similarities, challenges, and successes of each. Results: Of the six visualization tools, two tools emerged with high marks. The authors found that the rubric was a successful evaluation tool, and facilitated discussion on the strengths and weaknesses of the six selected pieces of visualization software. Conclusions: Of the six different visualization tools analyzed, all had different functions and features available to best meet the needs of users. In a situation where there are many options available, and it is difficult at first glance to determine a clear winner, a rubric can be useful in providing a method to quickly assess and communicate the effectiveness of a tool.

  12. Traffic Visualization

    DEFF Research Database (Denmark)

    Picozzi, Matteo; Verdezoto, Nervo; Pouke, Matti

    2013-01-01

    In this paper, we present a space-time visualization to provide a city's decision-makers with the ability to analyse and uncover important "city events" in an understandable manner for city planning activities. An interactive Web mashup visualization is presented that integrates several visualization techniques to give a rapid overview of traffic data. We illustrate our approach as a case study for traffic visualization systems, using datasets from the city of Oulu that can be extended to other city planning activities. We also report the feedback of real users (traffic management employees, traffic police...

  13. User-Centered Evaluation of Visual Analytics

    Energy Technology Data Exchange (ETDEWEB)

    Scholtz, Jean C.

    2017-10-01

    Visual analytics systems are becoming very popular. More domains now use interactive visualizations to analyze the ever-increasing amount and heterogeneity of data. More novel visualizations are being developed for more tasks and users. We need to ensure that these systems can be evaluated to determine that they are both useful and usable. A user-centered evaluation for visual analytics needs to be developed for these systems. While many of the typical human-computer interaction (HCI) evaluation methodologies can be applied as is, others will need modification. Additionally, new functionality in visual analytics systems needs new evaluation methodologies. There is a difference between usability evaluations and user-centered evaluations. Usability looks at the efficiency, effectiveness, and user satisfaction of users carrying out tasks with software applications. User-centered evaluation looks more specifically at the utility provided to the users by the software. This is reflected in the evaluations done and in the metrics used. In the visual analytics domain this is very challenging as users are most likely experts in a particular domain, the tasks they do are often not well defined, the software they use needs to support large amounts of different kinds of data, and often the tasks last for months. These difficulties are discussed more in the section on User-centered Evaluation. Our goal is to provide a discussion of user-centered evaluation practices for visual analytics, including existing practices that can be carried out and new methodologies and metrics that need to be developed and agreed upon by the visual analytics community. The material provided here should be of use for both researchers and practitioners in the field of visual analytics. Researchers and practitioners in HCI and interested in visual analytics will find this information useful as well as a discussion on changes that need to be made to current HCI practices to make them more suitable to

  14. Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system

    KAUST Repository

    Hollt, Thomas; Altaf, Muhammad; Mandli, Kyle T.; Hadwiger, Markus; Dawson, Clint N.; Hoteit, Ibrahim

    2015-01-01

    allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations

  15. Network Physics - the only company to provide physics-based network management - secures additional funding and new executives

    CERN Multimedia

    2003-01-01

    "Network Physics, the only provider of physics-based network management products, today announced an additional venture round of $6 million in funding, as well as the addition of David Jones as president and CEO and Tom Dunn as vice president of sales and business development" (1 page).

  16. Particle Track Visualization using the MCNP Visual Editor

    International Nuclear Information System (INIS)

    Schwarz, Randolph A.; Carter, Lee; Brown, Wendi A.

    2001-01-01

    The Monte Carlo N-Particle (MCNP) visual editor [1-3] is used throughout the world for displaying and creating complex MCNP geometries. The visual editor combines the Los Alamos MCNP Fortran code with a C front end to provide a visual interface. A big advantage of this approach is that the particle transport routines for MCNP are available to the visual front end. The latest release of the visual editor by Pacific Northwest National Laboratory enables the user to plot transport data points on top of a two-dimensional geometry plot. The user can plot source points, collision points, surface crossings, and tally contributions. This capability can be used to show where particle collisions are occurring, verify the effectiveness of the particle biasing, or show which collisions contribute to a tally. For a KCODE (criticality source) calculation, the visual editor can be used to plot the source points for specific cycles.

  17. A study for providing additional storage spaces to ET-RR-1 spent fuel

    International Nuclear Information System (INIS)

    El-Kady, A.; Ashoub, N.; Saleh, H.G.

    1995-01-01

    The ET-RR-1 reactor spent fuel storage pool is a trapezoidal aluminum tank with a concrete shield and a capacity of 10 m³. It can hold up to 60 fuel assemblies. The long operating history of the ET-RR-1 reactor has resulted in a partially filled spent fuel storage, with the remaining spaces not enough to host a complete load from the reactor. This work has been initiated to evaluate possible alternative solutions for providing additional storage spaces to host the available EK-10 fuel elements after irradiation and any foreseen fuel in case of reactor upgrading. Several alternative solutions have been reviewed and a decision on the most suitable one is under study. These studies include criticality calculations of some suggested alternatives, such as reracking the present spent fuel storage pool and double tiering by the addition of a second-level storage rack above the existing rack. The two levels may have different factors. Criticality calculations for a possible accident involving the double tiering were also performed. (author)

  18. Novel Scientific Visualization Interfaces for Interactive Information Visualization and Sharing

    Science.gov (United States)

    Demir, I.; Krajewski, W. F.

    2012-12-01

    As geoscientists are confronted with increasingly massive datasets from environmental observations to simulations, one of the biggest challenges is having the right tools to gain scientific insight from the data and communicate the understanding to stakeholders. Recent developments in web technologies make it easy to manage, visualize and share large data sets with the general public. Novel visualization techniques and dynamic user interfaces allow users to interact with data, and modify the parameters to create custom views of the data to gain insight from simulations and environmental observations. This requires developing new data models and intelligent knowledge discovery techniques to explore and extract information from complex computational simulations or large data repositories. Scientific visualization will be an increasingly important component to build comprehensive environmental information platforms. This presentation provides an overview of the trends and challenges in the field of scientific visualization, and demonstrates information visualization and communication tools in the Iowa Flood Information System (IFIS), developed in light of these challenges. The IFIS is a web-based platform developed by the Iowa Flood Center (IFC) to provide access to and visualization of flood inundation maps, real-time flood conditions, flood forecasts both short-term and seasonal, and other flood-related data for communities in Iowa. The key element of the system's architecture is the notion of community. Locations of the communities, those near streams and rivers, define basin boundaries. The IFIS provides community-centric watershed and river characteristics, weather (rainfall) conditions, and streamflow data and visualization tools. Interactive interfaces allow access to inundation maps for different stage and return period values, and flooding scenarios with contributions from multiple rivers. Real-time and historical data of water levels, gauge heights, and

  19. 34 CFR 645.13 - What additional services do Upward Bound Math and Science Centers provide and how are they...

    Science.gov (United States)

    2010-07-01

    34 CFR 645.13 (Title 34, Education, Vol. 3, revised as of 2010-07-01): What additional services do Upward Bound Math and Science Centers provide and how are they provided? In part: ... provided under § 645.11(b), an Upward Bound Math and Science Center must provide— (1) Intensive instruction...

  20. Why do pictures, but not visual words, reduce older adults' false memories?

    Science.gov (United States)

    Smith, Rebekah E; Hunt, R Reed; Dunlap, Kathryn R

    2015-09-01

    Prior work shows that false memories resulting from the study of associatively related lists are reduced for both young and older adults when the auditory presentation of study list words is accompanied by related pictures relative to when auditory word presentation is combined with visual presentation of the word. In contrast, young adults, but not older adults, show a reduction in false memories when presented with the visual word along with the auditory word relative to hearing the word only. In both cases of pictures relative to visual words and visual words relative to auditory words alone, the benefit of picture and visual words in reducing false memories has been explained in terms of monitoring for perceptual information. In our first experiment, we provide the first simultaneous comparison of all 3 study presentation modalities (auditory only, auditory plus visual word, and auditory plus picture). Young and older adults show a reduction in false memories in the auditory plus picture condition, but only young adults show a reduction in the visual word condition relative to the auditory only condition. A second experiment investigates whether older adults fail to show a reduction in false memory in the visual word condition because they do not encode perceptual information in the visual word condition. In addition, the second experiment provides evidence that the failure of older adults to show the benefits of visual word presentation is related to reduced cognitive resources. (PsycINFO Database Record (c) 2015 APA, all rights reserved).

  1. Using Typography to Expand the Design Space of Data Visualization

    Directory of Open Access Journals (Sweden)

    Richard Brath

    Full Text Available This article is a systematic exploration and expansion of the data visualization design space focusing on the role of text. A critical analysis of text usage in data visualizations reveals gaps in existing frameworks and practice. A cross-disciplinary review including the fields of typography, cartography, and coding interfaces yields various typographic techniques to encode data into text, and provides scope for an expanded design space. Mapping new attributes back to well understood principles frames the expanded design space and suggests potential areas of application. From ongoing research created with our framework, we show the design, implementation, and evaluation of six new visualization techniques. Finally, a broad evaluation of a number of visualizations, including critiques from several disciplinary experts, reveals opportunities as well as areas of concern, and points towards additional research with our framework.

  2. Learning semantic and visual similarity for endomicroscopy video retrieval.

    Science.gov (United States)

    Andre, Barbara; Vercauteren, Tom; Buchner, Anna M; Wallace, Michael B; Ayache, Nicholas

    2012-06-01

    In our resulting retrieval system, we decided to use visual signatures for perceived similarity learning and retrieval, and semantic signatures to output additional information, expressed in the endoscopist's own language, which provides a relevant semantic translation of the visual retrieval outputs.

  3. Scientific visualization as an expressive medium for project science inquiry

    Science.gov (United States)

    Gordin, Douglas Norman

    Scientists' external representations can help science education by providing powerful tools for students' inquiry. Scientific visualization is particularly well suited for this as it uses color patterns, rather than algebraic notation. Nonetheless, visualization must be adapted so it better fits with students' interests, goals, and abilities. I describe how visualization was adapted for students' expressive use and provide a case study where students successfully used visualization. The design process began with scientists' tools, data sets, and activities which were then adapted for students' use. I describe the design through scenarios where students create and analyze visualizations and present the software's functionality through visualization's sub-representations of data; color; scale, resolution, and projection; and examining the relationships between visualizations. I evaluate these designs through a "hot-house" study where a small group of students used visualization under near ideal circumstances for two weeks. Using videotapes of group interactions, software logs, and students' work I examine their representational and inquiry strategies. These inquiries were successful in that the group pursued their interest in world hunger by creating a visualization of daily per capita calorie consumption. Through creating the visualization the students engage in a process of meaning making where they interweave their prior experiences and beliefs with the representations they are using. This interweaving and other processes of collaborative visualization are shown when the students (a) computed values, (b) created a new color scheme, (c) cooperated to create the visualization, and (d) presented their work to other students. I also discuss problems that arose when students (a) used units without considering their meaning, (b) chose inappropriate comparisons in case-based reasoning, (c) did not participate equally during group work, (d) were confused about additive

  4. Visual Memories Bypass Normalization.

    Science.gov (United States)

    Bloem, Ilona M; Watanabe, Yurika L; Kibbe, Melissa M; Ling, Sam

    2018-05-01

    How distinct are visual memory representations from visual perception? Although evidence suggests that briefly remembered stimuli are represented within early visual cortices, the degree to which these memory traces resemble true visual representations remains something of a mystery. Here, we tested whether both visual memory and perception succumb to a seemingly ubiquitous neural computation: normalization. Observers were asked to remember the contrast of visual stimuli, which were pitted against each other to promote normalization either in perception or in visual memory. Our results revealed robust normalization between visual representations in perception, yet no signature of normalization occurring between working memory stores: neither between representations in memory nor between memory representations and visual inputs. These results provide unique insight into the nature of visual memory representations, illustrating that visual memory representations follow a different set of computational rules, bypassing normalization, a canonical visual computation.
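
    For readers unfamiliar with the computation probed in this record, the canonical divisive normalization model from the vision literature (a textbook formulation, not a result of this study) writes the response of unit i as its own drive divided by the pooled drive of a population of neighbouring units:

        R_i = \frac{\gamma \, D_i^{\,n}}{\sigma^{n} + \sum_j D_j^{\,n}}

    where D_i is the driving input (for example, stimulus contrast), n is an exponent, \sigma is the semi-saturation constant, and \gamma scales the response. The study asks whether representations held in visual working memory enter this normalization pool; the reported answer is that they do not.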

  5. Effects of visual attention on chromatic and achromatic detection sensitivities.

    Science.gov (United States)

    Uchikawa, Keiji; Sato, Masayuki; Kuwamura, Keiko

    2014-05-01

    Visual attention has a significant effect on various visual functions, such as response time, detection and discrimination sensitivity, and color appearance. It has been suggested that visual attention may affect visual functions in the early visual pathways. In this study we examined selective effects of visual attention on sensitivities of the chromatic and achromatic pathways to clarify whether visual attention modifies responses in the early visual system. We used a dual task paradigm in which the observer detected a peripheral test stimulus presented at 4 deg eccentricity while the observer concurrently carried out an attention task in the central visual field. Experiment 1 confirmed that peripheral spectral sensitivities were reduced more for short and long wavelengths than for middle wavelengths with the central attention task, so that visual attention changed the shape of the spectral sensitivity function. This indicated that visual attention affected the chromatic response more strongly than the achromatic response. Experiment 2 showed that detection thresholds increased to a greater degree in the red-green and yellow-blue chromatic directions than in the white-black achromatic direction in the dual task condition. In experiment 3 we showed that the peripheral threshold elevations depended on the combination of color directions of the central and peripheral stimuli. Since the chromatic and achromatic responses are separately processed in the early visual pathways, the present results provided additional evidence that visual attention affects responses in the early visual pathways.

  6. Optimization of Visual Information Presentation for Visual Prosthesis

    Directory of Open Access Journals (Sweden)

    Fei Guo

    2018-01-01

    Full Text Available Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis.

  7. Optimization of Visual Information Presentation for Visual Prosthesis

    Science.gov (United States)

    Gao, Yong

    2018-01-01

    Visual prosthesis applying electrical stimulation to restore visual function for the blind has promising prospects. However, due to the low resolution, limited visual field, and the low dynamic range of the visual perception, huge loss of information occurred when presenting daily scenes. The ability of object recognition in real-life scenarios is severely restricted for prosthetic users. To overcome the limitations, optimizing the visual information in the simulated prosthetic vision has been the focus of research. This paper proposes two image processing strategies based on a salient object detection technique. The two processing strategies enable the prosthetic implants to focus on the object of interest and suppress the background clutter. Psychophysical experiments show that techniques such as foreground zooming with background clutter removal and foreground edge detection with background reduction have positive impacts on the task of object recognition in simulated prosthetic vision. By using edge detection and zooming technique, the two processing strategies significantly improve the recognition accuracy of objects. We can conclude that the visual prosthesis using our proposed strategy can assist the blind to improve their ability to recognize objects. The results will provide effective solutions for the further development of visual prosthesis. PMID:29731769
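
    The two records above describe the image processing strategies (foreground zooming with background removal, and foreground edge detection with background reduction) only in prose. The Python/OpenCV sketch below illustrates the general idea; the Otsu-threshold foreground mask is a stand-in assumption for the salient object detector used in the paper, and the 32 x 32 output grid is likewise an assumed stand-in for the prosthetic resolution.

        import cv2
        import numpy as np

        def foreground_strategies(gray: np.ndarray, out_size=(32, 32)):
            """Illustrative stand-in for the two simulated-prosthetic-vision strategies.
            Assumes a grayscale image containing one bright object of interest."""
            _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
            x, y, w, h = cv2.boundingRect(cv2.findNonZero(mask))
            foreground = gray[y:y + h, x:x + w]

            # Strategy 1: zoom the object of interest to fill the low-resolution grid.
            zoomed = cv2.resize(foreground, out_size, interpolation=cv2.INTER_AREA)

            # Strategy 2: keep only the object's edges, suppressing background clutter.
            edges = cv2.Canny(foreground, 50, 150)
            edges = cv2.resize(edges, out_size, interpolation=cv2.INTER_NEAREST)
            return zoomed, edges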

  8. VAAPA: a web platform for visualization and analysis of alternative polyadenylation.

    Science.gov (United States)

    Guan, Jinting; Fu, Jingyi; Wu, Mingcheng; Chen, Longteng; Ji, Guoli; Quinn Li, Qingshun; Wu, Xiaohui

    2015-02-01

    Polyadenylation [poly(A)] is an essential process during the maturation of most mRNAs in eukaryotes. Alternative polyadenylation (APA) as an important layer of gene expression regulation has been increasingly recognized in various species. Here, a web platform for visualization and analysis of alternative polyadenylation (VAAPA) was developed. This platform can visualize the distribution of poly(A) sites and poly(A) clusters of a gene or a section of a chromosome. It can also highlight genes with switched APA sites among different conditions. VAAPA is an easy-to-use web-based tool that provides functions of poly(A) site query, data uploading, downloading, and APA sites visualization. It was designed in a multi-tier architecture and developed based on Smart GWT (Google Web Toolkit) using Java as the development language. VAAPA will be a valuable addition to the community for the comprehensive study of APA, not only by making the high quality poly(A) site data more accessible, but also by providing users with numerous valuable functions for poly(A) site analysis and visualization. Copyright © 2014 Elsevier Ltd. All rights reserved.

  9. Data Visualization and Storytelling: Students Showcasing Innovative Work on the NASA Hyperwall

    Science.gov (United States)

    Hankin, E. R.; Hasan, M.; Williams, B. M.; Harwell, D. E.

    2017-12-01

    Visual storytelling can be used to quickly and effectively tell a story about data and scientific research, with powerful visuals driving a deeper level of engagement. In 2016, the American Geophysical Union (AGU) launched a pilot contest with a grant from NASA to fund students to travel to the AGU Fall Meeting to present innovative data visualizations with fascinating stories on the NASA Hyperwall. This presentation will discuss the purpose of the contest and provide highlights. Additionally, the presentation will feature Mejs Hasan, one of the 2016 contest grand prize winners, who will discuss her award-winning research utilizing Landsat visual data, MODIS Enhanced Vegetation Index data, and NOAA nightlight data to study the effects of both drought and war on the Middle East.

  10. Adaptation effects in static postural control by providing simultaneous visual feedback of center of pressure and center of gravity.

    Science.gov (United States)

    Takeda, Kenta; Mani, Hiroki; Hasegawa, Naoya; Sato, Yuki; Tanaka, Shintaro; Maejima, Hiroshi; Asaka, Tadayoshi

    2017-07-19

    The benefit of visual feedback of the center of pressure (COP) on quiet standing is still debatable. This study aimed to investigate the adaptation effects of visual feedback training using both the COP and center of gravity (COG) during quiet standing. Thirty-four healthy young adults were divided into three groups randomly (COP + COG, COP, and control groups). A force plate was used to calculate the coordinates of the COP in the anteroposterior (COP_AP) and mediolateral (COP_ML) directions. A motion analysis system was used to calculate the coordinates of the center of mass (COM) in both directions (COM_AP and COM_ML). The coordinates of the COG in the AP direction (COG_AP) were obtained from the force plate signals. Augmented visual feedback was presented on a screen in the form of fluctuation circles in the vertical direction that moved upward as the COP_AP and/or COG_AP moved forward and vice versa. The COP + COG group received the real-time COP_AP and COG_AP feedback simultaneously, whereas the COP group received the real-time COP_AP feedback only. The control group received no visual feedback. In the training session, the COP + COG group was required to maintain an even distance between the COP_AP and COG_AP and reduce the COG_AP fluctuation, whereas the COP group was required to reduce the COP_AP fluctuation while standing on a foam pad. In test sessions, participants were instructed to keep their standing posture as quiet as possible on the foam pad before (pre-session) and after (post-session) the training sessions. In the post-session, the velocity and root mean square of COM_AP in the COP + COG group were lower than those in the control group. In addition, the absolute value of the sum of the COP - COM distances in the COP + COG group was lower than that in the COP group. Furthermore, positive correlations were found between the COM_AP velocity and COP - COM parameters. The results suggest that the novel visual feedback
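
    The outcome measures named in this record (root mean square, velocity, and the summed COP - COM distance) are standard sway statistics; a minimal NumPy sketch of how they are commonly computed from sampled anteroposterior traces is given below. Variable names, units, and the sampling rate are assumptions for illustration, not values taken from the study.

        import numpy as np

        def sway_metrics(com_ap: np.ndarray, cop_ap: np.ndarray, fs: float = 100.0):
            """Common quiet-standing measures for anteroposterior (AP) traces.
            com_ap, cop_ap: position time series in metres; fs: sampling rate in Hz."""
            com_centered = com_ap - com_ap.mean()
            rms = np.sqrt(np.mean(com_centered ** 2))              # RMS of COM_AP displacement
            mean_velocity = np.mean(np.abs(np.diff(com_ap))) * fs  # mean absolute COM_AP velocity
            cop_com_distance = np.abs(np.sum(cop_ap - com_ap))     # |sum of COP_AP - COM_AP differences|
            return rms, mean_velocity, cop_com_distance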

  11. Explanatory and illustrative visualization of special and general relativity.

    Science.gov (United States)

    Weiskopf, Daniel; Borchers, Marc; Ertl, Thomas; Falk, Martin; Fechtig, Oliver; Frank, Regine; Grave, Frank; King, Andreas; Kraus, Ute; Müller, Thomas; Nollert, Hans-Peter; Rica Mendez, Isabel; Ruder, Hanns; Schafhitzel, Tobias; Schär, Sonja; Zahn, Corvin; Zatloukal, Michael

    2006-01-01

    This paper describes methods for explanatory and illustrative visualizations used to communicate aspects of Einstein's theories of special and general relativity, their geometric structure, and of the related fields of cosmology and astrophysics. Our illustrations target a general audience of laypersons interested in relativity. We discuss visualization strategies, motivated by physics education and the didactics of mathematics, and describe what kind of visualization methods have proven to be useful for different types of media, such as still images in popular science magazines, film contributions to TV shows, oral presentations, or interactive museum installations. Our primary approach is to adopt an egocentric point of view: The recipients of a visualization participate in a visually enriched thought experiment that allows them to experience or explore a relativistic scenario. In addition, we often combine egocentric visualizations with more abstract illustrations based on an outside view in order to provide several presentations of the same phenomenon. Although our visualization tools often build upon existing methods and implementations, the underlying techniques have been improved by several novel technical contributions like image-based special relativistic rendering on GPUs, special relativistic 4D ray tracing for accelerating scene objects, an extension of general relativistic ray tracing to manifolds described by multiple charts, GPU-based interactive visualization of gravitational light deflection, as well as planetary terrain rendering. The usefulness and effectiveness of our visualizations are demonstrated by reporting on experiences with, and feedback from, recipients of visualizations and collaborators.

  12. Visual memory and visual mental imagery recruit common control and sensory regions of the brain.

    Science.gov (United States)

    Slotnick, Scott D; Thompson, William L; Kosslyn, Stephen M

    2012-01-01

    Separate lines of research have shown that visual memory and visual mental imagery are mediated by frontal-parietal control regions and can rely on occipital-temporal sensory regions of the brain. We used fMRI to assess the degree to which visual memory and visual mental imagery rely on the same neural substrates. During the familiarization/study phase, participants studied drawings of objects. During the test phase, words corresponding to old and new objects were presented. In the memory test, participants responded "remember," "know," or "new." In the imagery test, participants responded "high vividness," "moderate vividness," or "low vividness." Visual memory (old-remember) and visual imagery (old-high vividness) were commonly associated with activity in frontal-parietal control regions and occipital-temporal sensory regions. In addition, visual memory produced greater activity than visual imagery in parietal and occipital-temporal regions. The present results suggest that visual memory and visual imagery rely on highly similar--but not identical--cognitive processes.

  13. Improving visual skills: II-remote assessment via Internet.

    Science.gov (United States)

    Powers, Maureen K; Grisham, J David; Wurm, Janice K; Wurm, William C

    2009-02-01

    Even though poor readers often have poor visual skills, such as binocular coordination and oculomotor control, students' visual skills are rarely assessed. Computer assessments have the potential to assist in identifying students whose visual skills are deficient. This study compared assessments made by an Internet-based computer orthoptics program with those of an on-site vision therapist. Students (N = 41) in grades 1 through 8, reading at least 2 levels below grade, were assessed for visual skill dysfunction (including binocular fusion and tracking ability) by a vision therapist at their school in Wisconsin. The therapist determined whether the student had adequate visual skills based on clinical and behavioral observations. A "remote" investigator located in California determined the adequacy of accommodative facility, tracking, and vergence skills in the same students, based on quantitative progress through the modules of an Internet-based computer orthoptics training program during 3 assessment sessions. The on-site therapist made 33 referrals for possible visual skills training (80%). The remote investigator made 25 referrals (61%), all of which were consistent with referrals made by the on-site therapist; thus, no false-positives occurred when using the remote assessment technique. The 8 additional referrals by the therapist were attributed to the ability to observe student behavior during assessment. Remote assessment of visual skills via an Internet orthoptics program may provide a simple means to detect visual skill problems experienced by poor readers.

  14. Helping Educators Find Visualizations and Teaching Materials Just-in-Time

    Science.gov (United States)

    McDaris, J.; Manduca, C. A.; MacDonald, R. H.

    2005-12-01

    Major events and natural disasters like hurricanes and tsunamis provide geoscience educators with powerful teachable moments to engage their students with class content. In order to take advantage of these opportunities, educators need quality topical resources related to current earth science events. The web has become an excellent vehicle for disseminating this type of resource. In response to the 2004 Indian Ocean Earthquake and to Hurricane Katrina's devastating impact on the US Gulf Coast, the On the Cutting Edge professional development program developed collections of visualizations for use in teaching (serc.carleton.edu/NAGTWorkshops/visualization/collections/tsunami.html, serc.carleton.edu/NAGTWorkshops/visualization/collections/hurricanes.html). These sites are collections of links to visualizations and other materials that can support the efforts of faculty, teachers, and those engaged in public outreach. They bring together resources created by researchers, government agencies and respected media sources and organize them for easy use by educators. Links are selected to provide a variety of different types of visualizations (e.g. photographic images, animations, satellite imagery) and to assist educators in teaching about the geologic event reported in the news, associated Earth science concepts, and related topics of high interest. The cited links are selected from quality sources and are reviewed by SERC staff before being included on the page. Geoscience educators are encouraged to recommend links and supporting materials and to comment on the available resources. In this way the collection becomes more complete and its quality is enhanced. These sites have received substantial use (Tsunami - 77,000 visitors in the first 3 months, Hurricanes - 2,500 visitors in the first week), indicating that in addition to use by educators, they are being used by the general public seeking information about the events. Thus they provide an effective mechanism for

  15. The Molecule Cloud - compact visualization of large collections of molecules

    Directory of Open Access Journals (Sweden)

    Ertl Peter

    2012-07-01

    Full Text Available Abstract Background: Analysis and visualization of large collections of molecules is one of the most frequent challenges cheminformatics experts in the pharmaceutical industry are facing. Various sophisticated methods are available to perform this task, including clustering, dimensionality reduction or scaffold frequency analysis. In any case, however, viewing and analyzing large tables with molecular structures is necessary. We present a new visualization technique, providing basic information about the composition of molecular data sets at a single glance. Summary: A method is presented here allowing visual representation of the most common structural features of chemical databases in the form of a cloud diagram. The frequency of molecules containing a particular substructure is indicated by the size of the respective structural image. The method is useful to quickly perceive the most prominent structural features present in the data set. This approach was inspired by popular word cloud diagrams that are used to visualize textual information in a compact form. Therefore we call this approach “Molecule Cloud”. The method also supports visualization of additional information, for example the biological activity of molecules containing a scaffold or the protein target class typical for particular scaffolds, by color coding. A detailed description of the algorithm is provided, allowing easy implementation of the method by any cheminformatics toolkit. The layout algorithm is available as open source Java code. Conclusions: Visualization of large molecular data sets using the Molecule Cloud approach allows scientists to get information about the composition of molecular databases and their most frequent structural features easily. The method may be used in areas where analysis of large molecular collections is needed, for example processing of high throughput screening results, virtual screening or compound purchasing. Several example visualizations of large
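
    The authors release the layout algorithm as open source Java code; the record does not reproduce it. Purely to illustrate the first step of such a diagram, counting how often each scaffold occurs and scaling its depiction size accordingly, here is a short Python sketch. The use of RDKit Murcko scaffolds and the square-root size scaling are assumptions of this sketch, not details taken from the paper.

        from collections import Counter
        from math import sqrt

        from rdkit import Chem
        from rdkit.Chem.Scaffolds import MurckoScaffold

        def scaffold_sizes(smiles_list, min_size=20, max_size=120):
            """Count Murcko scaffolds in a compound collection and map each
            scaffold's frequency to a drawing size, word-cloud style."""
            counts = Counter()
            for smi in smiles_list:
                mol = Chem.MolFromSmiles(smi)
                if mol is None:
                    continue
                scaffold = MurckoScaffold.GetScaffoldForMol(mol)
                counts[Chem.MolToSmiles(scaffold)] += 1
            if not counts:
                return {}
            largest = sqrt(max(counts.values()))
            return {
                scaf: min_size + (max_size - min_size) * sqrt(n) / largest
                for scaf, n in counts.items()
            }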

  16. Visual-auditory integration for visual search: a behavioral study in barn owls

    Directory of Open Access Journals (Sweden)

    Yael eHazan

    2015-02-01

    Full Text Available Barn owls are nocturnal predators that rely on both vision and hearing for survival. The optic tectum of barn owls, a midbrain structure involved in selective attention, has been used as a model for studying visual-auditory integration at the neuronal level. However, behavioral data on visual-auditory integration in barn owls are lacking. The goal of this study was to examine if the integration of visual and auditory signals contributes to the process of guiding attention towards salient stimuli. We attached miniature wireless video cameras on barn owls' heads (OwlCam) to track their target of gaze. We first provide evidence that the area centralis (a retinal area with a maximal density of photoreceptors) is used as a functional fovea in barn owls. Thus, by mapping the projection of the area centralis on the OwlCam's video frame, it is possible to extract the target of gaze. For the experiment, owls were positioned on a high perch and four food items were scattered in a large arena on the floor. In addition, a hidden loudspeaker was positioned in the arena. The positions of the food items and speaker were changed every session. Video sequences from the OwlCam were saved for offline analysis while the owls spontaneously scanned the room and the food items with abrupt gaze shifts (head saccades). From time to time during the experiment, a brief sound was emitted from the speaker. The fixation points immediately following the sounds were extracted and the distances between the gaze position and the nearest items and loudspeaker were measured. The head saccades were rarely towards the location of the sound source but to salient visual features in the room, such as the door knob or the food items. However, among the food items, the one closest to the loudspeaker had the highest probability of attracting a gaze shift. This result supports the notion that auditory signals are integrated with visual information for the selection of the next visual search

  17. Health figures: an open source JavaScript library for health data visualization.

    Science.gov (United States)

    Ledesma, Andres; Al-Musawi, Mohammed; Nieminen, Hannu

    2016-03-22

    The way we look at data has a great impact on how we can understand it, particularly when the data is related to health and wellness. Due to the increased use of self-tracking devices and the ongoing shift towards preventive medicine, better understanding of our health data is an important part of improving the general welfare of the citizens. Electronic Health Records, self-tracking devices and mobile applications provide a rich variety of data but it often becomes difficult to understand. We implemented the hFigures library, inspired by the hGraph visualization, with additional improvements. The purpose of the library is to provide a visual representation of the evolution of health measurements in a complete and useful manner. We researched the usefulness and usability of the library by building an application for health data visualization in a health coaching program. We performed a user evaluation with Heuristic Evaluation, Controlled User Testing and Usability Questionnaires. In the Heuristic Evaluation the average response was 6.3 out of 7 points and the Cognitive Walkthrough done by usability experts indicated no design or mismatch errors. In the CSUQ usability test the system obtained an average score of 6.13 out of 7, and in the ASQ usability test the overall satisfaction score was 6.64 out of 7. We developed hFigures, an open source library for visualizing a complete, accurate and normalized graphical representation of health data. The idea is based on the concept of the hGraph but it provides additional key features, including a comparison of multiple health measurements over time. We conducted a usability evaluation of the library as a key component of an application for health and wellness monitoring. The results indicate that the data visualization library was helpful in assisting users in understanding health data and its evolution over time.
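
    hFigures itself is a JavaScript library and its API is not shown in the record, so the sketch below does not use it. It only illustrates, in Python, the underlying idea of normalizing heterogeneous health measurements onto one comparable scale before plotting them together; the reference ranges and values are invented for the example.

        def normalize_measurement(value: float, low: float, high: float) -> float:
            """Map a measurement onto a common scale: 1.0 is the middle of the
            reference range, 0.0 and 2.0 are its lower and upper bounds."""
            mid = (low + high) / 2.0
            half_range = (high - low) / 2.0
            return 1.0 + (value - mid) / half_range

        # Invented example values and reference ranges, for illustration only:
        samples = {
            "total_cholesterol_mmol_l": (5.8, 3.0, 5.2),
            "resting_heart_rate_bpm": (62.0, 50.0, 80.0),
        }
        normalized = {name: normalize_measurement(v, lo, hi)
                      for name, (v, lo, hi) in samples.items()}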

  18. Differential contribution of visual and auditory information to accurately predict the direction and rotational motion of a visual stimulus.

    Science.gov (United States)

    Park, Seoung Hoon; Kim, Seonjin; Kwon, MinHyuk; Christou, Evangelos A

    2016-03-01

    Vision and auditory information are critical for perception and to enhance the ability of an individual to respond accurately to a stimulus. However, it is unknown whether visual and auditory information contribute differentially to identify the direction and rotational motion of the stimulus. The purpose of this study was to determine the ability of an individual to accurately predict the direction and rotational motion of the stimulus based on visual and auditory information. In this study, we recruited 9 expert table-tennis players and used table-tennis service as our experimental model. Participants watched recorded services with different levels of visual and auditory information. The goal was to anticipate the direction of the service (left or right) and the rotational motion of service (topspin, sidespin, or cut). We recorded their responses and quantified the following outcomes: (i) directional accuracy and (ii) rotational motion accuracy. The response accuracy was the accurate predictions relative to the total number of trials. The ability of the participants to predict the direction of the service accurately increased with additional visual information but not with auditory information. In contrast, the ability of the participants to predict the rotational motion of the service accurately increased with the addition of auditory information to visual information but not with additional visual information alone. In conclusion, this finding demonstrates that visual information enhances the ability of an individual to accurately predict the direction of the stimulus, whereas additional auditory information enhances the ability of an individual to accurately predict the rotational motion of stimulus.

  19. Object attributes combine additively in visual search

    OpenAIRE

    Pramod, R. T.; Arun, S. P.

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in in...

  20. The contributions of visual and central attention to visual working memory.

    Science.gov (United States)

    Souza, Alessandra S; Oberauer, Klaus

    2017-10-01

    We investigated the role of two kinds of attention, visual and central attention, for the maintenance of visual representations in working memory (WM). In Experiment 1 we directed attention to individual items in WM by presenting cues during the retention interval of a continuous delayed-estimation task, and instructing participants to think of the cued items. Attending to items improved recall commensurate with the frequency with which items were attended (0, 1, or 2 times). Experiments 1 and 3 further tested which kind of attention, visual or central, was involved in WM maintenance. We assessed the dual-task costs of two types of distractor tasks, one tapping sustained visual attention and one tapping central attention. Only the central attention task yielded substantial dual-task costs, implying that central attention substantially contributes to maintenance of visual information in WM. Experiment 2 confirmed that the visual-attention distractor task was demanding enough to disrupt performance in a task relying on visual attention. We combined the visual-attention and the central-attention distractor tasks with a multiple object tracking (MOT) task. Distracting visual attention, but not central attention, impaired MOT performance. Jointly, the three experiments provide a double dissociation between visual and central attention, and between visual WM and visual object tracking: whereas tracking multiple targets across the visual field depends on visual attention, visual WM depends mostly on central attention.

  1. Neural Anatomy of Primary Visual Cortex Limits Visual Working Memory.

    Science.gov (United States)

    Bergmann, Johanna; Genç, Erhan; Kohler, Axel; Singer, Wolf; Pearson, Joel

    2016-01-01

    Despite the immense processing power of the human brain, working memory storage is severely limited, and the neuroanatomical basis of these limitations has remained elusive. Here, we show that the stable storage limits of visual working memory for over 9 s are bound by the precise gray matter volume of primary visual cortex (V1), defined by fMRI retinotopic mapping. Individuals with a bigger V1 tended to have greater visual working memory storage. This relationship was present independently for both surface size and thickness of V1 but absent in V2, V3 and for non-visual working memory measures. Additional whole-brain analyses confirmed the specificity of the relationship to V1. Our findings indicate that the size of primary visual cortex plays a critical role in limiting what we can hold in mind, acting like a gatekeeper in constraining the richness of working mental function. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.

  2. Visual patient records

    NARCIS (Netherlands)

    Luu, M.D.

    2015-01-01

    Patient information is often complex and fragmented; visualization can help to obtain and communicate insights. To move from paper medical records to interactive and visual patient records is a big challenge. This project aims to move towards this ultimate goal by providing an interactive prototype

  3. A Novel Marking Reader for Progressive Addition Lenses Based on Gabor Holography.

    Science.gov (United States)

    Perucho, Beatriz; Picazo-Bueno, José Angel; Micó, Vicente

    2016-05-01

    Progressive addition lenses (PALs) are marked with permanent engraved marks (PEMs) at standardized locations. Permanent engraved marks are very useful through the manufacturing and mounting processes, act as locator marks to re-ink the removable marks, and contain useful information about the PAL. However, PEMs are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted lenses or with antireflection or scratch-resistant coatings. The aim of this article is to present a new generation of portable marking reader based on an extremely simplified concept for visualization and identification of PEMs in PALs. Permanent engraved marks on different PALs are visualized using classical Gabor holography as underlying principle. Gabor holography allows phase sample visualization with adjustable magnification and can be implemented in either classical or digital versions. Here, visual Gabor holography is used to provide a magnified defocused image of the PEMs onto a translucent visualization screen where the PEM is clearly identified. Different types of PALs (conventional, personalized, old and scratched, sunglasses, etc.) have been tested to visualize PEMs with the proposed marking reader. The PEMs are visible in every case, and variable magnification factor can be achieved simply moving up and down the PAL in the instrument. In addition, a second illumination wavelength is also tested, showing the applicability of this novel marking reader for different illuminations. A new concept of marking reader ophthalmic instrument has been presented and validated in the laboratory. The configuration involves only a commercial-grade laser diode and a visualization screen for PEM identification. The instrument is portable, economic, and easy to use, and it can be used for identifying patient's current PAL model and for marking removable PALs again or finding test points regardless of the age of the PAL, its scratches, tints, or coatings.
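
    As background to the "classical Gabor holography" the instrument relies on (the relation below is the textbook in-line holography result, not a derivation from this article): the reference beam R and the weak wave O diffracted by the engraved mark interfere on the visualization screen, giving an intensity

        I = |R + O|^2 = |R|^2 + |O|^2 + R^{*}O + RO^{*}

    The two cross terms carry the phase information of the otherwise nearly transparent engraving, which is why the defocused pattern on the screen renders the mark visible; moving the PAL up and down the beam, as described above, simply changes the geometric magnification of that pattern.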

  4. Visual explorer facilitator's guide

    CERN Document Server

    Palus, Charles J

    2010-01-01

    Grounded in research and practice, the Visual Explorer™ Facilitator's Guide provides a method for supporting collaborative, creative conversations about complex issues through the power of images. The guide is available as a component in the Visual Explorer Facilitator's Letter-sized Set, Visual Explorer Facilitator's Post card-sized Set, Visual Explorer Playing Card-sized Set, and is also available as a stand-alone title for purchase to assist multiple tool users in an organization.

  5. Big Data Visualization Tools

    OpenAIRE

    Bikakis, Nikos

    2018-01-01

    Data visualization is the presentation of data in a pictorial or graphical format, and a data visualization tool is the software that generates this presentation. Data visualization provides users with intuitive means to interactively explore and analyze data, enabling them to effectively identify interesting patterns and infer correlations and causalities, and supporting sense-making activities.

  6. Audio-Visual Speech Recognition Using MPEG-4 Compliant Visual Features

    Directory of Open Access Journals (Sweden)

    Petar S. Aleksic

    2002-11-01

    We describe an audio-visual automatic continuous speech recognition system, which significantly improves speech recognition performance over a wide range of acoustic noise levels, as well as under clean audio conditions. The system utilizes facial animation parameters (FAPs) supported by the MPEG-4 standard for the visual representation of speech. We also describe a robust and automatic algorithm we have developed to extract FAPs from visual data, which does not require hand labeling or extensive training procedures. Principal component analysis (PCA) was performed on the FAPs in order to decrease the dimensionality of the visual feature vectors, and the derived projection weights were used as visual features in the audio-visual automatic speech recognition (ASR) experiments. Both single-stream and multistream hidden Markov models (HMMs) were used to model the ASR system, integrate audio and visual information, and perform relatively large-vocabulary (approximately 1000 words) speech recognition experiments. The experiments performed use clean audio data and audio data corrupted by stationary white Gaussian noise at various SNRs. The proposed system reduces the word error rate (WER) by 20% to 23% relative to audio-only speech recognition WERs at various SNRs (0–30 dB with additive white Gaussian noise), and by 19% relative to the audio-only speech recognition WER under clean audio conditions.

  7. Creating visual explanations improves learning.

    Science.gov (United States)

    Bobek, Eliza; Tversky, Barbara

    2016-01-01

    Many topics in science are notoriously difficult for students to learn. Mechanisms and processes outside student experience present particular challenges. While instruction typically involves visualizations, students usually explain in words. Because visual explanations can show parts and processes of complex systems directly, creating them should have benefits beyond creating verbal explanations. We compared learning from creating visual or verbal explanations for two STEM domains, a mechanical system (bicycle pump) and a chemical system (bonding). Both kinds of explanations were analyzed for content, and learning was assessed by a post-test. For the mechanical system, creating a visual explanation increased understanding, particularly for participants of low spatial ability. For the chemical system, creating both visual and verbal explanations improved learning without new teaching. Creating a visual explanation was superior and benefitted participants of both high and low spatial ability. Visual explanations often included crucial yet invisible features. The greater effectiveness of visual explanations appears attributable to the checks they provide for completeness and coherence as well as to their roles as platforms for inference. The benefits should generalize to other domains like the social sciences, history, and archeology where important information can be visualized. Together, the findings provide support for the use of learner-generated visual explanations as a powerful learning tool.

  8. Visual teaching and learning in the fields of engineering

    Directory of Open Access Journals (Sweden)

    Kyvete S. Shatri

    2015-11-01

    Engineering education today is faced with numerous demands that are closely connected with a globalized economy. One of these requirements is to draw the engineers of the future, who are characterized by strong analytical skills, creativity, ingenuity, professionalism, intercultural communication and leadership. To achieve this, effective teaching methods should be used to facilitate and enhance the learning of students and their performance in general, making them able to cope with the market demands of a globalized economy. One of these methods is visualization, a very important method that increases the learning of students. A visual approach in science and in engineering also increases communication and critical thinking and provides an analytical approach to various problems. Therefore, this research aims to investigate the effect of the use of visualization in the process of teaching and learning in engineering fields and to encourage teachers and students to use visual methods for teaching and learning. The results of this research highlight the positive effect that the use of visualization has on the learning process of students and their overall performance. In addition, innovative teaching methods have a good effect on the improvement of the situation. Visualization motivates students to learn, making them more cooperative and developing their communication skills.

  9. Stereoscopic visualization in curved spacetime: seeing deep inside a black hole

    International Nuclear Information System (INIS)

    Hamilton, Andrew J S; Polhemus, Gavin

    2010-01-01

    Stereoscopic visualization adds an additional dimension to the viewer's experience, giving them a sense of distance. In a general relativistic visualization, distance can be measured in a variety of ways. We argue that the affine distance, which matches the usual notion of distance in flat spacetime, is a natural distance to use in curved spacetime. As an example, we apply affine distance to the visualization of the interior of a black hole. Affine distance is not the distance perceived with normal binocular vision in curved spacetime. However, the failure of binocular vision is simply a limitation of animals that have evolved in flat spacetime, not a fundamental obstacle to depth perception in curved spacetime. Trinocular vision would provide superior depth perception.

  10. End-User Development of Information Visualization

    DEFF Research Database (Denmark)

    Pantazos, Kostas; Lauesen, Søren; Vatrapu, Ravi

    2013-01-01

    This paper investigates End-User Development of Information Visualization. More specifically, we investigated how existing visualization tools allow end-user developers to construct visualizations. End-user developers have some developing or scripting skills to perform relatively advanced tasks such as data manipulation, but no formal training in programming. 18 visualization tools were surveyed from an end-user developer perspective. The results of this survey study show that end-user developers need better tools to create and modify custom visualizations. A closer collaboration between End-User Development and Information Visualization researchers could contribute towards the development of better tools to support custom visualizations. In addition, as empirical evaluations of these tools are lacking, both research communities should focus more on this aspect. The study serves as a starting point...

  11. Models Provide Specificity: Testing a Proposed Mechanism of Visual Working Memory Capacity Development

    Science.gov (United States)

    Simmering, Vanessa R.; Patterson, Rebecca

    2012-01-01

    Numerous studies have established that visual working memory has a limited capacity that increases during childhood. However, debate continues over the source of capacity limits and its developmental increase. Simmering (2008) adapted a computational model of spatial cognitive development, the Dynamic Field Theory, to explain not only the source…

  12. SeeDB: Efficient Data-Driven Visualization Recommendations to Support Visual Analytics.

    Science.gov (United States)

    Vartak, Manasi; Rahman, Sajjadur; Madden, Samuel; Parameswaran, Aditya; Polyzotis, Neoklis

    2015-09-01

    Data analysts often build visualizations as the first step in their analytical workflow. However, when working with high-dimensional datasets, identifying visualizations that show relevant or desired trends in data can be laborious. We propose SeeDB, a visualization recommendation engine to facilitate fast visual analysis: given a subset of data to be studied, SeeDB intelligently explores the space of visualizations, evaluates promising visualizations for trends, and recommends those it deems most "useful" or "interesting". The two major obstacles in recommending interesting visualizations are (a) scale : evaluating a large number of candidate visualizations while responding within interactive time scales, and (b) utility : identifying an appropriate metric for assessing interestingness of visualizations. For the former, SeeDB introduces pruning optimizations to quickly identify high-utility visualizations and sharing optimizations to maximize sharing of computation across visualizations. For the latter, as a first step, we adopt a deviation-based metric for visualization utility, while indicating how we may be able to generalize it to other factors influencing utility. We implement SeeDB as a middleware layer that can run on top of any DBMS. Our experiments show that our framework can identify interesting visualizations with high accuracy. Our optimizations lead to multiple orders of magnitude speedup on relational row and column stores and provide recommendations at interactive time scales. Finally, we demonstrate via a user study the effectiveness of our deviation-based utility metric and the value of recommendations in supporting visual analytics.
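
    As a rough illustration of the deviation-based idea (not SeeDB's exact metric, pruning, or sharing optimizations), the sketch below scores candidate group-by/aggregate views by how far their normalized aggregate distribution over a target subset deviates from the same distribution over the reference data. The column names and the Euclidean distance are illustrative assumptions.

      # Sketch of a deviation-based utility score in the spirit of SeeDB: a candidate
      # view (group-by attribute + aggregated measure) is "interesting" when its
      # normalized distribution over the target subset deviates from the reference data.
      import numpy as np
      import pandas as pd

      def normalized_distribution(df, group_col, value_col):
          agg = df.groupby(group_col)[value_col].mean()
          total = agg.sum()
          return agg / total if total else agg

      def deviation_utility(target, reference, group_col, value_col):
          p = normalized_distribution(target, group_col, value_col)
          q = normalized_distribution(reference, group_col, value_col)
          p, q = p.align(q, fill_value=0.0)            # line up categories
          return float(np.sqrt(((p - q) ** 2).sum()))  # distance between distributions

      def recommend(target, reference, dims, measures, k=3):
          # Rank every (group-by, measure) pair and return the k largest deviations.
          scored = [((d, m), deviation_utility(target, reference, d, m))
                    for d in dims for m in measures]
          return sorted(scored, key=lambda s: s[1], reverse=True)[:k]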

  13. Figure–ground organization and the emergence of proto-objects in the visual cortex

    OpenAIRE

    von der Heydt, Rüdiger

    2015-01-01

    A long history of studies of perception has shown that the visual system organizes the incoming information early on, interpreting the 2D image in terms of a 3D world and producing a structure that provides perceptual continuity and enables object-based attention. Recordings from monkey visual cortex show that many neurons, especially in area V2, are selective for border ownership. These neurons are edge selective and have ordinary classical receptive fields, but in addition their responses a...

  14. Visualizing the Computational Intelligence Field

    NARCIS (Netherlands)

    L. Waltman (Ludo); J.H. van den Berg (Jan); U. Kaymak (Uzay); N.J.P. van Eck (Nees Jan)

    2006-01-01

    In this paper, we visualize the structure and the evolution of the computational intelligence (CI) field. Based on our visualizations, we analyze the way in which the CI field is divided into several subfields. The visualizations provide insight into the characteristics of each subfield

  15. Providing Access and Visualization to Global Cloud Properties from GEO Satellites

    Science.gov (United States)

    Chee, T.; Nguyen, L.; Minnis, P.; Spangenberg, D.; Palikonda, R.; Ayers, J. K.

    2015-12-01

    Providing public access to cloud macro and microphysical properties is a key concern for the NASA Langley Research Center Cloud and Radiation Group. This work describes a tool and method that allows end users to easily browse and access cloud information that is otherwise difficult to acquire and manipulate. The core of the tool is an application-programming interface that is made available to the public. One goal of the tool is to provide a demonstration to end users so that they can use the dynamically generated imagery as an input into their own work flows for both image generation and cloud product requisition. This project builds upon NASA Langley Cloud and Radiation Group's experience with making real-time and historical satellite cloud product imagery accessible and easily searchable. As we see the increasing use of virtual supply chains that provide additional value at each link there is value in making satellite derived cloud product information available through a simple access method as well as allowing users to browse and view that imagery as they need rather than in a manner most convenient for the data provider. Using the Open Geospatial Consortium's Web Processing Service as our access method, we describe a system that uses a hybrid local and cloud based parallel processing system that can return both satellite imagery and cloud product imagery as well as the binary data used to generate them in multiple formats. The images and cloud products are sourced from multiple satellites and also "merged" datasets created by temporally and spatially matching satellite sensors. Finally, the tool and API allow users to access information that spans the time ranges that our group has information available. In the case of satellite imagery, the temporal range can span the entire lifetime of the sensor.
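
    For readers unfamiliar with the access method named above, the OGC Web Processing Service standard exposes GetCapabilities and Execute operations over plain HTTP. The sketch below issues generic WPS 1.0.0 key-value requests with the requests library; the endpoint URL, process identifier, and input names are hypothetical placeholders rather than the Langley group's actual service.

      # Generic OGC WPS 1.0.0 key-value requests; endpoint and inputs are hypothetical.
      import requests

      WPS_ENDPOINT = "https://example.org/wps"  # placeholder, not the real service

      def get_capabilities():
          params = {"service": "WPS", "request": "GetCapabilities", "version": "1.0.0"}
          r = requests.get(WPS_ENDPOINT, params=params, timeout=60)
          r.raise_for_status()
          return r.text  # XML document listing the processes the server offers

      def execute(process_id, inputs):
          data_inputs = ";".join(f"{k}={v}" for k, v in inputs.items())
          params = {"service": "WPS", "request": "Execute", "version": "1.0.0",
                    "identifier": process_id, "datainputs": data_inputs}
          r = requests.get(WPS_ENDPOINT, params=params, timeout=300)
          r.raise_for_status()
          return r.content  # e.g. imagery bytes or an XML response document

      # Hypothetical usage: request cloud-product imagery for a time and region.
      # png = execute("cloud_product_image",
      #               {"product": "cloud_top_height",
      #                "time": "2015-07-01T12:00:00Z",
      #                "bbox": "-100,30,-80,45"})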

  16. Efficient analysis using custom interactive visualization tools at a Superfund site

    International Nuclear Information System (INIS)

    Williams, G.; Durham, L.

    1992-01-01

    Custom visualization analysis programs were developed and used to analyze contaminant transport calculations from a three-dimensional numerical groundwater flow model developed for a Department of Energy Superfund site. The site hydrogeology, which is highly heterogeneous, includes both fractured limestone and dolomite, and alluvium deposits. Three-dimensional interactive visualization techniques were used to understand and analyze the three-dimensional, double-porosity modeling results. A graphical, object-oriented programming environment was applied to efficiently develop custom visualization programs in a coarse-grained data structure language. Comparisons were made, using the results from the three-dimensional, finite-difference model, between traditional two-dimensional analyses (contour and vector plots) and interactive three-dimensional techniques. Subjective comparison areas include the accuracy of analysis, the ability to understand the results of three-dimensional contaminant transport simulation, and the capability to transmit the results of the analysis to the project management. In addition, a quantitative comparison was made on the time required to develop a thorough analysis of the modeling results. The conclusions from the comparative study showed that the visualization analysis provided an increased awareness of the contaminant transport mechanisms, provided new insights into contaminant migration, and resulted in a significant time savings.

  17. Road Vehicle Monitoring System Based on Intelligent Visual Internet of Things

    Directory of Open Access Journals (Sweden)

    Qingwu Li

    2015-01-01

    In recent years, with the rapid development of video surveillance infrastructure, more and more intelligent surveillance systems have employed computer vision and pattern recognition techniques. In this paper, we present a novel intelligent surveillance system used for the management of road vehicles based on Intelligent Visual Internet of Things (IVIoT). The system has the ability to extract the vehicle visual tags on the urban roads; in other words, it can label any vehicle by means of computer vision and therefore can easily recognize vehicles with visual tags. The nodes designed in the system can be installed not only on the urban roads for providing basic information but also on the mobile sensing vehicles for providing mobility support and improving sensing coverage. Visual tags mentioned in this paper consist of license plate number, vehicle color, and vehicle type and have several additional properties, such as passing spot and passing moment. Moreover, we present a fast and efficient image haze removal method to deal with hazy weather conditions. The experimental results show that the designed road vehicle monitoring system achieves an average real-time tracking accuracy of 85.80% under different conditions.

  18. Efficient analysis using custom interactive visualization tools at a Superfund site

    Energy Technology Data Exchange (ETDEWEB)

    Williams, G. [Northwestern Univ., Evanston, IL (United States); Durham, L. [Argonne National Lab., IL (United States)

    1992-12-01

    Custom visualization analysis programs were developed and used to analyze contaminant transport calculations from a three-dimensional numerical groundwater flow model developed for a Department of Energy Superfund site. The site hydrogeology, which is highly heterogeneous, includes both fractured limestone and dolomite, and alluvium deposits. Three-dimensional interactive visualization techniques were used to understand and analyze the three-dimensional, double-porosity modeling results. A graphical, object-oriented programming environment was applied to efficiently develop custom visualization programs in a coarse-grained data structure language. Comparisons were made, using the results from the three-dimensional, finite-difference model, between traditional two-dimensional analyses (contour and vector plots) and interactive three-dimensional techniques. Subjective comparison areas include the accuracy of analysis, the ability to understand the results of three-dimensional contaminant transport simulation, and the capability to transmit the results of the analysis to the project management. In addition, a quantitative comparison was made on the time required to develop a thorough analysis of the modeling results. The conclusions from the comparative study showed that the visualization analysis provided an increased awareness of the contaminant transport mechanisms, provided new insights into contaminant migration, and resulted in a significant time savings.

  19. Surface-specific additive manufacturing test artefacts

    Science.gov (United States)

    Townsend, Andrew; Racasan, Radu; Blunt, Liam

    2018-06-01

    Many test artefact designs have been proposed for use with additive manufacturing (AM) systems. These test artefacts have primarily been designed for the evaluation of AM form and dimensional performance. A series of surface-specific measurement test artefacts designed for use in the verification of AM manufacturing processes are proposed here. Surface-specific test artefacts can be made more compact because they do not require the large dimensions needed for accurate dimensional and form measurements. The series of three test artefacts are designed to provide comprehensive information pertaining to the manufactured surface. Measurement possibilities include deviation analysis, surface texture parameter data generation, sub-surface analysis, layer step analysis and build resolution comparison. The test artefacts are designed to provide easy access for measurement using conventional surface measurement techniques, for example, focus variation microscopy, stylus profilometry, confocal microscopy and scanning electron microscopy. Additionally, the test artefacts may be simply visually inspected as a comparative tool, giving a fast indication of process variation between builds. The three test artefacts are small enough to be included in every build and include built-in manufacturing traceability information, making them a convenient physical record of the build.

  20. Temporal dynamics of visual working memory.

    Science.gov (United States)

    Sobczak-Edmans, M; Ng, T H B; Chan, Y C; Chew, E; Chuang, K H; Chen, S H A

    2016-01-01

    The involvement of the human cerebellum in working memory has been well established in the last decade. However, the cerebro-cerebellar network for visual working memory is not as well defined. Our previous fMRI study showed superior and inferior cerebellar activations during a block design visual working memory task, but specific cerebellar contributions to cognitive processes in encoding, maintenance and retrieval have not yet been established. The current study examined cerebellar contributions to each of the components of visual working memory and presence of cerebellar hemispheric laterality was investigated. 40 young adults performed a Sternberg visual working memory task during fMRI scanning using a parametric paradigm. The contrast between high and low memory load during each phase was examined. We found that the most prominent activation was observed in vermal lobule VIIIb and bilateral lobule VI during encoding. Using a quantitative laterality index, we found that left-lateralized activation of lobule VIIIa was present in the encoding phase. In the maintenance phase, there was bilateral lobule VI and right-lateralized lobule VIIb activity. Changes in activation in right lobule VIIIa were present during the retrieval phase. The current results provide evidence that superior and inferior cerebellum contributes to visual working memory, with a tendency for left-lateralized activations in the inferior cerebellum during encoding and right-lateralized lobule VIIb activations during maintenance. The results of the study are in agreement with Baddeley's multi-component working memory model, but also suggest that stored visual representations are additionally supported by maintenance mechanisms that may employ verbal coding. Copyright © 2015 Elsevier Inc. All rights reserved.

  1. Confidence, Visual Research, and the Aesthetic Function

    Directory of Open Access Journals (Sweden)

    Stan Ruecker

    2007-05-01

    The goal of this article is to identify and describe one of the primary purposes of aesthetic quality in the design of computer interfaces and visualization tools. We suggest that humanists can derive advantages in visual research by acknowledging, in their efforts to advance aesthetic quality, that a significant function of aesthetics in this context is to inspire the user’s confidence. This confidence typically serves to create a sense of trust in the provider of the interface or tool. In turn, this increased trust may result in an increased willingness to engage with the object, on the basis that it demonstrates an attention to detail that promises to reward increased engagement. In addition to confidence, the aesthetic may also contribute to a heightened degree of satisfaction with having spent time using or investigating the object. In the realm of interface design and visualization research, we propose that these aesthetic functions have implications not only for the quality of interactions, but also for the results of the standard measures of performance and preference.

  2. RCSB PDB Mobile: iOS and Android mobile apps to provide data access and visualization to the RCSB Protein Data Bank.

    Science.gov (United States)

    Quinn, Gregory B; Bi, Chunxiao; Christie, Cole H; Pang, Kyle; Prlić, Andreas; Nakane, Takanori; Zardecki, Christine; Voigt, Maria; Berman, Helen M; Bourne, Philip E; Rose, Peter W

    2015-01-01

    The Research Collaboratory for Structural Bioinformatics Protein Data Bank (RCSB PDB) resource provides tools for query, analysis and visualization of the 3D structures in the PDB archive. As the mobile Web is starting to surpass desktop and laptop usage, scientists and educators are beginning to integrate mobile devices into their research and teaching. In response, we have developed the RCSB PDB Mobile app for the iOS and Android mobile platforms to enable fast and convenient access to RCSB PDB data and services. Using the app, users from the general public to expert researchers can quickly search and visualize biomolecules, and add personal annotations via the RCSB PDB's integrated MyPDB service. RCSB PDB Mobile is freely available from the Apple App Store and Google Play (http://www.rcsb.org). © The Author 2014. Published by Oxford University Press.
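
    Structures in the PDB archive can also be retrieved programmatically from outside the app; a minimal sketch follows, assuming the public download URL pattern files.rcsb.org/download/<ID>.pdb (an assumption about the archive layout, unrelated to the mobile app's internals).

      # Fetch a structure file from the RCSB PDB over HTTP. The URL pattern is an
      # assumption about the public archive layout and may change over time.
      import requests

      def fetch_pdb(pdb_id: str) -> str:
          url = f"https://files.rcsb.org/download/{pdb_id.upper()}.pdb"  # assumed pattern
          r = requests.get(url, timeout=60)
          r.raise_for_status()
          return r.text

      if __name__ == "__main__":
          text = fetch_pdb("1crn")  # crambin, a small, well-known test structure
          atoms = [line for line in text.splitlines() if line.startswith("ATOM")]
          print(f"{len(atoms)} ATOM records")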

  3. Visualization analysis and design

    CERN Document Server

    Munzner, Tamara

    2015-01-01

    Visualization Analysis and Design provides a systematic, comprehensive framework for thinking about visualization in terms of principles and design choices. The book features a unified approach encompassing information visualization techniques for abstract data, scientific visualization techniques for spatial data, and visual analytics techniques for interweaving data transformation and analysis with interactive visual exploration. It emphasizes the careful validation of effectiveness and the consideration of function before form. The book breaks down visualization design according to three questions: what data users need to see, why users need to carry out their tasks, and how the visual representations proposed can be constructed and manipulated. It walks readers through the use of space and color to visually encode data in a view, the trade-offs between changing a single view and using multiple linked views, and the ways to reduce the amount of data shown in each view. The book concludes with six case stu...

  4. Data Visualization within the Python ecosystem

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    Data analysis is integral to what we do at CERN. Data visualization is at the foundation of this workflow and is also an important part of the python stack. Python's plotting ecosystem offers numerous open source solutions. These solutions can offer ease of use, detailed configuration, interactivity and web readiness. This talk will cover three of the most robust and supported packages: matplotlib, bokeh, and plotly. It aims to provide an overview of these packages and, in addition, to give suggestions on where these tools might fit in an analysis workflow.
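
    A minimal matplotlib quick-look plot, of the kind such an analysis workflow typically starts with, is sketched below; the data are synthetic stand-ins for real analysis output.

      # Quick-look histogram with matplotlib; the "measurements" are synthetic.
      import numpy as np
      import matplotlib.pyplot as plt

      rng = np.random.default_rng(0)
      mass = rng.normal(loc=91.0, scale=2.5, size=10_000)  # hypothetical values

      fig, ax = plt.subplots(figsize=(6, 4))
      ax.hist(mass, bins=80, histtype="step", color="tab:blue")
      ax.set_xlabel("Reconstructed mass [GeV]")
      ax.set_ylabel("Entries / bin")
      ax.set_title("Quick-look histogram")
      fig.tight_layout()
      fig.savefig("quicklook.png", dpi=150)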

  5. Prey-predator dynamics with prey refuge providing additional food to predator

    International Nuclear Information System (INIS)

    Ghosh, Joydev; Sahoo, Banshidhar; Poria, Swarup

    2017-01-01

    Highlights:
    • The effects of the interplay between prey refugia and additional food are reported.
    • Hopf bifurcation conditions are derived analytically.
    • The existence of a unique limit cycle is shown analytically.
    • Predator extinction may be possible in ecological systems with very high prey refuge.

    Abstract: The impacts of additional food for the predator on the dynamics of a prey-predator model with prey refuge are investigated. The equilibrium points and their stability behaviours are determined. Hopf bifurcation conditions are derived analytically. Most significantly, existence conditions for a unique stable limit cycle in the phase plane are shown analytically. The analytical results are in good agreement with the numerical simulation results. Effects of variation of the refuge level as well as variation of the quality and quantity of additional food on the dynamics are reported with the help of bifurcation diagrams. It is found that high quality and high quantity of additional food support oscillatory coexistence of the species. It is observed that the possibility of predator extinction in high prey refuge ecological systems may be removed by supplying additional food to the predator population. The reported theoretical results may be useful to conservation biologists for species conservation in real-world ecological systems.
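
    For orientation only, the sketch below integrates a generic Holling type II prey-predator system modified with a refuge fraction m and additional-food parameters (quality q, quantity A). It is an illustrative stand-in under assumed functional forms, not the exact model analyzed in the paper.

      # Illustrative prey-predator simulation with a prey refuge and additional food
      # for the predator; parameter names and functional forms are assumptions.
      from scipy.integrate import solve_ivp

      def rhs(t, y, r=1.0, K=10.0, a=1.0, h=0.5, e=0.6, d=0.4, m=0.3, q=0.5, A=1.0):
          x, p = y                    # prey and predator densities
          x_avail = (1.0 - m) * x     # only prey outside the refuge can be caught
          denom = 1.0 + a * h * (x_avail + A)
          dx = r * x * (1.0 - x / K) - a * x_avail * p / denom
          dp = e * a * (x_avail + q * A) * p / denom - d * p
          return [dx, dp]

      sol = solve_ivp(rhs, (0.0, 200.0), [2.0, 1.0], max_step=0.1)
      x_end, p_end = sol.y[:, -1]
      print(f"prey ~ {x_end:.2f}, predator ~ {p_end:.2f} at t = 200")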

  6. Hierarchically organized layout for visualization of biochemical pathways.

    Science.gov (United States)

    Tsay, Jyh-Jong; Wu, Bo-Liang; Jeng, Yu-Sen

    2010-01-01

    Many complex pathways are described as hierarchical structures in which a pathway is recursively partitioned into several sub-pathways, and organized hierarchically as a tree. The hierarchical structure provides a natural way to visualize the global structure of a complex pathway. However, none of the previous research on pathway visualization explores the hierarchical structures provided by many complex pathways. In this paper, we aim to develop algorithms that can take advantages of hierarchical structures, and give layouts that explore the global structures as well as local structures of pathways. We present a new hierarchically organized layout algorithm to produce layouts for hierarchically organized pathways. Our algorithm first decomposes a complex pathway into sub-pathway groups along the hierarchical organization, and then partition each sub-pathway group into basic components. It then applies conventional layout algorithms, such as hierarchical layout and force-directed layout, to compute the layout of each basic component. Finally, component layouts are joined to form a final layout of the pathway. Our main contribution is the development of algorithms for decomposing pathways and joining layouts. Experiment shows that our algorithm is able to give comprehensible visualization for pathways with hierarchies, cycles as well as complex structures. It clearly renders the global component structures as well as the local structure in each component. In addition, it runs very fast, and gives better visualization for many examples from previous related research. 2009 Elsevier B.V. All rights reserved.
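
    The decompose, lay out, and join idea can be sketched with networkx: each sub-pathway is laid out with a conventional algorithm, and the component layouts are then offset and merged. The node-to-sub-pathway mapping is given here as a toy input, whereas the algorithm described above derives it from the pathway hierarchy.

      # Lay out each sub-pathway separately, then join the component layouts.
      import networkx as nx

      def hierarchical_layout(graph, component_of, spacing=3.0):
          groups = {}
          for node, comp in component_of.items():        # group nodes by sub-pathway
              groups.setdefault(comp, []).append(node)

          layout = {}
          for i, (comp, nodes) in enumerate(sorted(groups.items())):
              pos = nx.spring_layout(graph.subgraph(nodes), seed=42)  # per-component layout
              dx, dy = (i % 3) * spacing, (i // 3) * spacing          # coarse grid of components
              for n, (x, y) in pos.items():
                  layout[n] = (x + dx, y + dy)                        # join into one layout
          return layout

      # Toy pathway with a hand-written sub-pathway membership.
      G = nx.Graph([("A", "B"), ("B", "C"), ("C", "D"), ("D", "E"), ("B", "E")])
      membership = {"A": "glycolysis", "B": "glycolysis", "C": "glycolysis",
                    "D": "tca", "E": "tca"}
      positions = hierarchical_layout(G, membership)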

  7. Interactive visual exploration of a trillion particles

    KAUST Repository

    Schatz, Karsten

    2017-03-10

    We present a method for the interactive exploration of tera-scale particle data sets. Such data sets arise from molecular dynamics, particle-based fluid simulation, and astrophysics. Our visualization technique provides a focus+context view of the data that runs interactively on commodity hardware. The method is based on a hybrid multi-scale rendering architecture, which renders the context as a hierarchical density volume. Fine details in the focus are visualized using direct particle rendering. In addition, clusters like dark matter halos can be visualized as semi-transparent spheres enclosing the particles. Since the detail data is too large to be stored in main memory, our approach uses an out-of-core technique that streams data on demand. Our technique is designed to take advantage of a dual-GPU configuration, in which the workload is split between the GPUs based on the type of data. Structural features in the data are visually enhanced using advanced rendering and shading techniques. To allow users to easily identify interesting locations even in overviews, both the focus and context view use color tables to show data attributes on the respective scale. We demonstrate that our technique achieves interactive performance on a one-trillion-particle data set from the DarkSky simulation.

  8. Sequence alignment visualization in HTML5 without Java.

    Science.gov (United States)

    Gille, Christoph; Weyand, Birgit; Gille, Andreas

    2014-01-01

    Java has been extensively used for the visualization of biological data in the web. However, the Java runtime environment is an additional layer of software with its own set of technical problems and security risks. HTML in its new version 5 provides features that for some tasks may render Java unnecessary. Alignment-To-HTML is the first HTML-based interactive visualization for annotated multiple sequence alignments. The server-side script interpreter can perform all tasks like (i) sequence retrieval, (ii) alignment computation, (iii) rendering, (iv) identification of homologous structural models and (v) communication with BioDAS-servers. The rendered alignment can be included in web pages and is displayed in all browsers on all platforms including touch screen tablets. The functionality of the user interface is similar to legacy Java applets and includes color schemes, highlighting of conserved and variable alignment positions, row reordering by drag and drop, interlinked 3D visualization and sequence groups. Novel features are (i) support for multiple overlapping residue annotations, such as chemical modifications, single nucleotide polymorphisms and mutations, (ii) mechanisms to quickly hide residue annotations, (iii) export to MS-Word and (iv) sequence icons. Alignment-To-HTML, the first interactive alignment visualization that runs in web browsers without additional software, confirms that to some extent HTML5 is already sufficient to display complex biological data. The low speed at which programs are executed in browsers is still the main obstacle. Nevertheless, we envision an increased use of HTML and JavaScript for interactive biological software. Under GPL at: http://www.bioinformatics.org/strap/toHTML/.
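
    As a toy illustration of plugin-free alignment rendering in a browser (not the Alignment-To-HTML tool itself), the short sketch below emits an HTML table in which fully conserved columns are highlighted with inline CSS.

      # Render a toy multiple sequence alignment as a plain HTML table.
      ALIGNMENT = {
          "seq1": "MKT-LLILAV",
          "seq2": "MKTALLILTV",
          "seq3": "MKT-LLLLAV",
      }

      def render_html(alignment):
          rows = list(alignment.items())
          length = len(rows[0][1])
          conserved = [len({seq[i] for _, seq in rows}) == 1 for i in range(length)]
          parts = ['<table style="font-family: monospace; border-collapse: collapse;">']
          for name, seq in rows:
              cells = []
              for i, residue in enumerate(seq):
                  style = "background:#ffd" if conserved[i] else ""
                  cells.append(f'<td style="{style}">{residue}</td>')
              parts.append(f"<tr><th>{name}</th>{''.join(cells)}</tr>")
          parts.append("</table>")
          return "\n".join(parts)

      with open("alignment.html", "w") as fh:
          fh.write(render_html(ALIGNMENT))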

  9. RVA. 3-D Visualization and Analysis Software to Support Management of Oil and Gas Resources

    Energy Technology Data Exchange (ETDEWEB)

    Keefer, Donald A. [Univ. of Illinois, Champaign, IL (United States); Shaffer, Eric G. [Univ. of Illinois, Champaign, IL (United States); Storsved, Brynne [Univ. of Illinois, Champaign, IL (United States); Vanmoer, Mark [Univ. of Illinois, Champaign, IL (United States); Angrave, Lawrence [Univ. of Illinois, Champaign, IL (United States); Damico, James R. [Univ. of Illinois, Champaign, IL (United States); Grigsby, Nathan [Univ. of Illinois, Champaign, IL (United States)

    2015-12-01

    A free software application, RVA, has been developed as a plugin to the US DOE-funded ParaView visualization package, to provide support in the visualization and analysis of complex reservoirs being managed using multi-fluid EOR techniques. RVA, for Reservoir Visualization and Analysis, was developed as an open-source plugin to the 64 bit Windows version of ParaView 3.14. RVA was developed at the University of Illinois at Urbana-Champaign, with contributions from the Illinois State Geological Survey, Department of Computer Science and National Center for Supercomputing Applications. RVA was designed to utilize and enhance the state-of-the-art visualization capabilities within ParaView, readily allowing joint visualization of geologic framework and reservoir fluid simulation model results. Particular emphasis was placed on enabling visualization and analysis of simulation results highlighting multiple fluid phases, multiple properties for each fluid phase (including flow lines), multiple geologic models and multiple time steps. Additional advanced functionality was provided through the development of custom code to implement data mining capabilities. The built-in functionality of ParaView provides the capacity to process and visualize data sets ranging from small models on local desktop systems to extremely large models created and stored on remote supercomputers. The RVA plugin that we developed and the associated User Manual provide improved functionality through new software tools, and instruction in the use of ParaView-RVA, targeted to petroleum engineers and geologists in industry and research. The RVA web site (http://rva.cs.illinois.edu) provides an overview of functions, and the development web site (https://github.com/shaffer1/RVA) provides ready access to the source code, compiled binaries, user manual, and a suite of demonstration data sets. Key functionality has been included to support a range of reservoirs visualization and analysis needs, including
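
    RVA extends ParaView, which is itself scriptable through the paraview.simple module shipped with pvpython. The sketch below shows plain ParaView scripting against a hypothetical reservoir data file and an assumed cell array name; it does not exercise RVA-specific functionality.

      # Plain ParaView (pvpython) scripting sketch; file and array names are hypothetical.
      from paraview.simple import (OpenDataFile, Show, Render, ColorBy,
                                   GetActiveViewOrCreate, SaveScreenshot)

      view = GetActiveViewOrCreate("RenderView")
      reservoir = OpenDataFile("reservoir_model.vtu")   # hypothetical simulation output
      display = Show(reservoir, view)
      ColorBy(display, ("CELLS", "oil_saturation"))     # assumed cell array name
      display.RescaleTransferFunctionToDataRange(True)
      view.ResetCamera()
      Render(view)
      SaveScreenshot("reservoir.png", view)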

  10. Functional magnetic resonance imaging of the human primary visual cortex during visual stimulation

    International Nuclear Information System (INIS)

    Miki, Atsushi; Abe, Haruki; Nakajima, Takashi; Fujita, Motoi; Watanabe, Hiroyuki; Kuwabara, Takeo; Naruse, Shoji; Takagi, Mineo.

    1995-01-01

    Signal changes in the human primary visual cortex during visual stimulation were evaluated using non-invasive functional magnetic resonance imaging (fMRI). The experiments were performed on 10 normal human volunteers and 2 patients with homonymous hemianopsia, including one who was recovering from the exacerbation of multiple sclerosis. The visual stimuli were provided by a pattern generator using the checkerboard pattern for determining the visual evoked potential of full-field and hemifield stimulation. In normal volunteers, a signal increase was observed on the bilateral primary visual cortex during the full-field stimulation and on the contra-lateral cortex during hemifield stimulation. In the patient with homonymous hemianopsia after cerebral infarction, the signal change was clearly decreased on the affected side. In the other patient, the one recovering from multiple sclerosis with an almost normal visual field, the fMRI was within normal limits. These results suggest that it is possible to visualize the activation of the visual cortex during visual stimulation, and that there is a possibility of using this test as an objective method of visual field examination. (author)

  11. The primary visual cortex in the neural circuit for visual orienting

    Science.gov (United States)

    Zhaoping, Li

    The primary visual cortex (V1) is traditionally viewed as remote from influencing the brain's motor outputs. However, V1 provides the most abundant cortical inputs directly to the sensory layers of the superior colliculus (SC), a midbrain structure that commands visual orienting such as shifting gaze and turning the head. I will show physiological, anatomical, and behavioral data suggesting that V1 transforms visual input into a saliency map to guide a class of visual orienting that is reflexive or involuntary. In particular, V1 receives a retinotopic map of visual features, such as orientation, color, and motion direction of local visual inputs; local interactions between V1 neurons perform a local-to-global computation to arrive at a saliency map that highlights conspicuous visual locations by higher V1 responses. The conspicuous locations are usually, but not always, where visual input statistics change. The population of V1 outputs to SC, which is also retinotopic, enables SC to locate, by lateral inhibition between SC neurons, the most salient location as the saccadic target. Experimental tests of this hypothesis will be shown. Variations of the neural circuit for visual orienting across animal species, with more or less V1 involvement, will be discussed. Supported by the Gatsby Charitable Foundation.

  12. Making Memories: The Development of Long-Term Visual Knowledge in Children with Visual Agnosia

    Directory of Open Access Journals (Sweden)

    Tiziana Metitieri

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2 years and 3.7 years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  13. Making memories: the development of long-term visual knowledge in children with visual agnosia.

    Science.gov (United States)

    Metitieri, Tiziana; Barba, Carmen; Pellacani, Simona; Viggiano, Maria Pia; Guerrini, Renzo

    2013-01-01

    There are few reports about the effects of perinatal acquired brain lesions on the development of visual perception. These studies demonstrate nonseverely impaired visual-spatial abilities and preserved visual memory. Longitudinal data analyzing the effects of compromised perceptions on long-term visual knowledge in agnosics are limited to lesions having occurred in adulthood. The study of children with focal lesions of the visual pathways provides a unique opportunity to assess the development of visual memory when perceptual input is degraded. We assessed visual recognition and visual memory in three children with lesions to the visual cortex having occurred in early infancy. We then explored the time course of visual memory impairment in two of them at 2  years and 3.7  years from the initial assessment. All children exhibited apperceptive visual agnosia and visual memory impairment. We observed a longitudinal improvement of visual memory modulated by the structural properties of objects. Our findings indicate that processing of degraded perceptions from birth results in impoverished memories. The dynamic interaction between perception and memory during development might modulate the long-term construction of visual representations, resulting in less severe impairment.

  14. Alzheimer disease: functional abnormalities in the dorsal visual pathway.

    LENUS (Irish Health Repository)

    Bokde, Arun L W

    2012-02-01

    PURPOSE: To evaluate whether patients with Alzheimer disease (AD) have altered activation compared with age-matched healthy control (HC) subjects during a task that typically recruits the dorsal visual pathway. MATERIALS AND METHODS: The study was performed in accordance with the Declaration of Helsinki, with institutional ethics committee approval, and all subjects provided written informed consent. Two tasks were performed to investigate neural function: face matching and location matching. Twelve patients with mild AD and 14 age-matched HC subjects were included. Brain activation was measured by using functional magnetic resonance imaging. Group statistical analyses were based on a mixed-effects model corrected for multiple comparisons. RESULTS: Task performance was not statistically different between the two groups, and within groups there were no differences in task performance. In the HC group, the visual perception tasks selectively activated the visual pathways. Conversely in the AD group, there was no selective activation during performance of these same tasks. Along the dorsal visual pathway, the AD group recruited additional regions, primarily in the parietal and frontal lobes, for the location-matching task. There were no differences in activation between groups during the face-matching task. CONCLUSION: The increased activation in the AD group may represent a compensatory mechanism for decreased processing effectiveness in early visual areas of patients with AD. The findings support the idea that the dorsal visual pathway is more susceptible to putative AD-related neuropathologic changes than is the ventral visual pathway.

  15. A survey of visualization systems for network security.

    Science.gov (United States)

    Shiravi, Hadi; Shiravi, Ali; Ghorbani, Ali A

    2012-08-01

    Security Visualization is a very young term. It expresses the idea that common visualization techniques have been designed for use cases that are not supportive of security-related data, demanding novel techniques fine tuned for the purpose of thorough analysis. Significant amount of work has been published in this area, but little work has been done to study this emerging visualization discipline. We offer a comprehensive review of network security visualization and provide a taxonomy in the form of five use-case classes encompassing nearly all recent works in this area. We outline the incorporated visualization techniques and data sources and provide an informative table to display our findings. From the analysis of these systems, we examine issues and concerns regarding network security visualization and provide guidelines and directions for future researchers and visual system developers.

  16. Expert panel on additional cross subsidisation. Considering arguments and providing expert opinion

    International Nuclear Information System (INIS)

    Faber, J.; Nelissen, D.; Lowe, S.; Mason, A.

    2007-10-01

    In the period end 2005 till September 2006 MVA London in cooperation with SEO Amsterdam was commissioned by the Dutch Ministry of Transport to perform an analysis of the economic and competition effects of the different proposals from the European Commission to include aviation in the European Emission Trading System (ETS). Roughly at the same time CE Delft was commissioned to study the overall impacts of this inclusion for the European Commission. Both studies considered the possibility that inclusion of aviation in the ETS could lead to the distortion of competition between airlines through cross-subsidisation. The studies concluded differently on additional possibilities for cross-subsidisation. As a result, both parties have different views on the possible distortion of the competitive market on routes where EU-based carriers compete directly with carriers based outside the EU. CE Delft concluded that 'none of the policy options considered in this study will significantly damage the competitive position of EU airlines relative to non-EU airlines'. In contrast, MVA and SEO (2006) concluded that 'effective cross-subsidisation by non-EU carriers in the Departing EU scope of the ETS appears to be more probable than cross-subsidisation by EU network carriers in the Intra-EU scope of the ETS'. In July 2007, the Dutch Ministry of Transport, DGTL commissioned CE Delft, MVA and SEO to study the causes for their different opinions and to see whether a further investigation could shed more light on the likelihood of additional cross-subsidisation. Formally, the aim of the work currently carried out is: (1) To determine whether it is possible to assess the impacts on the competitive market between EU based carriers and non-EU based carriers based on sound economic reasoning and analysis of empirical data; and, if so, (2) to determine whether the inclusion of aviation in ETS as proposed by the European Commission will offer non-EU airlines the opportunity to increase their

  17. Neural Mechanisms of Selective Visual Attention.

    Science.gov (United States)

    Moore, Tirin; Zirnsak, Marc

    2017-01-03

    Selective visual attention describes the tendency of visual processing to be confined largely to stimuli that are relevant to behavior. It is among the most fundamental of cognitive functions, particularly in humans and other primates for whom vision is the dominant sense. We review recent progress in identifying the neural mechanisms of selective visual attention. We discuss evidence from studies of different varieties of selective attention and examine how these varieties alter the processing of stimuli by neurons within the visual system, current knowledge of their causal basis, and methods for assessing attentional dysfunctions. In addition, we identify some key questions that remain in identifying the neural mechanisms that give rise to the selective processing of visual information.

  18. Visualizer: 3D Gridded Data Visualization Software for Geoscience Education and Research

    Science.gov (United States)

    Harwood, C.; Billen, M. I.; Kreylos, O.; Jadamec, M.; Sumner, D. Y.; Kellogg, L. H.; Hamann, B.

    2008-12-01

    In both research and education learning is an interactive and iterative process of exploring and analyzing data or model results. However, visualization software often presents challenges on the path to learning because it assumes the user already knows the locations and types of features of interest, instead of enabling flexible and intuitive examination of results. We present examples of research and teaching using the software, Visualizer, specifically designed to create an effective and intuitive environment for interactive, scientific analysis of 3D gridded data. Visualizer runs in a range of 3D virtual reality environments (e.g., GeoWall, ImmersaDesk, or CAVE), but also provides a similar level of real-time interactivity on a desktop computer. When using Visualizer in a 3D-enabled environment, the software allows the user to interact with the data images as real objects, grabbing, rotating or walking around the data to gain insight and perspective. On the desktop, simple features, such as a set of cross-bars marking the plane of the screen, provide extra 3D spatial cues that allow the user to more quickly understand geometric relationships within the data. This platform portability allows the user to more easily integrate research results into classroom demonstrations and exercises, while the interactivity provides an engaging environment for self-directed and inquiry-based learning by students. Visualizer software is freely available for download (www.keckcaves.org) and runs on Mac OSX and Linux platforms.

  19. Attention biases visual activity in visual short-term memory.

    Science.gov (United States)

    Kuo, Bo-Cheng; Stokes, Mark G; Murray, Alexandra M; Nobre, Anna Christina

    2014-07-01

    In the current study, we tested whether representations in visual STM (VSTM) can be biased via top-down attentional modulation of visual activity in retinotopically specific locations. We manipulated attention using retrospective cues presented during the retention interval of a VSTM task. Retrospective cues triggered activity in a large-scale network implicated in attentional control and led to retinotopically specific modulation of activity in early visual areas V1-V4. Importantly, shifts of attention during VSTM maintenance were associated with changes in functional connectivity between pFC and retinotopic regions within V4. Our findings provide new insights into top-down control mechanisms that modulate VSTM representations for flexible and goal-directed maintenance of the most relevant memoranda.

  20. Characterization of Visual Scanning Patterns in Air Traffic Control.

    Science.gov (United States)

    McClung, Sarah N; Kang, Ziho

    2016-01-01

    Characterization of air traffic controllers' (ATCs') visual scanning strategies is a challenging issue due to the dynamic movement of multiple aircraft and increasing complexity of scanpaths (order of eye fixations and saccades) over time. Additionally, terminologies and methods are lacking to accurately characterize the eye tracking data into simplified visual scanning strategies linguistically expressed by ATCs. As an intermediate step to automate the characterization classification process, we (1) defined and developed new concepts to systematically filter complex visual scanpaths into simpler and more manageable forms and (2) developed procedures to map visual scanpaths with linguistic inputs to reduce the human judgement bias during interrater agreement. The developed concepts and procedures were applied to investigating the visual scanpaths of expert ATCs using scenarios with different aircraft congestion levels. Furthermore, oculomotor trends were analyzed to identify the influence of aircraft congestion on scan time and number of comparisons among aircraft. The findings show that (1) the scanpaths filtered at the highest intensity led to more consistent mapping with the ATCs' linguistic inputs, (2) the pattern classification occurrences differed between scenarios, and (3) increasing aircraft congestion caused increased scan times and aircraft pairwise comparisons. The results provide a foundation for better characterizing complex scanpaths in a dynamic task and automating the analysis process.
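
    The filtering idea described above can be approximated in a few lines: map raw fixations to aircraft-level areas of interest (AOIs), collapse consecutive fixations on the same AOI, and count immediate back-and-forth transitions as pairwise comparisons. The AOI labels and the comparison definition below are illustrative assumptions, not the authors' exact procedure.

      # Collapse a raw fixation sequence into an AOI-level scanpath and count
      # simple A->B->A "comparison" patterns between aircraft.
      from itertools import groupby

      def simplify_scanpath(fixation_aois):
          # ["AC3", "AC3", "AC1", "AC1", "AC3"] -> ["AC3", "AC1", "AC3"]
          return [aoi for aoi, _ in groupby(fixation_aois)]

      def pairwise_comparisons(scanpath):
          return sum(1 for a, b, c in zip(scanpath, scanpath[1:], scanpath[2:])
                     if a == c and a != b)

      raw = ["AC3", "AC3", "AC1", "AC3", "AC3", "AC2", "AC1", "AC2"]
      path = simplify_scanpath(raw)
      print(path, pairwise_comparisons(path))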

  1. Modelling individual difference in visual categorization.

    Science.gov (United States)

    Shen, Jianhong; Palmeri, Thomas J

    Recent years have seen growing interest in understanding, characterizing, and explaining individual differences in visual cognition. We focus here on individual differences in visual categorization. Categorization is the fundamental visual ability to group different objects together as the same kind of thing. Research on visual categorization and category learning has been significantly informed by computational modeling, so our review will focus both on how formal models of visual categorization have captured individual differences and how individual differences have informed the development of formal models. We first examine the potential sources of individual differences in leading models of visual categorization, providing a brief review of a range of different models. We then describe several examples of how computational models have captured individual differences in visual categorization. This review also provides a bit of an historical perspective, starting with models that predicted no individual differences, to those that captured group differences, to those that predict true individual differences, and to more recent hierarchical approaches that can simultaneously capture both group and individual differences in visual categorization. Via this selective review, we see how considerations of individual differences can lead to important theoretical insights into how people visually categorize objects in the world around them. We also consider new directions for work examining individual differences in visual categorization.

  2. Altered visual information processing systems in bipolar disorder: evidence from visual MMN and P3

    Directory of Open Access Journals (Sweden)

    Toshihiko eMaekawa

    2013-07-01

    Objective: Mismatch negativity (MMN) and P3 are unique ERP components that provide objective indices of human cognitive functions such as short-term memory and prediction. Bipolar disorder (BD) is an endogenous psychiatric disorder characterized by extreme shifts in mood, energy, and ability to function socially. BD patients usually show cognitive dysfunction, and the goal of this study was to assess their altered visual information processing via visual MMN (vMMN) and P3 using windmill pattern stimuli. Methods: Twenty patients with BD and 20 healthy controls matched for age, gender, and handedness participated in this study. Subjects were seated in front of a monitor and listened to a story via earphones. Two types of windmill patterns (standard and deviant) and white circle (target) stimuli were randomly presented on the monitor. All stimuli were presented in random order at 200-ms durations with an 800-ms inter-stimulus interval. Stimuli were presented at 80% (standard), 10% (deviant), and 10% (target) probabilities. The participants were instructed to attend to the story and press a button as soon as possible when the target stimuli were presented. Event-related potentials were recorded throughout the experiment using 128-channel EEG equipment. vMMN was obtained by subtracting standard from deviant stimuli responses, and P3 was evoked from the target stimulus. Results: Mean reaction times for target stimuli in the BD group were significantly higher than those in the control group. Additionally, mean vMMN amplitudes and peak P3 amplitudes were significantly lower in the BD group than in controls. Conclusions: Abnormal vMMN and P3 in patients indicate a deficit of visual information processing in bipolar disorder, which is consistent with their increased reaction time to visual target stimuli. Significance: Both bottom-up and top-down visual information processing are likely altered in BD.

  3. Graphics and visualization principles & algorithms

    CERN Document Server

    Theoharis, T; Platis, Nikolaos; Patrikalakis, Nicholas M

    2008-01-01

    Computer and engineering collections strong in applied graphics and analysis of visual data via computer will find Graphics & Visualization: Principles and Algorithms makes an excellent classroom text as well as supplemental reading. It integrates coverage of computer graphics and other visualization topics, from shadow generation and particle tracing to spatial subdivision and vector data visualization, and it provides a thorough review of literature from multiple experts, making for a comprehensive review essential to any advanced computer study.-California Bookw

  4. Visual memory in musicians and non-musicians.

    Science.gov (United States)

    Rodrigues, Ana Carolina; Loureiro, Maurício; Caramelli, Paulo

    2014-01-01

    Many investigations have reported structural, functional, and cognitive changes in the brains of musicians, which occur as a result of many years of musical practice. We aimed to investigate whether intensive, long-term musical practice is associated with improved visual memory ability. Musicians and non-musicians, who were comparable in age, gender, and education, were submitted to a visual memory test. The test consisted of the presentation of four sets of stimuli, each one containing eight figures to be memorized. Each set was followed by individual figures and the subject was required to indicate if each figure was or was not present in the memorized set, by pressing the corresponding keys. We divided the test into two parts, in which the stimuli had greater or reduced semantic coding. Overall, musicians showed better performance on reaction times, but not on accuracy. An additional analysis revealed no significant interaction between group and any part of the test in the prediction of the outcomes. When simple reaction time was included as a covariate, no significant difference between groups was found on reaction times. In the group of musicians, we found some significant correlations between variables related to musical practice and performance in the visual memory test. In summary, our data provide no evidence of enhanced visual memory ability in musicians, since there was no difference in accuracy between groups. Our results suggest that performance of musicians in the visual memory test may be associated with better sensorimotor integration, since, although they presented shorter reaction times, this effect disappeared when the simple reaction time test was taken into consideration. However, given existing evidence of associations between simple reaction time and cognitive function, their performance in the visual memory test could also be related to enhanced visual attention ability, as has been suggested by previous studies, but this hypothesis deserves more

  5. MacBook Teach Yourself VISUALLY

    CERN Document Server

    Miser, Brad

    2010-01-01

    Like the MacBook itself, Teach Yourself VISUALLY MacBook, Second Edition is designed to be visually appealing, while providing excellent functionality at the same time. By using this book, MacBook users will be empowered to do everyday tasks quickly and easily. From such basic steps as powering on or shutting down the MacBook, and working on the Mac desktop with the Dashboard and its widgets, to running Windows applications, Teach Yourself VISUALLY MacBook, Second Edition covers all the vital information and provides the help and support a reader needs—in many ways it's like having a Mac Genius at

  6. Do Visual Aids Really Matter?

    Directory of Open Access Journals (Sweden)

    Kristine Fish

    2016-01-01

    Full Text Available Educational webcasts or video lectures as a teaching tool and a form of visual aid have become widely used with the rising prevalence of online and blended courses and with the increase of web-based video materials. Thus, research pertaining to factors enhancing the effectiveness of video lectures, such as number of visual aids, is critical. This study compared student evaluations before and after embedding additional visual aids throughout video lectures in an online course. Slide transitions occurred on average every 40 seconds for the pre-treatment group with approximately 600 visuals total, compared to slide transitions every 10 seconds for the post-treatment group with approximately 2,000 visuals total. All students received the same audio recordings. Research questions addressed are: (1) Are student perceptions of the effectiveness of examples used to illustrate concepts affected by number of visual aids? (2) Is the extent to which students feel engaged during the lectures affected by number of visual aids? (3) Are students’ perceived overall learning experiences affected by number of visual aids? Surprisingly, results indicate that for questions #1 and #3, students who viewed videos with fewer visuals rated their experiences higher than students who viewed more visuals. There was no significant difference found for question #2. Conclusion: Although some visuals have been shown to enhance learning, too many visuals may be a deterrent to learning.

  7. Manipulations of attention dissociate fragile visual short-term memory from visual working memory.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Lamme, Victor A F

    2011-05-01

    People often rely on information that is no longer in view, but maintained in visual short-term memory (VSTM). Traditionally, VSTM is thought to operate on either a short time-scale with high capacity - iconic memory - or a long time scale with small capacity - visual working memory. Recent research suggests that in addition, an intermediate stage of memory in between iconic memory and visual working memory exists. This intermediate stage has a large capacity and a lifetime of several seconds, but is easily overwritten by new stimulation. We therefore termed it fragile VSTM. In previous studies, fragile VSTM has been dissociated from iconic memory by the characteristics of the memory trace. In the present study, we dissociated fragile VSTM from visual working memory by showing a differentiation in their dependency on attention. A decrease in attention during presentation of the stimulus array greatly reduced the capacity of visual working memory, while this had only a small effect on the capacity of fragile VSTM. We conclude that fragile VSTM is a separate memory store from visual working memory. Thus, a tripartite division of VSTM appears to be in place, comprising iconic memory, fragile VSTM and visual working memory. Copyright © 2011 Elsevier Ltd. All rights reserved.

  8. Visual exploration and analysis of human-robot interaction rules

    Science.gov (United States)

    Zhang, Hui; Boyles, Michael J.

    2013-01-01

    We present a novel interaction paradigm for the visual exploration, manipulation and analysis of human-robot interaction (HRI) rules; our development is implemented using a visual programming interface and exploits key techniques drawn from both information visualization and visual data mining to facilitate the interaction design and knowledge discovery process. HRI is often concerned with manipulations of multi-modal signals, events, and commands that form various kinds of interaction rules. Depicting, manipulating and sharing such design-level information is a compelling challenge. Furthermore, the closed loop between HRI programming and knowledge discovery from empirical data is a relatively long cycle. This, in turn, makes design-level verification nearly impossible to perform in an earlier phase. In our work, we exploit a drag-and-drop user interface and visual languages to support depicting responsive behaviors from social participants when they interact with their partners. For our principal test case of gaze-contingent HRI interfaces, this permits us to program and debug the robots' responsive behaviors through a graphical data-flow chart editor. We exploit additional program manipulation interfaces to provide still further improvement to our programming experience: by simulating the interaction dynamics between a human and a robot behavior model, we allow the researchers to generate, trace and study the perception-action dynamics with a social interaction simulation to verify and refine their designs. Finally, we extend our visual manipulation environment with a visual data-mining tool that allows the user to investigate interesting phenomena such as joint attention and sequential behavioral patterns from multiple multi-modal data streams. We have created instances of HRI interfaces to evaluate and refine our development paradigm. As far as we are aware, this paper reports the first program manipulation paradigm that integrates visual programming
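
    The kind of responsive rule such a visual data-flow editor depicts can be sketched, purely for illustration, as an event-condition-action structure; the rule name, percept keys, and action below are hypothetical and do not reproduce the paper's implementation.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    """A gaze-contingent interaction rule: when `condition` holds for the
    perceived signals, the robot executes `action` (names are illustrative)."""
    name: str
    condition: Callable[[dict], bool]
    action: Callable[[], None]

def run_rules(percepts: dict, rules: List[Rule]) -> None:
    # Evaluate every rule against the current multi-modal percepts.
    for rule in rules:
        if rule.condition(percepts):
            rule.action()

# A single rule of the kind a data-flow chart might encode:
rules = [Rule(
    name="return-gaze",
    condition=lambda p: p.get("human_gaze_on_robot", False),
    action=lambda: print("robot: orient head toward partner"),
)]
run_rules({"human_gaze_on_robot": True}, rules)
```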

  9. How visual working memory contents influence priming of visual attention.

    Science.gov (United States)

    Carlisle, Nancy B; Kristjánsson, Árni

    2017-04-12

    Recent evidence shows that when the contents of visual working memory overlap with targets and distractors in a pop-out search task, intertrial priming is inhibited (Kristjánsson, Sævarsson & Driver, Psychon Bull Rev 20(3):514-521, 2013, Experiment 2, Psychonomic Bulletin and Review). This may reflect an interesting interaction between implicit short-term memory-thought to underlie intertrial priming-and explicit visual working memory. Evidence from a non-pop-out search task suggests that it may specifically be holding distractors in visual working memory that disrupts intertrial priming (Cunningham & Egeth, Psychol Sci 27(4):476-485, 2016, Experiment 2, Psychological Science). We examined whether the inhibition of priming depends on whether feature values in visual working memory overlap with targets or distractors in the pop-out search, and we found that the inhibition of priming resulted from holding distractors in visual working memory. These results are consistent with separate mechanisms of target and distractor effects in intertrial priming, and support the notion that the impact of implicit short-term memory and explicit visual working memory can interact when each provides conflicting attentional signals.

  10. The four-meter confrontation visual field test.

    OpenAIRE

    Kodsi, S R; Younge, B R

    1992-01-01

    The 4-m confrontation visual field test has been successfully used at the Mayo Clinic for many years in addition to the standard 0.5-m confrontation visual field test. The 4-m confrontation visual field test is a test of macular function and can identify small central or paracentral scotomas that the examiner may not find when the patient is tested only at 0.5 m. Also, macular sparing in homonymous hemianopias and quadrantanopias may be identified with the 4-m confrontation visual field test....

  11. Object attributes combine additively in visual search.

    Science.gov (United States)

    Pramod, R T; Arun, S P

    2016-01-01

    We perceive objects as containing a variety of attributes: local features, relations between features, internal details, and global properties. But we know little about how they combine. Here, we report a remarkably simple additive rule that governs how these diverse object attributes combine in vision. The perceived dissimilarity between two objects was accurately explained as a sum of (a) spatially tuned local contour-matching processes modulated by part decomposition; (b) differences in internal details, such as texture; (c) differences in emergent attributes, such as symmetry; and (d) differences in global properties, such as orientation or overall configuration of parts. Our results elucidate an enduring question in object vision by showing that the whole object is not a sum of its parts but a sum of its many attributes.
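
    The additive rule can be written schematically as follows; the weights w_i and term labels simply name the four attribute classes listed in the abstract and are not parameter values from the study.

```latex
% Schematic form of the additive combination rule for perceived dissimilarity
% between objects A and B (the w_i are free weights fit to search data):
\[
  d(A,B) \;=\; w_1\, d_{\text{local parts}}(A,B)
        \;+\; w_2\, d_{\text{internal details}}(A,B)
        \;+\; w_3\, d_{\text{emergent attributes}}(A,B)
        \;+\; w_4\, d_{\text{global properties}}(A,B)
\]
```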

  12. Visually observing comets

    CERN Document Server

    Seargent, David A J

    2017-01-01

    In these days of computers and CCD cameras, visual comet observers can still contribute scientifically useful data with the help of this handy reference for use in the field. Comets are one of the principal areas for productive pro-amateur collaboration in astronomy, but finding comets requires a different approach than the observing of more predictable targets. Principally directed toward amateur astronomers who prefer visual observing or who are interested in discovering a new comet or visually monitoring the behavior of known comets, it includes all the advice needed to thrive as a comet observer. After presenting a brief overview of the nature of comets and how we came to the modern understanding of comets, this book details the various types of observations that can usefully be carried out at the eyepiece of a telescope. Subjects range from how to search for new comets to visually estimating the brightness of comets and the length and orientation of tails, in addition to what to look for in comet heads a...

  13. An Internet-Based GIS Platform Providing Data for Visualization and Spatial Analysis of Urbanization in Major Asian and African Cities

    Directory of Open Access Journals (Sweden)

    Hao Gong

    2017-08-01

    Full Text Available Rapid urbanization in developing countries has been observed to be relatively high in the last two decades, especially in the Asian and African regions. Although many researchers have made efforts to improve the understanding of the urbanization trends of various cities in Asia and Africa, the absence of platforms where local stakeholders can visualize and obtain processed urbanization data for their specific needs or analysis still remains a gap. In this paper, we present an Internet-based GIS platform called MEGA-WEB. The platform was developed in view of the urban planning and management challenges in developing countries of Asia and Africa due to the limited availability of data resources, effective tools, and proficiency in data analysis. MEGA-WEB provides online access, visualization, spatial analysis, and data sharing services following a mashup framework that combines the MEGA-WEB Geo Web Services (GWS) with third-party map services using HTML5/JavaScript techniques. Through the integration of GIS, remote sensing, geo-modelling, and Internet GIS, several indicators for analyzing urbanization are provided in MEGA-WEB to give diverse perspectives on the urbanization of not only the physical land surface condition, but also the relationships of population, energy use, and the environment. The design, architecture, system functions, and uses of MEGA-WEB are discussed in the paper. The MEGA-WEB project is aimed at contributing to sustainable urban development in developing countries of Asia and Africa.
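
    A minimal sketch of the kind of payload such a Geo Web Service mashup might hand to a web map client is shown below; the field names and values are illustrative and do not reflect MEGA-WEB's actual schema or API.

```python
import json

def urbanization_feature(city, lon, lat, built_up_ratio, population):
    """Package an urbanization indicator for one city as a GeoJSON Feature,
    the sort of payload a Geo Web Service can serve to a web map client.
    Field names are illustrative, not MEGA-WEB's actual schema."""
    return {
        "type": "Feature",
        "geometry": {"type": "Point", "coordinates": [lon, lat]},
        "properties": {
            "city": city,
            "built_up_ratio": built_up_ratio,   # fraction of built-up land
            "population": population,
        },
    }

collection = {
    "type": "FeatureCollection",
    "features": [urbanization_feature("ExampleCity", 100.5, 13.7, 0.42, 9_000_000)],
}
print(json.dumps(collection, indent=2))
```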

  14. Coupling factors, visual rhythms, and synchronization ratios

    Directory of Open Access Journals (Sweden)

    Udo Will

    2012-07-01

    Full Text Available The inter-group entrainment study by Lucas, Clayton, and Leante (2011) is an interesting research report that advances studies in both empirical ethnomusicology and entrainment research in several ways, and provides an important addition to the much needed empirical case studies on musical entrainment. I submit that the authors’ analysis of an instant of resistance to entrainment is a key demonstration of the complementarity of analytical and ethnographic approaches in entrainment research. Further, I suggest that the evidence for the influence of visual information on entrainment supports the idea that there are two types of visuo-temporal information, each with different influence on the entrainment process, those derived from static and those from moving visual objects. As a final point, I argue that if we take into consideration the possibility of higher-order synchronization, some of the authors’ interpretations would need modification.

  15. Additional helmet and pack loading reduce situational awareness during the establishment of marksmanship posture.

    Science.gov (United States)

    Lim, Jongil; Palmer, Christopher J; Busa, Michael A; Amado, Avelino; Rosado, Luis D; Ducharme, Scott W; Simon, Darnell; Van Emmerik, Richard E A

    2017-06-01

    The pickup of visual information is critical for controlling movement and maintaining situational awareness in dangerous situations. Altered coordination while wearing protective equipment may impact the likelihood of injury or death. This investigation examined the consequences of load magnitude and distribution on situational awareness, segmental coordination and head gaze in several protective equipment ensembles. Twelve soldiers stepped down onto force plates and were instructed to quickly and accurately identify visual information while establishing marksmanship posture in protective equipment. Time to discriminate visual information was extended when additional pack and helmet loads were added, with the small increase in helmet load having the largest effect. Greater head-leading and in-phase trunk-head coordination were found with lighter pack loads, while trunk-leading coordination increased and head gaze dynamics were more disrupted in heavier pack loads. Additional armour load in the vest had no consequences for Time to discriminate, coordination or head dynamics. This suggests that the addition of head borne load be carefully considered when integrating new technology and that up-armouring does not necessarily have negative consequences for marksmanship performance. Practitioner Summary: Understanding the trade-space between protection and reductions in task performance continue to challenge those developing personal protective equipment. These methods provide an approach that can help optimise equipment design and loading techniques by quantifying changes in task performance and the emergent coordination dynamics that underlie that performance.

  16. behaviorism: a framework for dynamic data visualization.

    Science.gov (United States)

    Forbes, Angus Graeme; Höllerer, Tobias; Legrady, George

    2010-01-01

    While a number of information visualization software frameworks exist, creating new visualizations, especially those that involve novel visualization metaphors, interaction techniques, data analysis strategies, and specialized rendering algorithms, is still often a difficult process. To facilitate the creation of novel visualizations we present a new software framework, behaviorism, which provides a wide range of flexibility when working with dynamic information on visual, temporal, and ontological levels, while at the same time providing appropriate abstractions that allow developers to quickly create prototypes that can then easily be turned into robust systems. The core of the framework is a set of three interconnected graphs, each with associated operators: a scene graph for high-performance 3D rendering, a data graph for different layers of semantically linked heterogeneous data, and a timing graph for sophisticated control of scheduling, interaction, and animation. In particular, the timing graph provides a unified system to add behaviors to both data and visual elements, as well as to the behaviors themselves. To evaluate the framework we look briefly at three different projects, all of which required novel visualizations in different domains and worked with dynamic data in different ways: an interactive ecological simulation, an information art installation, and an information visualization technique.
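
    The three-graph idea can be sketched, purely as an illustration of the architecture described above and not as the framework's actual API, with a node type shared by the scene and data graphs and a timing graph that schedules behaviors against target nodes.

```python
class Node:
    """Minimal node shared by the illustrative scene and data graphs."""
    def __init__(self, name):
        self.name = name
        self.children = []
    def add(self, child):
        self.children.append(child)
        return child

class Behavior:
    """A scheduled update attached to a target node via the timing graph."""
    def __init__(self, name, target, update):
        self.name, self.target, self.update = name, target, update
    def tick(self, t):
        self.update(self.target, t)

# The three interconnected graphs described in the abstract (illustrative only).
scene_graph = Node("scene-root")       # drawable elements
data_graph = Node("data-root")         # semantically linked data layers
timing_graph = []                      # behaviors scheduled against time

dot = scene_graph.add(Node("dot"))
timing_graph.append(Behavior("pulse", dot, lambda n, t: print(f"t={t}: redraw {n.name}")))

for t in range(3):                     # a toy scheduler loop
    for b in timing_graph:
        b.tick(t)
```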

  17. Visual Localization by Place Recognition Based on Multifeature (D-λLBP++HOG)

    Directory of Open Access Journals (Sweden)

    Yongliang Qiao

    2017-01-01

    Full Text Available Visual localization is widely used in autonomous navigation systems and Advanced Driver Assistance Systems (ADAS). This paper presents a visual localization method based on multifeature fusion and disparity information using stereo images. We integrate disparity information into complete center-symmetric local binary patterns (CSLBP) to obtain a robust global image description (D-CSLBP). In order to represent the scene in depth, multifeature fusion of D-CSLBP and HOG features provides valuable information and reduces the effect of some typical problems in place recognition, such as perceptual aliasing. It improves visual recognition performance by taking advantage of depth, texture, and shape information. In addition, for real-time visual localization, the locality-sensitive hashing (LSH) method was used to compress the high-dimensional multifeature descriptor into binary vectors. It can thus speed up the process of image matching. To show its effectiveness, the proposed method is tested and evaluated using real datasets acquired in outdoor environments. Given the obtained results, our approach allows more effective visual localization compared with the state-of-the-art method FAB-MAP.
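
    The two generic steps of the approach, fusing per-image descriptors and compressing them with locality-sensitive hashing for fast matching, can be sketched as follows; the descriptor dimensions, random data, and random-projection hashing are illustrative assumptions, and the actual D-CSLBP/HOG extraction is not reproduced.

```python
import numpy as np

def fuse(*descriptors):
    """Concatenate per-image descriptors (e.g., a texture histogram and a
    HOG vector) into one multifeature vector, L2-normalizing each part."""
    parts = [d / (np.linalg.norm(d) + 1e-12) for d in descriptors]
    return np.concatenate(parts)

def lsh_code(vec, planes):
    """Random-projection LSH: one bit per hyperplane (sign of the projection)."""
    return (planes @ vec > 0).astype(np.uint8)

def hamming(a, b):
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(2)
planes = rng.normal(size=(64, 200))          # 64-bit codes for 200-D fused vectors

# Database of hashed place descriptors, then match a query by Hamming distance.
db = [lsh_code(fuse(rng.random(72), rng.random(128)), planes) for _ in range(100)]
query = lsh_code(fuse(rng.random(72), rng.random(128)), planes)
best = min(range(len(db)), key=lambda i: hamming(query, db[i]))
print("best matching place:", best)
```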

  18. Infants' visual and auditory communication when a partner is or is not visually attending.

    Science.gov (United States)

    Liszkowski, Ulf; Albrecht, Konstanze; Carpenter, Malinda; Tomasello, Michael

    2008-04-01

    In the current study we investigated infants' communication in the visual and auditory modalities as a function of the recipient's visual attention. We elicited pointing at interesting events from thirty-two 12-month olds and thirty-two 18-month olds in two conditions: when the recipient either was or was not visually attending to them before and during the point. The main result was that infants initiated more pointing when the recipient's visual attention was on them than when it was not. In addition, when the recipient did not respond by sharing interest in the designated event, infants initiated more repairs (repeated pointing) than when she did, again, especially when the recipient was visually attending to them. Interestingly, accompanying vocalizations were used intentionally and increased in both experimental conditions when the recipient did not share attention and interest. However, there was little evidence that infants used their vocalizations to direct attention to their gestures when the recipient was not attending to them.

  19. Helioviewer: A Web 2.0 Tool for Visualizing Heterogeneous Heliophysics Data

    Science.gov (United States)

    Hughitt, V. K.; Ireland, J.; Lynch, M. J.; Schmeidel, P.; Dimitoglou, G.; Müeller, D.; Fleck, B.

    2008-12-01

    Solar physics datasets are becoming larger, richer, more numerous and more distributed. Feature/event catalogs (describing objects of interest in the original data) are becoming important tools in navigating these data. In the wake of this increasing influx of data and catalogs there has been a growing need for highly sophisticated tools for accessing and visualizing this wealth of information. Helioviewer is a novel tool for integrating and visualizing disparate sources of solar and Heliophysics data. Taking advantage of the newly available power of modern web application frameworks, Helioviewer merges image and feature catalog data, and provides for Heliophysics data a familiar interface not unlike Google Maps or MapQuest. In addition to streamlining the process of combining heterogeneous Heliophysics datatypes such as full-disk images and coronagraphs, the inclusion of visual representations of automated and human-annotated features provides the user with an integrated and intuitive view of how different factors may be interacting on the Sun. Currently, Helioviewer offers images from The Extreme ultraviolet Imaging Telescope (EIT), The Large Angle and Spectrometric COronagraph experiment (LASCO) and the Michelson Doppler Imager (MDI) instruments onboard The Solar and Heliospheric Observatory (SOHO), as well as The Transition Region and Coronal Explorer (TRACE). Helioviewer also incorporates feature/event information from the LASCO CME List, NOAA Active Regions, CACTus CME and Type II Radio Bursts feature/event catalogs. The project is undergoing continuous development with many more data sources and additional functionality planned for the near future.

  20. Characterizing synaptic protein development in human visual cortex enables alignment of synaptic age with rat visual cortex

    Science.gov (United States)

    Pinto, Joshua G. A.; Jones, David G.; Williams, C. Kate; Murphy, Kathryn M.

    2015-01-01

    Although many potential neuroplasticity-based therapies have been developed in the lab, few have translated into established clinical treatments for human neurologic or neuropsychiatric diseases. Animal models, especially of the visual system, have shaped our understanding of neuroplasticity by characterizing the mechanisms that promote neural changes and defining timing of the sensitive period. The lack of knowledge about development of synaptic plasticity mechanisms in human cortex, and about alignment of synaptic age between animals and humans, has limited translation of neuroplasticity therapies. In this study, we quantified expression of a set of highly conserved pre- and post-synaptic proteins (Synapsin, Synaptophysin, PSD-95, Gephyrin) and found that synaptic development in human primary visual cortex (V1) continues into late childhood. Indeed, this is many years longer than suggested by neuroanatomical studies and points to a prolonged sensitive period for plasticity in human sensory cortex. In addition, during childhood we found waves of inter-individual variability that are different for the four proteins and include a stage during early development. By comparing development in human and rat visual cortex, we identified a simple linear equation that provides robust alignment of synaptic age between humans and rats. Alignment of synaptic ages is important for age-appropriate targeting and effective translation of neuroplasticity therapies from the lab to the clinic. PMID:25729353

  1. Towards a Sign-Based Indoor Navigation System for People with Visual Impairments.

    Science.gov (United States)

    Rituerto, Alejandro; Fusco, Giovanni; Coughlan, James M

    2016-10-01

    Navigation is a challenging task for many travelers with visual impairments. While a variety of GPS-enabled tools can provide wayfinding assistance in outdoor settings, GPS provides no useful localization information indoors. A variety of indoor navigation tools are being developed, but most of them require potentially costly physical infrastructure to be installed and maintained, or else the creation of detailed visual models of the environment. We report development of a new smartphone-based navigation aid, which combines inertial sensing, computer vision and floor plan information to estimate the user's location with no additional physical infrastructure and requiring only the locations of signs relative to the floor plan. A formative study was conducted with three blind volunteer participants demonstrating the feasibility of the approach and highlighting the areas needing improvement.
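
    One common way to fuse inertial dead-reckoning with occasional sign detections is a particle filter; the sketch below is a one-dimensional, corridor-style illustration under assumed stride noise and sign position, not the system's actual estimator.

```python
import numpy as np

rng = np.random.default_rng(3)
particles = rng.uniform(0, 50, size=500)      # positions along a 50-m corridor

def step(particles, stride):
    """Dead-reckoning prediction from inertial sensing (noisy stride length)."""
    return particles + stride + rng.normal(0, 0.1, size=particles.shape)

def sign_update(particles, sign_pos, sigma=1.5):
    """Reweight and resample when a sign with a known floor-plan position
    is recognized near the user (a simple particle-filter correction)."""
    w = np.exp(-0.5 * ((particles - sign_pos) / sigma) ** 2)
    w /= w.sum()
    idx = rng.choice(len(particles), size=len(particles), p=w)
    return particles[idx]

for _ in range(10):                            # walk roughly 7 m
    particles = step(particles, stride=0.7)
particles = sign_update(particles, sign_pos=7.0)   # a known sign seen near 7 m
print("estimated position:", particles.mean())
```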

  2. Visualization of target inspection data at the National Ignition Facility

    Energy Technology Data Exchange (ETDEWEB)

    Potter, Daniel, E-mail: potter15@llnl.gov [Lawrence Livermore National Laboratory (United States); Antipa, Nick, E-mail: antipa1@llnl.gov [Lawrence Livermore National Laboratory (United States)

    2012-12-15

    Highlights: Target surfaces are measured using a phase-shifting diffraction interferometer. Datasets are several gigabytes and consist of tens to hundreds of files. Software tools provide a high-level overview of the entire dataset. Single datasets loaded into the visualization session can be individually rotated. Multiple datasets with common features are found, then the datasets can be aligned. - Abstract: As the National Ignition Facility continues its campaign to achieve ignition, new methods and tools will be required to measure the quality of the target capsules used to achieve this goal. Techniques have been developed to measure capsule surface features using a phase-shifting diffraction interferometer and Leica Microsystems confocal microscope. These instruments produce multi-gigabyte datasets which consist of tens to hundreds of files. Existing software can handle viewing a small subset of an entire dataset, but none can view a dataset in its entirety. Additionally, without an established mode of transport that keeps the target capsules properly aligned throughout the assembly process, a means of aligning the two dataset coordinate systems is needed. The goal of this project is to develop web-based software utilizing WebGL which will provide high-level overview visualization of an entire dataset, with the capability to retrieve finer details on demand, in addition to facilitating alignment of multiple datasets with one another based on common features that have been visually identified by users of the system.
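
    Aligning two datasets from visually identified common features is often done with a least-squares rigid fit; the sketch below shows the standard Kabsch solution on synthetic matched points and is an illustration, not the project's actual alignment code.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid alignment (Kabsch): rotation R and translation t
    mapping matched feature points `src` onto `dst` (both of shape (n, 3))."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Toy check: recover a known rotation about z plus a translation.
rng = np.random.default_rng(4)
src = rng.normal(size=(30, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
dst = src @ R_true.T + np.array([0.5, -0.2, 1.0])
R, t = rigid_align(src, dst)
print(np.allclose(R, R_true, atol=1e-8), np.round(t, 3))
```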

  3. PRISMA-MAR: An Architecture Model for Data Visualization in Augmented Reality Mobile Devices

    Science.gov (United States)

    Gomes Costa, Mauro Alexandre Folha; Serique Meiguins, Bianchi; Carneiro, Nikolas S.; Gonçalves Meiguins, Aruanda Simões

    2013-01-01

    This paper proposes an extension to mobile augmented reality (MAR) environments--the addition of data charts to the more usual text, image and video components. To this purpose, we have designed a client-server architecture including the main necessary modules and services to provide an Information Visualization MAR experience. The server side…

  4. VisualRank: applying PageRank to large-scale image search.

    Science.gov (United States)

    Jing, Yushi; Baluja, Shumeet

    2008-11-01

    Because of the relative ease in understanding and processing text, commercial image-search systems often rely on techniques that are largely indistinguishable from text search. Recently, academic studies have demonstrated the effectiveness of employing image-based features to provide alternative or additional signals. However, it remains uncertain whether such techniques will generalize to a large number of popular web queries, and whether the potential improvement to search quality warrants the additional computational cost. In this work, we cast the image-ranking problem into the task of identifying "authority" nodes on an inferred visual similarity graph and propose VisualRank to analyze the visual link structures among images. The images found to be "authorities" are chosen as those that answer the image-queries well. To understand the performance of such an approach in a real system, we conducted a series of large-scale experiments based on the task of retrieving images for 2,000 of the most popular product queries. Our experimental results show significant improvement, in terms of user satisfaction and relevancy, in comparison to the most recent Google Image Search results. Maintaining modest computational cost is vital to ensuring that this procedure can be used in practice; we describe the techniques required to make this system practical for large-scale deployment in commercial search engines.
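
    The core computation, PageRank-style scoring over a visual similarity graph, can be sketched with a simple power iteration; the toy similarity matrix and damping value below are illustrative, not the paper's experimental setup.

```python
import numpy as np

def visualrank(similarity, damping=0.85, iters=100):
    """PageRank-style scoring on a visual similarity graph: each image's score
    is propagated to its visually similar neighbours (column-normalized walk)."""
    S = similarity.astype(float).copy()
    np.fill_diagonal(S, 0.0)
    col_sums = S.sum(axis=0)
    col_sums[col_sums == 0] = 1.0
    P = S / col_sums                         # column-stochastic transition matrix
    n = S.shape[0]
    r = np.full(n, 1.0 / n)
    for _ in range(iters):
        r = (1 - damping) / n + damping * (P @ r)
    return r / r.sum()

# Toy graph: images 0-2 are mutually similar, image 3 is an outlier.
sim = np.array([[0.0, 0.9, 0.8, 0.1],
                [0.9, 0.0, 0.7, 0.1],
                [0.8, 0.7, 0.0, 0.1],
                [0.1, 0.1, 0.1, 0.0]])
print(visualrank(sim).round(3))              # the "authority" images score highest
```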

  5. Visual Temporal Acuity Is Related to Auditory Speech Perception Abilities in Cochlear Implant Users.

    Science.gov (United States)

    Jahn, Kelly N; Stevenson, Ryan A; Wallace, Mark T

    Despite significant improvements in speech perception abilities following cochlear implantation, many prelingually deafened cochlear implant (CI) recipients continue to rely heavily on visual information to develop speech and language. Increased reliance on visual cues for understanding spoken language could lead to the development of unique audiovisual integration and visual-only processing abilities in these individuals. Brain imaging studies have demonstrated that good CI performers, as indexed by auditory-only speech perception abilities, have different patterns of visual cortex activation in response to visual and auditory stimuli as compared with poor CI performers. However, no studies have examined whether speech perception performance is related to any type of visual processing abilities following cochlear implantation. The purpose of the present study was to provide a preliminary examination of the relationship between clinical, auditory-only speech perception tests, and visual temporal acuity in prelingually deafened adult CI users. It was hypothesized that prelingually deafened CI users, who exhibit better (i.e., more acute) visual temporal processing abilities would demonstrate better auditory-only speech perception performance than those with poorer visual temporal acuity. Ten prelingually deafened adult CI users were recruited for this study. Participants completed a visual temporal order judgment task to quantify visual temporal acuity. To assess auditory-only speech perception abilities, participants completed the consonant-nucleus-consonant word recognition test and the AzBio sentence recognition test. Results were analyzed using two-tailed partial Pearson correlations, Spearman's rho correlations, and independent samples t tests. Visual temporal acuity was significantly correlated with auditory-only word and sentence recognition abilities. In addition, proficient CI users, as assessed via auditory-only speech perception performance, demonstrated

  6. Design for Visual Arts.

    Science.gov (United States)

    Skeries, Larry

    Experiences suggested within this visual arts packet provide high school students with awareness of visual expression in graphic design, product design, architecture, and crafts. The unit may be used in whole or in part and includes information about art careers and art-related jobs found in major occupational fields. Specific lesson topics…

  7. Short-term visual deprivation reduces interference effects of task-irrelevant facial expressions on affective prosody judgments

    Directory of Open Access Journals (Sweden)

    Ineke eFengler

    2015-04-01

    Full Text Available Several studies have suggested that neuroplasticity can be triggered by short-term visual deprivation in healthy adults. Specifically, these studies have provided evidence that visual deprivation reversibly affects basic perceptual abilities. The present study investigated the long-lasting effects of short-term visual deprivation on emotion perception. To this aim, we visually deprived a group of young healthy adults, age-matched with a group of non-deprived controls, for 3 hours and tested them before and after visual deprivation (i.e., after 8 h on average) and at 4-week follow-up on an audio-visual (i.e., faces and voices) emotion discrimination task. To observe changes at the level of basic perceptual skills, we additionally employed a simple audio-visual (i.e., tone bursts and light flashes) discrimination task and two unimodal (one auditory and one visual) perceptual threshold measures. During the 3 h period, both groups performed a series of auditory tasks. To exclude the possibility that changes in emotion discrimination may emerge as a consequence of the exposure to auditory stimulation during the 3 h stay in the dark, we visually deprived an additional group of age-matched participants who concurrently performed tasks unrelated to the later-tested abilities (i.e., tactile tasks). The two visually deprived groups showed enhanced affective prosodic discrimination abilities in the context of incongruent facial expressions following the period of visual deprivation; this effect was partially maintained until follow-up. By contrast, no changes were observed in affective facial expression discrimination and in the basic perception tasks in any group. These findings suggest that short-term visual deprivation per se triggers a reweighting of visual and auditory emotional cues, which seems to possibly prevail for longer durations.

  8. Patient DF's visual brain in action: Visual feedforward control in visual form agnosia.

    Science.gov (United States)

    Whitwell, Robert L; Milner, A David; Cavina-Pratesi, Cristiana; Barat, Masihullah; Goodale, Melvyn A

    2015-05-01

    Patient DF, who developed visual form agnosia following ventral-stream damage, is unable to discriminate the width of objects, performing at chance, for example, when asked to open her thumb and forefinger a matching amount. Remarkably, however, DF adjusts her hand aperture to accommodate the width of objects when reaching out to pick them up (grip scaling). While this spared ability to grasp objects is presumed to be mediated by visuomotor modules in her relatively intact dorsal stream, it is possible that it may rely abnormally on online visual or haptic feedback. We report here that DF's grip scaling remained intact when her vision was completely suppressed during grasp movements, and it still dissociated sharply from her poor perceptual estimates of target size. We then tested whether providing trial-by-trial haptic feedback after making such perceptual estimates might improve DF's performance, but found that they remained significantly impaired. In a final experiment, we re-examined whether DF's grip scaling depends on receiving veridical haptic feedback during grasping. In one condition, the haptic feedback was identical to the visual targets. In a second condition, the haptic feedback was of a constant intermediate width while the visual target varied trial by trial. Despite this incongruent feedback, DF still scaled her grip aperture to the visual widths of the target blocks, showing only normal adaptation to the false haptically-experienced width. Taken together, these results strengthen the view that DF's spared grasping relies on a normal mode of dorsal-stream functioning, based chiefly on visual feedforward processing. Copyright © 2014 Elsevier B.V. All rights reserved.

  9. The interplay of language and visual perception in working memory.

    Science.gov (United States)

    Souza, Alessandra S; Skóra, Zuzanna

    2017-09-01

    How do perception and language interact to form the representations that guide our thoughts and actions over the short-term? Here, we provide a first examination of this question by investigating the role of verbal labels in a continuous visual working memory (WM) task. Across four experiments, participants retained in memory the continuous color of a set of dots which were presented sequentially (Experiments 1-3) or simultaneously (Experiment 4). At test, they reproduced the colors of all dots using a color wheel. During stimulus presentation participants were required to either label the colors (color labeling) or to repeat "bababa" aloud (articulatory suppression), hence prompting or preventing verbal labeling, respectively. We tested four competing hypotheses of the labeling effect: (1) labeling generates a verbal representation that overshadows the visual representation; (2) labeling yields a verbal representation in addition to the visual one; (3) the labels function as a retrieval cue, adding distinctiveness to items in memory; and (4) labels activate visual categorical representations in long-term memory. Collectively, our experiments show that labeling does not overshadow the visual input; it augments it. Mixture modeling showed that labeling increased the quantity and quality of information in WM. Our findings are consistent with the hypothesis that labeling activates visual long-term categorical representations which help in reducing the noise in the internal representations of the visual stimuli in WM. Copyright © 2017 Elsevier B.V. All rights reserved.
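
    The mixture modeling mentioned above plausibly refers to the standard two-component model for continuous report, a von Mises memory component plus uniform guessing; the sketch below illustrates that likelihood and a toy grid-search fit under assumed parameters, and is not the authors' exact model or data.

```python
import numpy as np
from scipy.stats import vonmises

def mixture_loglik(errors, p_mem, kappa):
    """Log-likelihood of response errors (radians, in [-pi, pi]) under the
    two-component model: with probability p_mem the item is in memory
    (von Mises error around 0, concentration kappa), otherwise a uniform guess."""
    dens = p_mem * vonmises.pdf(errors, kappa) + (1 - p_mem) / (2 * np.pi)
    return np.log(dens).sum()

# Toy fit by grid search over (p_mem, kappa); labeling conditions would be
# expected to shift the best-fitting parameters (more items in memory, less noise).
rng = np.random.default_rng(5)
errors = np.where(rng.random(300) < 0.8,
                  vonmises.rvs(8.0, size=300),
                  rng.uniform(-np.pi, np.pi, size=300))
grid = [(p, k) for p in np.linspace(0.5, 0.95, 10) for k in np.linspace(2, 16, 15)]
best = max(grid, key=lambda pk: mixture_loglik(errors, *pk))
print("estimated p_mem, kappa:", np.round(best, 2))
```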

  10. Rich Representations with Exposed Semantics for Deep Visual Reasoning

    Science.gov (United States)

    2016-06-01

    This work provides critical evidence of a relationship between visual recognition, associative processing, and episodic memory, and provides important clues into the neural mechanism...

  11. Are Visual Peripheries Forever Young?

    Directory of Open Access Journals (Sweden)

    Kalina Burnat

    2015-01-01

    Full Text Available The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  12. Are visual peripheries forever young?

    Science.gov (United States)

    Burnat, Kalina

    2015-01-01

    The paper presents a concept of lifelong plasticity of peripheral vision. Central vision processing is accepted as critical and irreplaceable for normal perception in humans. While peripheral processing chiefly carries information about motion stimuli features and redirects foveal attention to new objects, it can also take over functions typical for central vision. Here I review the data showing the plasticity of peripheral vision found in functional, developmental, and comparative studies. Even though it is well established that afferent projections from central and peripheral retinal regions are not established simultaneously during early postnatal life, central vision is commonly used as a general model of development of the visual system. Based on clinical studies and visually deprived animal models, I describe how central and peripheral visual field representations separately rely on early visual experience. Peripheral visual processing (motion) is more affected by binocular visual deprivation than central visual processing (spatial resolution). In addition, our own experimental findings show the possible recruitment of coarse peripheral vision for fine spatial analysis. Accordingly, I hypothesize that the balance between central and peripheral visual processing, established in the course of development, is susceptible to plastic adaptations during the entire life span, with peripheral vision capable of taking over central processing.

  13. Institutionalizing New Ideas Through Visualization

    DEFF Research Database (Denmark)

    Meyer, Renate; Jancsary, Dennis; Höllerer, Markus A.

    How do visualization and visual forms of communication influence the process of transforming a novel idea into established organizational practice? In this paper, we build theory with regard to the role of visuals in manifesting and giving form to an innovative idea as it proceeds through various stages of institutionalization. Ideas become institutionalized not merely through widespread diffusion in a cognitive-discursive form but eventually through their translation into concrete activities and transformation into specific patterns of organizational practice. We argue that visualization plays a pivotal and unique role in this process. Visualization bridges the ideational with the practical realm by providing representations of ideas, connecting them to existing knowledge, and illustrating the specific actions that instantiate them. Similar to verbal discourse, and often in tandem, visual...

  14. Effects of body lean and visual information on the equilibrium maintenance during stance.

    Science.gov (United States)

    Duarte, Marcos; Zatsiorsky, Vladimir M

    2002-09-01

    Maintenance of equilibrium was tested in conditions in which humans assume different leaning postures during upright standing. Subjects (n = 11) stood in 13 different body postures specified by visual center of pressure (COP) targets within their base of support (BOS). Different types of visual information were tested: continuous presentation of the visual target, no vision after target presentation, and simultaneous visual feedback of the COP. The following variables were used to describe the equilibrium maintenance: the mean of the COP position, the area of the ellipse covering the COP sway, and the resultant median frequency of the power spectral density of the COP displacement. The variability of the COP displacement, quantified by the COP area variable, increased when subjects occupied leaning postures, irrespective of the kind of visual information provided. This variability also increased when vision was removed in relation to when vision was present. Without vision, drifts in the COP data were observed which were larger for COP targets farther away from the neutral position. When COP feedback was given in addition to the visual target, the postural control system did not control stance better than in the condition with only visual information. These results indicate that the visual information is used by the postural control system at both short and long time scales.

  15. A review of visual perception mechanisms that regulate rapid adaptive camouflage in cuttlefish.

    Science.gov (United States)

    Chiao, Chuan-Chin; Chubb, Charles; Hanlon, Roger T

    2015-09-01

    We review recent research on the visual mechanisms of rapid adaptive camouflage in cuttlefish. These neurophysiologically complex marine invertebrates can camouflage themselves against almost any background, yet their ability to quickly (0.5-2 s) alter their body patterns on different visual backgrounds poses a vexing challenge: how to pick the correct body pattern amongst their repertoire. The ability of cuttlefish to change appropriately requires a visual system that can rapidly assess complex visual scenes and produce the motor responses-the neurally controlled body patterns-that achieve camouflage. Using specifically designed visual backgrounds and assessing the corresponding body patterns quantitatively, we and others have uncovered several aspects of scene variation that are important in regulating cuttlefish patterning responses. These include spatial scale of background pattern, background intensity, background contrast, object edge properties, object contrast polarity, object depth, and the presence of 3D objects. Moreover, arm postures and skin papillae are also regulated visually for additional aspects of concealment. By integrating these visual cues, cuttlefish are able to rapidly select appropriate body patterns for concealment throughout diverse natural environments. This sensorimotor approach of studying cuttlefish camouflage thus provides unique insights into the mechanisms of visual perception in an invertebrate image-forming eye.

  16. Visualizing Dynamic Data with Maps.

    Science.gov (United States)

    Mashima, Daisuke; Kobourov, Stephen G; Hu, Yifan

    2012-09-01

    Maps offer a familiar way to present geographic data (continents, countries), and additional information (topography, geology), can be displayed with the help of contours and heat-map overlays. In this paper, we consider visualizing large-scale dynamic relational data by taking advantage of the geographic map metaphor. We describe a map-based visualization system which uses animation to convey dynamics in large data sets, and which aims to preserve the viewer's mental map while also offering readable views at all times. Our system is fully functional and has been used to visualize user traffic on the Internet radio station last.fm, as well as TV-viewing patterns from an IPTV service. All map images in this paper are available in high-resolution at [1] as are several movies illustrating the dynamic visualization.

  17. Quantized Visual Awareness

    Directory of Open Access Journals (Sweden)

    W Alexander Escobar

    2013-11-01

    Full Text Available The proposed model holds that, at its most fundamental level, visual awareness is quantized. That is to say that visual awareness arises as individual bits of awareness through the action of neural circuits with hundreds to thousands of neurons in at least the human striate cortex. Circuits with specific topologies will reproducibly result in visual awareness that corresponds to basic aspects of vision like color, motion and depth. These quanta of awareness (qualia) are produced by the feedforward sweep that occurs through the geniculocortical pathway but are not integrated into a conscious experience until recurrent processing from centers like V4 or V5 selects the appropriate qualia being produced in V1 to create a percept. The model proposed here has the potential to shift the focus of the search for visual awareness to the level of microcircuits and these likely exist across the kingdom Animalia. Thus establishing qualia as the fundamental nature of visual awareness will not only provide a deeper understanding of awareness, but also allow for a more quantitative understanding of the evolution of visual awareness throughout the animal kingdom.

  18. MULTIDISCIPLINARY APPROACH TO THE CORRECTION OF ACCOMMODATION REFRACTION DISORDERS IN VISUALLY INTENSIVE LABOR PERSONS

    Directory of Open Access Journals (Sweden)

    I. G. Ovechkin

    2015-01-01

    Full Text Available Increased load on the operator's visual analyzer, increased everyday visual demands, and the universal introduction of information display on cathode-ray tubes result in temporary and persistent visual disturbances. The accommodative-refractive apparatus of the eye is one of the key points of application of visually intensive labor. Work associated with permanent eyestrain overloads the oculomotor and accommodative apparatus, thus provoking a myopic shift, an increase in dynamic refraction, and an exophoric or esophoric shift of the initial visual equilibrium. Accommodation disorders are accompanied by changes in ciliary muscle blood supply, abnormalities of vegetative segment regulation, and parasympathetic brain vascular dystonia due to decreased tonus of the sympathetic nervous system. Ergonomic evaluation of a given kind of activity includes examination of visual status and visual working capacity, development of visual professiograms and vision standards for certain professions, and justification of methods and tools for the optimization of visual work. Visual disturbances that develop in operators in the course of visually intensive occupational work should be considered from the viewpoint of traditional accommodation and refraction disorders as well as functional manifestations of general fatigue or thoracic cervical spine dysfunction. Symptoms of accommodative asthenopia can be regarded as a functional manifestation of general fatigue syndrome or functional neurosis. The development of a multidisciplinary approach to the correction of accommodation and refraction disorders in persons engaged in visually intensive labor is of scientific urgency and practical value. There is a long-felt need for the additional involvement of different specialists who use physical factors in their work for the correction of accommodative asthenopia. The development of a multidisciplinary approach to accommodation refraction disorder correction in visually intensive labor persons is based on syndromic pathogenic

  19. TVA-based assessment of visual attentional functions in developmental dyslexia

    Science.gov (United States)

    Bogon, Johanna; Finke, Kathrin; Stenneken, Prisca

    2014-01-01

    There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element processing deficit in DD. These studies used the mathematical framework provided by the “theory of visual attention” (Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias, that is typical for unimpaired adult readers. PMID:25360129

  20. TVA-based assessment of visual attentional functions in developmental dyslexia

    Directory of Open Access Journals (Sweden)

    Johanna eBogon

    2014-10-01

    Full Text Available There is an ongoing debate whether an impairment of visual attentional functions constitutes an additional or even an isolated deficit of developmental dyslexia (DD). Especially performance in tasks that require the processing of multiple visual elements in parallel has been reported to be impaired in DD. We review studies that used parameter-based assessment for identifying and quantifying impaired aspect(s) of visual attention that underlie this multi-element-processing deficit in DD. These studies used the mathematical framework provided by the ‘theory of visual attention’ (TVA; Bundesen, 1990) to derive quantitative measures of general attentional resources and attentional weighting aspects on the basis of behavioral performance in whole- and partial-report tasks. Based on parameter estimates in children and adults with DD, the reviewed studies support a slowed perceptual processing speed as an underlying primary deficit in DD. Moreover, a reduction in visual short term memory storage capacity seems to present a modulating component, contributing to difficulties in written language processing. Furthermore, comparing the spatial distributions of attentional weights in children and adults suggests that having limited reading and writing skills might impair the development of a slight leftward bias that is typical for unimpaired adult readers.

  1. GVS - GENERAL VISUALIZATION SYSTEM

    Science.gov (United States)

    Keith, S. R.

    1994-01-01

    The primary purpose of GVS (General Visualization System) is to support scientific visualization of data output by the panel method PMARC_12 (inventory number ARC-13362) on the Silicon Graphics Iris computer. GVS allows the user to view PMARC geometries and wakes as wire frames or as light shaded objects. Additionally, geometries can be color shaded according to phenomena such as pressure coefficient or velocity. Screen objects can be interactively translated and/or rotated to permit easy viewing. Keyframe animation is also available for studying unsteady cases. The purpose of scientific visualization is to allow the investigator to gain insight into the phenomena they are examining, therefore GVS emphasizes analysis, not artistic quality. GVS uses existing IRIX 4.0 image processing tools to allow for conversion of SGI RGB files to other formats. GVS is a self-contained program which contains all the necessary interfaces to control interaction with PMARC data. This includes 1) the GVS Tool Box, which supports color histogram analysis, lighting control, rendering control, animation, and positioning, 2) GVS on-line help, which allows the user to access control elements and get information about each control simultaneously, and 3) a limited set of basic GVS data conversion filters, which allows for the display of data requiring simpler data formats. Specialized controls for handling PMARC data include animation and wakes, and visualization of off-body scan volumes. GVS is written in C-language for use on SGI Iris series computers running IRIX. It requires 28Mb of RAM for execution. Two separate hardcopy documents are available for GVS. The basic document price for ARC-13361 includes only the GVS User's Manual, which outlines major features of the program and provides a tutorial on using GVS with PMARC_12 data. Programmers interested in modifying GVS for use with data in formats other than PMARC_12 format may purchase a copy of the draft GVS 3.1 Software Maintenance

  2. Learning Building Layouts with Non-geometric Visual Information: The Effects of Visual Impairment and Age

    Science.gov (United States)

    Kalia, Amy A.; Legge, Gordon E.; Giudice, Nicholas A.

    2009-01-01

    Previous studies suggest that humans rely on geometric visual information (hallway structure) rather than non-geometric visual information (e.g., doors, signs and lighting) for acquiring cognitive maps of novel indoor layouts. This study asked whether visual impairment and age affect reliance on non-geometric visual information for layout learning. We tested three groups of participants: younger (sighted), older (50–70 years) normally sighted, and low vision (people with heterogeneous forms of visual impairment ranging in age from 18–67). Participants learned target locations in building layouts using four presentation modes: a desktop virtual environment (VE) displaying only geometric cues (Sparse VE), a VE displaying both geometric and non-geometric cues (Photorealistic VE), a Map, and a Real building. Layout knowledge was assessed by map drawing and by asking participants to walk to specified targets in the real space. Results indicate that low-vision and older normally-sighted participants relied on additional non-geometric information to accurately learn layouts. In conclusion, visual impairment and age may result in reduced perceptual and/or memory processing that makes it difficult to learn layouts without non-geometric visual information. PMID:19189732

  3. Creative Approaches to School Counseling: Using the Visual Expressive Arts as an Intervention

    Science.gov (United States)

    Chibbaro, Julia S.; Camacho, Heather

    2011-01-01

    This paper examines the use of creative arts in school counseling. There is a specific focus on the use of visual arts, particularly such methods as drawing and painting. Existing literature, which supports the use of art in school counseling, provides the paper's rationale. In addition, the paper explores different art techniques that school…

  4. Visual electrophysiology in children

    Directory of Open Access Journals (Sweden)

    Jelka Brecelj

    2005-10-01

    Background: Electrophysiological assessment of vision in children helps to recognise abnormal development of the visual system when it is still susceptible to medication and eventual correction. Visual electrophysiology provides information about the function of the retina (retinal pigment epithelium, cone and rod receptors, bipolar, amacrine, and ganglion cells), optic nerve, chiasmal and postchiasmal visual pathway, and visual cortex. Methods: Electroretinograms (ERG) and visual evoked potentials (VEP) are recorded non-invasively; in infants they are recorded simultaneously (ERG with skin electrodes), while in older children they are recorded separately (ERG with an HK loop electrode), in accordance with ISCEV (International Society for Clinical Electrophysiology of Vision) recommendations. Results: Clinical and electrophysiological changes in children with nystagmus, Leber’s congenital amaurosis, achromatopsia, congenital stationary night blindness, progressive retinal dystrophies, optic nerve hypoplasia, albinism, achiasmia, optic neuritis and visual pathway tumours are presented. Conclusions: Electrophysiological tests can help to indicate the nature and the location of dysfunction in unclear ophthalmological and/or neurological cases.

  5. 3D Web Visualization of Environmental Information - Integration of Heterogeneous Data Sources when Providing Navigation and Interaction

    Science.gov (United States)

    Herman, L.; Řezník, T.

    2015-08-01

    3D information is essential for a number of applications used daily in various domains such as crisis management, energy management, urban planning, and cultural heritage, as well as pollution and noise mapping, etc. This paper is devoted to the issue of 3D modelling from the levels of buildings to cities. The theoretical sections comprise an analysis of cartographic principles for the 3D visualization of spatial data as well as a review of technologies and data formats used in the visualization of 3D models. Emphasis was placed on the verification of available web technologies; for example, X3DOM library was chosen for the implementation of a proof-of-concept web application. The created web application displays a 3D model of the city district of Nový Lískovec in Brno, the Czech Republic. The developed 3D visualization shows a terrain model, 3D buildings, noise pollution, and other related information. Attention was paid to the areas important for handling heterogeneous input data, the design of interactive functionality, and navigation assistants. The advantages, limitations, and future development of the proposed concept are discussed in the conclusions.

  6. Visual Working Memory Enhances the Neural Response to Matching Visual Input.

    Science.gov (United States)

    Gayet, Surya; Guggenmos, Matthias; Christophel, Thomas B; Haynes, John-Dylan; Paffen, Chris L E; Van der Stigchel, Stefan; Sterzer, Philipp

    2017-07-12

    Visual working memory (VWM) is used to maintain visual information available for subsequent goal-directed behavior. The content of VWM has been shown to affect the behavioral response to concurrent visual input, suggesting that visual representations originating from VWM and from sensory input draw upon a shared neural substrate (i.e., a sensory recruitment stance on VWM storage). Here, we hypothesized that visual information maintained in VWM would enhance the neural response to concurrent visual input that matches the content of VWM. To test this hypothesis, we measured fMRI BOLD responses to task-irrelevant stimuli acquired from 15 human participants (three males) performing a concurrent delayed match-to-sample task. In this task, observers were sequentially presented with two shape stimuli and a retro-cue indicating which of the two shapes should be memorized for subsequent recognition. During the retention interval, a task-irrelevant shape (the probe) was briefly presented in the peripheral visual field, which could either match or mismatch the shape category of the memorized stimulus. We show that this probe stimulus elicited a stronger BOLD response, and allowed for increased shape-classification performance, when it matched rather than mismatched the concurrently memorized content, despite identical visual stimulation. Our results demonstrate that VWM enhances the neural response to concurrent visual input in a content-specific way. This finding is consistent with the view that neural populations involved in sensory processing are recruited for VWM storage, and it provides a common explanation for a plethora of behavioral studies in which VWM-matching visual input elicits a stronger behavioral and perceptual response. SIGNIFICANCE STATEMENT Humans heavily rely on visual information to interact with their environment and frequently must memorize such information for later use. Visual working memory allows for maintaining such visual information in the mind

  7. Visual quality analysis for images degraded by different types of noise

    Science.gov (United States)

    Ponomarenko, Nikolay N.; Lukin, Vladimir V.; Ieremeyev, Oleg I.; Egiazarian, Karen O.; Astola, Jaakko T.

    2013-02-01

    Modern visual quality metrics take into account different peculiarities of the Human Visual System (HVS). One of them is described by the Weber-Fechner law and deals with the different sensitivity to distortions in image fragments with different local mean values (intensity, brightness). We analyze how this property can be incorporated into the metric PSNR-HVS-M. It is shown that some improvement of its performance can be provided. Then, visual quality of color images corrupted by three types of i.i.d. noise (pure additive, pure multiplicative, and signal-dependent Poisson) is analyzed. Experiments with a group of observers are carried out for distorted color images created on the basis of the TID2008 database. Several modern HVS-metrics are considered. It is shown that even the best metrics are unable to assess visual quality of distorted images adequately enough. The reasons for this relate to the observer's attention to certain objects in the test images, i.e., to semantic aspects of vision, which are worth taking into account in the design of HVS-metrics.
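
    As a point of reference for the metric discussed above, the sketch below computes plain PSNR with NumPy. It is not PSNR-HVS-M itself (which additionally models contrast sensitivity and masking), only the baseline quantity that the HVS-aware variants build on; the synthetic image and noise level are illustrative.

```python
import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, peak: float = 255.0) -> float:
    """Plain peak signal-to-noise ratio in dB (no HVS weighting)."""
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

# Example: pure additive i.i.d. Gaussian noise on a synthetic image.
rng = np.random.default_rng(0)
clean = rng.integers(0, 256, size=(256, 256)).astype(np.float64)
noisy = np.clip(clean + rng.normal(0.0, 10.0, clean.shape), 0, 255)
print(f"PSNR: {psnr(clean, noisy):.2f} dB")
```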

  8. Filling the Astronomical Void - A Visual Medium for a Visual Subject

    Science.gov (United States)

    Ryan, J.

    1996-12-01

    Astronomy is fundamentally a visual subject. The modern science of astronomy has at its foundation the ancient art of observing the sky visually. The visual elements of astronomy are arguably the most important. Every person in the entire world is affected by visually-observed astronomical phenomena such as the seasonal variations in daylight. However, misconceptions abound and the average person cannot recognize the simple signs in the sky that point to the direction, the hour and the season. Educators and astronomy popularizers widely lament that astronomy is not appreciated in our society. Yet, there is a remarkable dearth of popular literature for teaching the visual elements of astronomy. This is what I refer to as *the astronomical void.* Typical works use illustrations sparsely, relying most heavily on text-based descriptions of the visual astronomical phenomena. Such works leave significant inferential gaps to the inexperienced reader, who is unequipped for making astronomical observations. Thus, the astronomical void remains unfilled by much of the currently available literature. I therefore propose the introduction of a visually-oriented medium for teaching the visual elements of Astronomy. To this end, I have prepared a series of astronomy "comic strips" that are intended to fill the astronomical void. By giving the illustrations the central place, the comic strip medium permits the depiction of motion and other sequential activity, thus effectively representing astronomical phenomena. In addition to the practical advantages, the comic strip is a "user friendly" medium that is inviting and entertaining to a reader. At the present time, I am distributing a monthly comic strip entitled *Starman*, which appears in the newsletters of over 120 local astronomy organizations and on the web at http://www.cyberdrive.net/starman. I hope to eventually publish a series of full-length books and believe that astronomical comic strips will help expand the perimeter of

  9. Visualization of temporal aspects of tsetse fly eradication in ...

    African Journals Online (AJOL)

    The pattern of how they are applied over time was provided in the animation representation. Further information on areas where different techniques were applied in different years is interactively visualized. Visualization of infestation changes over time was also provided by animation representation. Visualization of eradication ...

  10. MindSeer: a portable and extensible tool for visualization of structural and functional neuroimaging data

    Directory of Open Access Journals (Sweden)

    Brinkley James F

    2007-10-01

    Background: Three-dimensional (3-D) visualization of multimodality neuroimaging data provides a powerful technique for viewing the relationship between structure and function. A number of applications are available that include some aspect of 3-D visualization, including both free and commercial products. These applications range from highly specific programs for a single modality, to general purpose toolkits that include many image processing functions in addition to visualization. However, few if any of these combine both stand-alone and remote multi-modality visualization in an open source, portable and extensible tool that is easy to install and use, yet can be included as a component of a larger information system. Results: We have developed a new open source multimodality 3-D visualization application, called MindSeer, that has these features: integrated and interactive 3-D volume and surface visualization, Java and Java3D for true cross-platform portability, one-click installation and startup, integrated data management to help organize large studies, extensibility through plugins, transparent remote visualization, and the ability to be integrated into larger information management systems. We describe the design and implementation of the system, as well as several case studies that demonstrate its utility. These case studies are available as tutorials or demos on the associated website: http://sig.biostr.washington.edu/projects/MindSeer. Conclusion: MindSeer provides a powerful visualization tool for multimodality neuroimaging data. Its architecture and unique features also allow it to be extended into other visualization domains within biomedicine.

  11. A Visual Analytics Approach for Correlation, Classification, and Regression Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; SwanII, J. Edward [Mississippi State University (MSU); Fitzpatrick, Patrick J. [Mississippi State University (MSU); Jankun-Kelly, T.J. [Mississippi State University (MSU)

    2012-02-01

    New approaches that combine the strengths of humans and machines are necessary to equip analysts with the proper tools for exploring today's increasingly complex, multivariate data sets. In this paper, a novel visual data mining framework, called the Multidimensional Data eXplorer (MDX), is described that addresses the challenges of today's data by combining automated statistical analytics with a highly interactive parallel-coordinates-based canvas. In addition to several intuitive interaction capabilities, this framework offers a rich set of graphical statistical indicators, interactive regression analysis, visual correlation mining, automated axis arrangements and filtering, and data classification techniques. The current work provides a detailed description of the system as well as a discussion of key design aspects and critical feedback from domain experts.
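
    MDX itself is not scripted here, but the parallel-coordinates idiom it builds on can be illustrated with pandas. The data set, column names, and class labels below are hypothetical placeholders; MDX layers statistical indicators and regression overlays on top of this basic plot type.

```python
import matplotlib.pyplot as plt
import pandas as pd
from pandas.plotting import parallel_coordinates

# Hypothetical multivariate data set with a class label column.
df = pd.DataFrame({
    "wind_speed": [12.1, 30.4, 22.8, 8.9],
    "pressure":   [1008, 985, 992, 1011],
    "humidity":   [0.71, 0.93, 0.88, 0.64],
    "storm":      ["no", "yes", "yes", "no"],
})

# One polyline per observation; coloring by class hints at correlations
# across the parallel axes.
parallel_coordinates(df, class_column="storm", colormap="coolwarm")
plt.show()
```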

  12. Illustrative visualization of 3D city models

    Science.gov (United States)

    Doellner, Juergen; Buchholz, Henrik; Nienhaus, Marc; Kirsch, Florian

    2005-03-01

    This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird's-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information complementing visual interfaces based on the Virtual Reality paradigm, offering a huge potential for graphics design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.

  13. Advancements to Visualization Control System (VCS, part of UV-CDAT), a Visualization Package Designed for Climate Scientists

    Science.gov (United States)

    Lipsa, D.; Chaudhary, A.; Williams, D. N.; Doutriaux, C.; Jhaveri, S.

    2017-12-01

    Climate Data Analysis Tools (UV-CDAT, https://uvcdat.llnl.gov) is a data analysis and visualization software package developed at Lawrence Livermore National Laboratory and designed for climate scientists. Core components of UV-CDAT include: 1) the Community Data Management System (CDMS), which provides I/O support and a data model for climate data; 2) CDAT Utilities (GenUtil), which processes data using spatial and temporal averaging and statistic functions; and 3) the Visualization Control System (VCS) for interactive visualization of the data. VCS is a Python visualization package primarily built for climate scientists; however, because of its generality and breadth of functionality, it can be a useful tool for other scientific applications. VCS provides 1D, 2D and 3D visualization functions such as scatter plots and line graphs for 1D data, boxfill, meshfill, isofill and isoline for 2D scalar data, vector glyphs and streamlines for 2D vector data, and 3d_scalar and 3d_vector for 3D data. Specifically for climate data, the plotting routines include projections, Skew-T plots and Taylor diagrams. While VCS provided a user-friendly API, the previous implementation of VCS relied on a slow-performing vector graphics (Cairo) backend, which is suitable for smaller datasets and non-interactive graphics. The LLNL and Kitware team has added a new backend to VCS that uses the Visualization Toolkit (VTK) as its visualization backend. VTK is one of the most popular open source, multi-platform scientific visualization libraries, written in C++. Its use of OpenGL and a pipeline processing architecture results in a high-performance VCS library. Its multitude of available data formats and visualization algorithms results in easy adoption of new visualization methods and new data formats in VCS. In this presentation, we describe recent contributions to VCS that include new visualization plots, continuous integration testing using Conda and CircleCI, tutorials and examples using Jupyter notebooks as well as

  14. The Effect of Delayed Visual Feedback on Synchrony Perception in a Tapping Task

    Directory of Open Access Journals (Sweden)

    Mirjam Keetels

    2011-10-01

    Sensory events following a motor action are, within limits, interpreted as a causal consequence of those actions. For example, the clapping of the hands is initiated by the motor system, but subsequently visual, auditory, and tactile information is provided and processed. In the present study we examine the effect of temporal disturbances in this chain of motor-sensory events. Participants are instructed to tap a surface with their finger in synchrony with a chain of 20 sound clicks (ISI 750 ms). We examined the effect of additional visual information on this ‘tap-sound’ synchronization task. During tapping, subjects will see a video of their own tapping hand on a screen in front of them. The video can either be in synchrony with the tap (real-time recording), or can be slightly delayed (∼40–160 ms). In a control condition, no video is provided. We explore whether ‘tap-sound’ synchrony will be shifted as a function of the delayed visual feedback. Results will provide fundamental insights into how the brain preserves a causal interpretation of motor actions and their sensory consequences.

  15. Metal-mediated aminocatalysis provides mild conditions: Enantioselective Michael addition mediated by primary amino catalysts and alkali-metal ions

    Directory of Open Access Journals (Sweden)

    Matthias Leven

    2013-01-01

    Four catalysts based on new amides of chiral 1,2-diamines and 2-sulfobenzoic acid have been developed. The alkali-metal salts of these betaine-like amides are able to form imines with enones, which are activated by Lewis acid interaction for nucleophilic attack by 4-hydroxycoumarin. The addition of 4-hydroxycoumarin to enones gives ee’s up to 83% and almost quantitative yields in many cases. This novel type of catalysis provides an effective alternative to conventional primary amino catalysis, where strong acid additives are essential components.

  16. Visual strategies underpinning the development of visual-motor expertise when hitting a ball.

    Science.gov (United States)

    Sarpeshkar, Vishnu; Abernethy, Bruce; Mann, David L

    2017-10-01

    It is well known that skilled batters in fast-ball sports do not align their gaze with the ball throughout ball-flight, but instead adopt a unique sequence of eye and head movements that contribute toward their skill. However, much of what we know about visual-motor behavior in hitting is based on studies that have employed case study designs, and/or used simplified tasks that fall short of replicating the spatiotemporal demands experienced in the natural environment. The aim of this study was to provide a comprehensive examination of the eye and head movement strategies that underpin the development of visual-motor expertise when intercepting a fast-moving target. Eye and head movements were examined in situ for 4 groups of cricket batters, who were crossed for playing level (elite or club) and age (U19 or adult), when hitting balls that followed either straight or curving ('swinging') trajectories. The results provide support for some widely cited markers of expertise in batting, while questioning the legitimacy of others. Swinging trajectories alter the visual-motor behavior of all batters, though in large part because of the uncertainty generated by the possibility of a variation in trajectory rather than any actual change in trajectory per se. Moreover, curving trajectories influence visual-motor behavior in a nonlinear fashion, with targets that curve away from the observer influencing behavior more than those that curve inward. The findings provide a more comprehensive understanding of the development of visual-motor expertise in interception. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  17. Distributed visualization of gridded geophysical data: the Carbon Data Explorer, version 0.2.3

    Science.gov (United States)

    Endsley, K. A.; Billmire, M. G.

    2016-01-01

    Due to the proliferation of geophysical models, particularly climate models, the increasing resolution of their spatiotemporal estimates of Earth system processes, and the desire to easily share results with collaborators, there is a genuine need for tools to manage, aggregate, visualize, and share data sets. We present a new, web-based software tool - the Carbon Data Explorer - that provides these capabilities for gridded geophysical data sets. While originally developed for visualizing carbon flux, this tool can accommodate any time-varying, spatially explicit scientific data set, particularly NASA Earth system science level III products. In addition, the tool's open-source licensing and web presence facilitate distributed scientific visualization, comparison with other data sets and uncertainty estimates, and data publishing and distribution.
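
    The kind of gridded, time-varying data set the Carbon Data Explorer manages (e.g., NASA Earth system science level III products) is conventionally handled in Python with xarray; the sketch below shows that idiom only. The file name, variable name, and dimension names are hypothetical, and this is not the tool's own API.

```python
import xarray as xr

# Hypothetical NetCDF file of monthly carbon flux on a lat/lon grid.
ds = xr.open_dataset("carbon_flux_monthly.nc")

flux = ds["co2_flux"]                                  # assumed dims: (time, lat, lon)
climatology = flux.mean(dim="time")                    # aggregate over the time axis
year_2004 = flux.sel(time=slice("2004-01", "2004-12")) # subset one year of estimates

climatology.plot()                                     # quick-look map of the mean field
```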

  18. Candidate glutamatergic neurons in the visual system of Drosophila.

    Directory of Open Access Journals (Sweden)

    Shamprasad Varija Raghu

    The visual system of Drosophila contains approximately 60,000 neurons that are organized in parallel, retinotopically arranged columns. A large number of these neurons have been characterized in great anatomical detail. However, studies providing direct evidence for synaptic signaling and the neurotransmitter used by individual neurons are relatively sparse. Here we present a first layout of neurons in the Drosophila visual system that likely release glutamate as their major neurotransmitter. We identified 33 different types of neurons of the lamina, medulla, lobula and lobula plate. Based on the previous Golgi-staining analysis, the identified neurons are further classified into 16 major subgroups representing lamina monopolar (L), transmedullary (Tm), transmedullary Y (TmY), Y, medulla intrinsic (Mi, Mt, Pm, Dm, Mi Am), bushy T (T), translobula plate (Tlp), lobula intrinsic (Lcn, Lt, Li), lobula plate tangential (LPTC) and lobula plate intrinsic (LPi) cell types. In addition, we found 11 cell types that were not described by the previous Golgi analysis. This classification of candidate glutamatergic neurons fosters the future neurogenetic dissection of information processing in circuits of the fly visual system.

  19. Enhancements to VTK enabling Scientific Visualization in Immersive Environments

    Energy Technology Data Exchange (ETDEWEB)

    O'Leary, Patrick; Jhaveri, Sankhesh; Chaudhary, Aashish; Sherman, William; Martin, Ken; Lonie, David; Whiting, Eric; Money, James

    2017-04-01

    Modern scientific, engineering and medical computational simulations, as well as experimental and observational data sensing/measuring devices, produce enormous amounts of data. While statistical analysis provides insight into this data, scientific visualization is tactically important for scientific discovery, product design and data analysis. These benefits are impeded, however, when scientific visualization algorithms are implemented from scratch, a time-consuming and redundant process in immersive application development. This process can greatly benefit from leveraging the state-of-the-art open-source Visualization Toolkit (VTK) and its community. Over the past two (almost three) decades, integrating VTK with a virtual reality (VR) environment has only been attempted with varying degrees of success. In this paper, we demonstrate two new approaches to simplify this amalgamation of an immersive interface with visualization rendering from VTK. In addition, we cover several enhancements to VTK that provide near real-time updates and efficient interaction. Finally, we demonstrate the combination of VTK with both Vrui and OpenVR immersive environments in example applications.

  20. Visualizing water

    Science.gov (United States)

    Baart, F.; van Gils, A.; Hagenaars, G.; Donchyts, G.; Eisemann, E.; van Velzen, J. W.

    2016-12-01

    A compelling visualization is captivating, beautiful and narrative. Here we show how melding the skills of computer graphics, art, statistics, and environmental modeling can be used to generate innovative, attractive and very informative visualizations. We focus on the topic of visualizing forecasts and measurements of water (water level, waves, currents, density, and salinity). For the field of computer graphics and arts, water is an important topic because it occurs in many natural scenes. For environmental modeling and statistics, water is an important topic because water is essential for transport, a healthy environment, fruitful agriculture, and a safe environment. The different disciplines take different approaches to visualizing water. In computer graphics, one focusses on creating water that looks as realistic as possible. The focus on realistic perception (versus the focus on the physical balance pursued by environmental scientists) resulted in fascinating renderings, as seen in recent games and movies. Visualization techniques for statistical results have benefited from the advancement in design and journalism, resulting in enthralling infographics. The field of environmental modeling has absorbed advances in contemporary cartography as seen in the latest interactive data-driven maps. We systematically review the emerging types of water visualization designs. The examples that we analyze range from dynamically animated forecasts, interactive paintings, infographics, and modern cartography to web-based photorealistic rendering. By characterizing the intended audience, the design choices, the scales (e.g. time, space), and the explorability, we provide a set of guidelines and genres. The unique contributions of the different fields show how the innovations in the current state of the art of water visualization have benefited from inter-disciplinary collaborations.

  1. Recent results in visual servoing

    Science.gov (United States)

    Chaumette, François

    2008-06-01

    Visual servoing techniques consist in using the data provided by a vision sensor in order to control the motions of a dynamic system. Such systems are usually robot arms, mobile robots, aerial robots, … but can also be virtual robots for applications in computer animation, or even a virtual camera for applications in computer vision and augmented reality. A large variety of positioning tasks, or mobile target tracking, can be implemented by controlling from one to all the degrees of freedom of the system. Whatever the sensor configuration, which can vary from one on-board camera on the robot end-effector to several free-standing cameras, a set of visual features has to be selected at best from the image measurements available, allowing control of the desired degrees of freedom. A control law also has to be designed so that these visual features reach a desired value, defining a correct realization of the task. With a vision sensor providing 2D measurements, potential visual features are numerous, since both 2D data (coordinates of feature points in the image, moments, …) and 3D data provided by a localization algorithm exploiting the extracted 2D measurements can be considered. It is also possible to combine 2D and 3D visual features to take advantage of each approach while avoiding their respective drawbacks. From the selected visual features, the behavior of the system will have particular properties as for stability, robustness with respect to noise or to calibration errors, robot 3D trajectory, etc. The talk will present the main basic aspects of visual servoing, as well as technical advances obtained recently in the field inside the Lagadic group at INRIA/IRISA Rennes. Several application results will also be described.

  2. Eye tracking for visual marketing

    NARCIS (Netherlands)

    Wedel, M.; Pieters, R.

    2008-01-01

    We provide the theory of visual attention and eye-movements that serves as a basis for evaluating eye-tracking research and for discussing salient and emerging issues in visual marketing. Motivated from its rising importance in marketing practice and its potential for theoretical contribution, we

  3. Visualizing Probabilistic Proof

    OpenAIRE

    Guerra-Pujol, Enrique

    2015-01-01

    The author revisits the Blue Bus Problem, a famous thought-experiment in law involving probabilistic proof, and presents simple Bayesian solutions to different versions of the blue bus hypothetical. In addition, the author expresses his solutions in standard and visual formats, i.e. in terms of probabilities and natural frequencies.
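
    To make the flavor of such a Bayesian solution concrete, here is a generic worked application of Bayes' rule to identification evidence of the Blue Bus kind. The base rate and witness reliability figures are purely illustrative assumptions and do not reproduce the numbers or the particular hypotheticals analyzed in the paper.

```latex
% Let B = "the bus was operated by the Blue Bus company" and E = the identification evidence.
% Assumed base rate P(B) = 0.8, assumed reliability P(E \mid B) = 0.9, P(E \mid \neg B) = 0.2:
P(B \mid E) \;=\; \frac{P(E \mid B)\,P(B)}{P(E \mid B)\,P(B) + P(E \mid \neg B)\,P(\neg B)}
            \;=\; \frac{0.9 \times 0.8}{0.9 \times 0.8 + 0.2 \times 0.2}
            \;=\; \frac{0.72}{0.76} \;\approx\; 0.947
% In natural frequencies: of 100 buses, 80 are Blue, of which 72 are identified as Blue;
% of the 20 others, 4 are misidentified as Blue, so 72 of 76 "Blue" identifications are correct.
```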

  4. LOD map--A visual interface for navigating multiresolution volume visualization.

    Science.gov (United States)

    Wang, Chaoli; Shen, Han-Wei

    2006-01-01

    In multiresolution volume visualization, a visual representation of level-of-detail (LOD) quality is important for us to examine, compare, and validate different LOD selection algorithms. While traditional methods rely on ultimate images for quality measurement, we introduce the LOD map--an alternative representation of LOD quality and a visual interface for navigating multiresolution data exploration. Our measure for LOD quality is based on the formulation of entropy from information theory. The measure takes into account the distortion and contribution of multiresolution data blocks. A LOD map is generated through the mapping of key LOD ingredients to a treemap representation. The ordered treemap layout is used for relatively stable update of the LOD map when the view or LOD changes. This visual interface not only indicates the quality of LODs in an intuitive way, but also provides immediate suggestions for possible LOD improvement through visually-striking features. It also allows us to compare different views and perform rendering budget control. A set of interactive techniques is proposed to make the LOD adjustment a simple and easy task. We demonstrate the effectiveness and efficiency of our approach on large scientific and medical data sets.
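
    The abstract does not reproduce the paper's exact formulation, so the following is only a schematic entropy-style measure of how an LOD distributes quality across blocks. The way the distortion and contribution terms are combined into a probability is an assumption made here for illustration, not the authors' definition.

```latex
% Schematic LOD-quality measure: give each multiresolution block i a weight that
% combines its distortion D_i and contribution C_i, normalize to a probability,
% and score the LOD selection by its Shannon entropy.
p_i = \frac{C_i\,D_i}{\sum_j C_j\,D_j}, \qquad
H(\mathrm{LOD}) = -\sum_i p_i \log_2 p_i
```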

  5. [Intraoperative multidimensional visualization].

    Science.gov (United States)

    Sperling, J; Kauffels, A; Grade, M; Alves, F; Kühn, P; Ghadimi, B M

    2016-12-01

    Modern intraoperative techniques of visualization are increasingly being applied in general and visceral surgery. The combination of diverse techniques provides the possibility of multidimensional intraoperative visualization of specific anatomical structures. Thus, it is possible to differentiate between normal tissue and tumor tissue and therefore exactly define tumor margins. The aim of intraoperative visualization of tissue that is to be resected and tissue that should be spared is to lead to a rational balance between oncological and functional results. Moreover, these techniques help to analyze the physiology and integrity of tissues. Using these methods surgeons are able to analyze tissue perfusion and oxygenation. However, to date it is not clear to what extent these imaging techniques are relevant in the clinical routine. The present manuscript reviews the relevant modern visualization techniques focusing on intraoperative computed tomography and magnetic resonance imaging as well as augmented reality, fluorescence imaging and optoacoustic imaging.

  6. A Novel Visual Interface to Foster Innovation in Mechanical Engineering and Protect from Patent Infringement

    Science.gov (United States)

    Sorce, Salvatore; Malizia, Alessio; Jiang, Pingfei; Atherton, Mark; Harrison, David

    2018-04-01

    One of the main time- and money-consuming tasks in the design of industrial devices and parts is the checking of possible patent infringements. Indeed, the great number of documents to be mined and the wide variety of technical language used to describe inventions are reasons why considerable amounts of time may be needed. On the other hand, the early detection of a possible patent conflict, in addition to reducing the risk of legal disputes, could stimulate designers’ creativity to overcome similarities in overlapping patents. For this reason, there are a lot of existing patent analysis systems, each with its own features and access modes. We have designed a visual interface providing intuitive access to such systems, freeing the designers from the specific knowledge of querying languages and providing them with visual clues. We tested the interface on a framework aimed at representing mechanical engineering patents; the framework is based on a semantic database and provides patent conflict analysis for early-stage designs. The interface supports a visual query composition to obtain a list of potentially overlapping designs.

  7. Auditory recognition memory is inferior to visual recognition memory

    OpenAIRE

    Cohen, Michael A.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2009-01-01

    Visual memory for scenes is surprisingly robust. We wished to examine whether an analogous ability exists in the auditory domain. Participants listened to a variety of sound clips and were tested on their ability to distinguish old from new clips. Stimuli ranged from complex auditory scenes (e.g., talking in a pool hall) to isolated auditory objects (e.g., a dog barking) to music. In some conditions, additional information was provided to help participants with encoding. In every situation, h...

  8. VisIO: enabling interactive visualization of ultra-scale, time-series data via high-bandwidth distributed I/O systems

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Christopher J [Los Alamos National Laboratory; Ahrens, James P [Los Alamos National Laboratory; Wang, Jun [UCF

    2010-10-15

    Petascale simulations compute at resolutions ranging into billions of cells and write terabytes of data for visualization and analysis. Interactive visualization of this time series is a desired step before starting a new run. The I/O subsystem and associated network often are a significant impediment to interactive visualization of time-varying data, as they are not configured or provisioned to provide the necessary I/O read rates. In this paper, we propose a new I/O library for visualization applications: VisIO. Visualization applications commonly use N-to-N reads within their parallel-enabled readers, which provides an incentive for a shared-nothing approach to I/O, similar to other data-intensive approaches such as Hadoop. However, unlike other data-intensive applications, visualization requires: (1) interactive performance for large data volumes, (2) compatibility with MPI and POSIX file system semantics for compatibility with existing infrastructure, and (3) use of existing file formats and their stipulated data partitioning rules. VisIO provides a mechanism for using a non-POSIX distributed file system to provide linear scaling of I/O bandwidth. In addition, we introduce a novel scheduling algorithm that helps to co-locate visualization processes on nodes with the requested data. Testing using VisIO integrated into ParaView was conducted using the Hadoop Distributed File System (HDFS) on TACC's Longhorn cluster. A representative dataset, VPIC, across 128 nodes showed a 64.4% read performance improvement compared to the provided Lustre installation. Also tested was a dataset representing a global ocean salinity simulation, which showed a 51.4% improvement in read performance over Lustre when using our VisIO system. VisIO provides powerful high-performance I/O services to visualization applications, allowing for interactive performance with ultra-scale, time-series data.
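
    The N-to-N read pattern mentioned above, in which each parallel reader touches only its own partition, can be sketched with mpi4py. The file naming and partitioning scheme below are hypothetical, and no HDFS- or VisIO-specific calls are shown; this only illustrates the shared-nothing reading idiom.

```python
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Hypothetical N-to-N layout: one partition file per reader for a given time step,
# so each rank performs an independent, contention-free read of its own block.
my_block = np.fromfile(f"timestep_0042_part{rank:04d}.bin", dtype=np.float32)

# A cheap collective to summarize the distributed data after the parallel read.
local_max = my_block.max() if my_block.size else np.float32("-inf")
global_max = comm.allreduce(local_max, op=MPI.MAX)
if rank == 0:
    print("global max of this time step:", global_max)
```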

  9. Visual memory errors in Parkinson's disease patient with visual hallucinations.

    Science.gov (United States)

    Barnes, J; Boubert, L

    2011-03-01

    The occurrence of visual hallucinations seems to be more prevalent in low light, and hallucinators tend to be more prone to false-positive-type errors in memory tasks. Here we investigated whether the richness of stimuli does indeed affect recognition differently in hallucinating and nonhallucinating participants, and if so, whether this difference extends to identifying spatial context. We compared 36 Parkinson's disease (PD) patients with visual hallucinations, 32 Parkinson's patients without hallucinations, and 36 age-matched controls on a visual memory task where color and black-and-white pictures were presented at different locations. Participants had to recognize the pictures among distracters along with the location of the stimulus. Findings revealed clear differences in performance between the groups. Both PD groups had impaired recognition compared to the controls, but those with hallucinations were significantly more impaired on black-and-white than on color stimuli. In addition, the group with hallucinations was significantly impaired compared to the other two groups on spatial memory. We suggest that not only do PD patients have poorer recognition of pictorial stimuli than controls, but those who present with visual hallucinations also appear to be more heavily reliant on bottom-up sensory input and impaired on spatial ability.

  10. Impact of High-Fidelity Simulation and Pharmacist-Specific Didactic Lectures in Addition to ACLS Provider Certification on Pharmacy Resident ACLS Performance.

    Science.gov (United States)

    Bartel, Billie J

    2014-08-01

    This pilot study explored the use of multidisciplinary high-fidelity simulation and additional pharmacist-focused training methods in training postgraduate year 1 (PGY1) pharmacy residents to provide Advanced Cardiovascular Life Support (ACLS) care. Pharmacy resident confidence and comfort level were assessed after completing these training requirements. The ACLS training requirements for pharmacy residents were revised to include didactic instruction on ACLS pharmacology and rhythm recognition and participation in multidisciplinary high-fidelity simulation ACLS experiences in addition to ACLS provider certification. Surveys were administered to participating residents to assess the impact of this additional education on resident confidence and comfort level in cardiopulmonary arrest situations. The new ACLS didactic and simulation training requirements resulted in increased resident confidence and comfort level in all assessed functions. Residents felt more confident in all areas except providing recommendations for dosing and administration of medications and rhythm recognition after completing the simulation scenarios than with ACLS certification training and the didactic components alone. All residents felt the addition of lectures and simulation experiences better prepared them to function as a pharmacist in the ACLS team. Additional ACLS training requirements for pharmacy residents increased overall awareness of pharmacist roles and responsibilities and greatly improved resident confidence and comfort level in performing most essential pharmacist functions during ACLS situations. © The Author(s) 2013.

  11. Statistical modeling for visualization evaluation through data fusion.

    Science.gov (United States)

    Chen, Xiaoyu; Jin, Ran

    2017-11-01

    There is a high demand for data visualization that provides insights to users in various applications. However, a consistent, online visualization evaluation method to quantify mental workload or user preference is lacking, which leads to an inefficient visualization and user interface design process. Recently, the advancement of interactive and sensing technologies makes electroencephalogram (EEG) signals, eye movements as well as visualization logs available in user-centered evaluation. This paper proposes a data fusion model and the application procedure for quantitative and online visualization evaluation. 15 participants joined the study based on three different visualization designs. The results provide a regularized regression model which can accurately predict the user's evaluation of task complexity, and indicate the significance of all three types of sensing data sets for visualization evaluation. This model can be widely applied to data visualization evaluation, and other user-centered design evaluation and data analysis in human factors and ergonomics. Copyright © 2016 Elsevier Ltd. All rights reserved.
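
    A minimal sketch of the kind of regularized regression such a fusion model implies, using scikit-learn. The fused feature names (EEG band power, fixation statistics, interaction-log counts), the synthetic data, and the sample size are made up for illustration and do not reproduce the paper's model.

```python
import numpy as np
from sklearn.linear_model import LassoCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
n = 15 * 3  # e.g., participants x visualization designs (illustrative size only)

# Hypothetical fused features from the three sensing channels.
X = np.column_stack([
    rng.normal(size=n),   # EEG: frontal theta band power
    rng.normal(size=n),   # eye tracking: mean fixation duration
    rng.poisson(5, n),    # visualization log: number of interactions
])
y = 2.0 * X[:, 0] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n)  # rated task complexity

# L1-regularized regression selects which sensing channels carry signal.
model = make_pipeline(StandardScaler(), LassoCV(cv=5)).fit(X, y)
print("coefficients:", model.named_steps["lassocv"].coef_)
```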

  12. VISUAL DISTRACTION WHILE DRIVING

    Directory of Open Access Journals (Sweden)

    Hajime ITO

    2001-01-01

    The article provides background information and summarizes worldwide trends in research on accident rates, the special characteristics of visual behavior and the effects of visual distraction on drivers and vehicle behavior. It also reports on the state of ISO standardization efforts and related technological trends. Finally, it defines a number of topics for future research in the field of human engineering.

  13. Clustervision: Visual Supervision of Unsupervised Clustering.

    Science.gov (United States)

    Kwon, Bum Chul; Eysenbach, Ben; Verma, Janu; Ng, Kenney; De Filippi, Christopher; Stewart, Walter F; Perer, Adam

    2018-01-01

    Clustering, the process of grouping together similar items into distinct partitions, is a common type of unsupervised machine learning that can be useful for summarizing and aggregating complex multi-dimensional data. However, data can be clustered in many ways, and there exists a large body of algorithms designed to reveal different patterns. While having access to a wide variety of algorithms is helpful, in practice, it is quite difficult for data scientists to choose and parameterize algorithms to get the clustering results relevant for their dataset and analytical tasks. To alleviate this problem, we built Clustervision, a visual analytics tool that helps ensure data scientists find the right clustering among the large number of techniques and parameters available. Our system clusters data using a variety of clustering techniques and parameters and then ranks clustering results utilizing five quality metrics. In addition, users can guide the system to produce more relevant results by providing task-relevant constraints on the data. Our visual user interface allows users to find high quality clustering results, explore the clusters using several coordinated visualization techniques, and select the cluster result that best suits their task. We demonstrate this novel approach using a case study with a team of researchers in the medical domain and showcase that our system empowers users to choose an effective representation of their complex data.
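
    This is not the Clustervision code itself, but a small sketch of the "cluster many ways, then rank by quality metrics" idea it implements, using scikit-learn and the silhouette score as one of several possible ranking metrics (the tool itself uses five).

```python
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import silhouette_score

X, _ = make_blobs(n_samples=300, centers=4, random_state=0)

# Generate many candidate clusterings with different techniques and parameters.
candidates = {}
for k in range(2, 7):
    candidates[f"kmeans_k{k}"] = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X)
    candidates[f"agglo_k{k}"] = AgglomerativeClustering(n_clusters=k).fit_predict(X)

# Rank the candidates by a quality metric so the analyst can inspect the best ones.
ranked = sorted(candidates.items(), key=lambda kv: silhouette_score(X, kv[1]), reverse=True)
for name, labels in ranked[:3]:
    print(name, round(silhouette_score(X, labels), 3))
```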

  14. Pathview Web: user friendly pathway visualization and data integration.

    Science.gov (United States)

    Luo, Weijun; Pant, Gaurav; Bhavnasi, Yeshvant K; Blanchard, Steven G; Brouwer, Cory

    2017-07-03

    Pathway analysis is widely used in omics studies. Pathway-based data integration and visualization is a critical component of the analysis. To address this need, we recently developed a novel R package called Pathview. Pathview maps, integrates and renders a large variety of biological data onto molecular pathway graphs. Here we developed the Pathview Web server, to make pathway visualization and data integration accessible to all scientists, including those without the special computing skills or resources. Pathview Web features an intuitive graphical web interface and a user-centered design. The server not only expands the core functions of Pathview, but also provides many useful features not available in the offline R package. Importantly, the server presents a comprehensive workflow for both regular and integrated pathway analysis of multiple omics data. In addition, the server also provides a RESTful API for programmatic access and convenient integration into third-party software or workflows. Pathview Web is openly and freely accessible at https://pathview.uncc.edu/. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Transcranial magnetic stimulation of visual cortex in memory: cortical state, interference and reactivation of visual content in memory.

    Science.gov (United States)

    van de Ven, Vincent; Sack, Alexander T

    2013-01-01

    Memory for perceptual events includes the neural representation of the sensory information at short or longer time scales. Recent transcranial magnetic stimulation (TMS) studies of human visual cortex provided evidence that sensory cortex contributes to memory functions. In this review, we provide an exhaustive overview of these studies and ascertain how well the available evidence supports the idea of a causal role of sensory cortex in memory retention and retrieval. We discuss the validity and implications of the studies using a number of methodological and theoretical criteria that are relevant for brain stimulation of visual cortex. While most studies applied TMS to visual cortex to interfere with memory functions, a handful of pioneering studies used TMS to 'reactivate' memories in visual cortex. Interestingly, similar effects of TMS on memory were found in different memory tasks, which suggests that different memory systems share a neural mechanism of memory in visual cortex. At the same time, this neural mechanism likely interacts with higher order brain areas. Based on this overview and evaluation, we provide a first attempt to an integrative framework that describes how sensory processes contribute to memory in visual cortex, and how higher order areas contribute to this mechanism. Copyright © 2012 Elsevier B.V. All rights reserved.

  16. Functional MRI of the visual cortex and visual testing in patients with previous optic neuritis

    DEFF Research Database (Denmark)

    Langkilde, Annika Reynberg; Frederiksen, J.L.; Rostrup, Egill

    2002-01-01

    The volume of cortical activation as detected by functional magnetic resonance imaging (fMRI) in the visual cortex has previously been shown to be reduced following optic neuritis (ON). In order to understand the cause of this change, we studied the cortical activation, both the size of the activated area and the signal change following ON, and compared the results with results of neuroophthalmological testing. We studied nine patients with previous acute ON, and 10 healthy persons served as controls, using fMRI with visual stimulation. In addition to a reduced activated volume, patients showed a reduced blood oxygenation level dependent (BOLD) signal increase and a greater asymmetry in the visual cortex, compared with controls. The volume of visual cortical activation was significantly correlated to the result of the contrast sensitivity test. The BOLD signal increase correlated significantly …

  17. FluxVisualizer, a Software to Visualize Fluxes through Metabolic Networks

    Directory of Open Access Journals (Sweden)

    Tim Daniel Rose

    2018-04-01

    FluxVisualizer (Version 1.0, 2017, freely available at https://fluxvisualizer.ibgc.cnrs.fr) is a software tool to visualize flux values on a scalable vector graphic (SVG) representation of a metabolic network by colouring or increasing the width of reaction arrows in the SVG file. FluxVisualizer does not aim to draw metabolic networks but to use the user's own SVG file, allowing them to exploit their representation standards with a minimum of constraints. FluxVisualizer is especially suitable for small to medium size metabolic networks, where a visual representation of the fluxes makes sense. The flux distribution can either be an elementary flux mode (EFM), a flux balance analysis (FBA) result or any other flux distribution. It allows the automatic visualization of a series of pathways of the same network, as is needed for a set of EFMs. The software is coded in Python 3 and provides a graphical user interface (GUI) and an application programming interface (API). All functionalities of the program can be used from both the API and the GUI, and advanced users can add their own functionalities. The software is able to work with various formats of flux distributions (Metatool, CellNetAnalyzer, COPASI and FAME export files) as well as with Excel files. This simple software can save a lot of time when evaluating flux simulations on a metabolic network.
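
    The core operation described above, scaling the stroke width of reaction arrows in a user-supplied SVG according to flux values, can be sketched with the Python standard library. The element ids, the flux dictionary, and the file names below are hypothetical, and this is not FluxVisualizer's own code or API.

```python
import xml.etree.ElementTree as ET

SVG_NS = "http://www.w3.org/2000/svg"
ET.register_namespace("", SVG_NS)

# Hypothetical mapping from reaction-arrow ids in the SVG to flux values (e.g., from an EFM).
fluxes = {"rxn_PGI": 1.0, "rxn_PFK": 2.5, "rxn_FBA": 0.3}

tree = ET.parse("network.svg")  # user-supplied drawing of the metabolic network
for elem in tree.iter(f"{{{SVG_NS}}}path"):
    rxn = elem.get("id")
    if rxn in fluxes:
        # Arrow width proportional to flux; a colour ramp could be applied the same way.
        elem.set("stroke-width", str(1.0 + 2.0 * fluxes[rxn]))

tree.write("network_with_fluxes.svg")
```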

  18. A visual assistance environment for cyclotron operation

    International Nuclear Information System (INIS)

    Okamura, Tetsuya; Murakami, Tohru; Agematsu, Takashi; Okumura, Susumu; Arakawa, Kazuo.

    1993-01-01

    A computer-based operation system for a cyclotron which assists inexperienced operators has been developed. Cyclotron start-up operations require dozens of adjustable parameters to be finely tuned to maximize extracted beam current. The human interfaces of the system provide a visual environment designed to enhance beam parameter adjustments. First, the mental model of operators is analyzed. It is supposed to be composed of five partial mental models: beam behavior model, feasible setting regions model, parameter sensitivity model, parameter mutual relation model, and status map model. Next, based on these models, three visual interfaces are developed, i.e., (1) Beam trajectory is rapidly calculated and graphically displayed whenever the operators change the cyclotron parameters. (2) Feasible setting regions (FSR) of the parameters that satisfy the cyclotron's beam acceptance criteria are indicated. (3) Search traces, being a historical visual map of beam current values, are superimposed on the FSRs. Finally, to evaluate system effectiveness, the search time required to reach maximum beam current conditions was measured. In addition, system operability was evaluated using written questionnaires. Results of the experiment showed that the search time to reach specific beam conditions was reduced by approximately 65% using these interfaces. The written questionnaire survey showed that the operators rated system operability highly. (author)

  19. Reduction of the elevator illusion from continued hypergravity exposure and visual error-corrective feedback

    Science.gov (United States)

    Welch, R. B.; Cohen, M. M.; DeRoshia, C. W.

    1996-01-01

    Ten subjects served as their own controls in two conditions of continuous, centrifugally produced hypergravity (+2 Gz) and a 1-G control condition. Before and after exposure, open-loop measures were obtained of (1) motor control, (2) visual localization, and (3) hand-eye coordination. During exposure in the visual feedback/hypergravity condition, subjects received terminal visual error-corrective feedback from their target pointing, and in the no-visual feedback/hypergravity condition they pointed open loop. As expected, the motor control measures for both experimental conditions revealed very short lived underreaching (the muscle-loading effect) at the outset of hypergravity and an equally transient negative aftereffect on returning to 1 G. The substantial (approximately 17 degrees) initial elevator illusion experienced in both hypergravity conditions declined over the course of the exposure period, whether or not visual feedback was provided. This effect was tentatively attributed to habituation of the otoliths. Visual feedback produced a smaller additional decrement and a postexposure negative after-effect, possible evidence for visual recalibration. Surprisingly, the target-pointing error made during hypergravity in the no-visual-feedback condition was substantially less than that predicted by subjects' elevator illusion. This finding calls into question the neural outflow model as a complete explanation of this illusion.

  20. Effects of visual motion consistent or inconsistent with gravity on postural sway.

    Science.gov (United States)

    Balestrucci, Priscilla; Daprati, Elena; Lacquaniti, Francesco; Maffei, Vincenzo

    2017-07-01

    Vision plays an important role in postural control, and visual perception of the gravity-defined vertical helps maintaining upright stance. In addition, the influence of the gravity field on objects' motion is known to provide a reference for motor and non-motor behavior. However, the role of dynamic visual cues related to gravity in the control of postural balance has been little investigated. In order to understand whether visual cues about gravitational acceleration are relevant for postural control, we assessed the relation between postural sway and visual motion congruent or incongruent with gravity acceleration. Postural sway of 44 healthy volunteers was recorded by means of force platforms while they watched virtual targets moving in different directions and with different accelerations. Small but significant differences emerged in sway parameters with respect to the characteristics of target motion. Namely, for vertically accelerated targets, gravitational motion (GM) was associated with smaller oscillations of the center of pressure than anti-GM. The present findings support the hypothesis that not only static, but also dynamic visual cues about direction and magnitude of the gravitational field are relevant for balance control during upright stance.

  1. Optical radiation and visual health

    International Nuclear Information System (INIS)

    Waxler, M.; Hitchins, V.M.

    1986-01-01

    This book provides a focus on the parameters of ultraviolet, visible, and infrared radiation that could cause long-term visual health problems in humans. It reviews early research on radiation effects on the eye, and gives detailed attention to the hazardous effects of optical radiation on the retinal pigment epithelium and the photoreceptors. These data are further analyzed with regard to five potential long-term visual health problems: retinal degeneration, visual aging, disorders of visual development, ocular drug phototoxicity, and cataracts. Finally, epidemiologic principles for studying the relationships between optical radiation and long-term visual health problems are reviewed, concluding with the implications for future research and radiation protection. The contents include: historical perspectives; optical radiation and cataracts; the involvement of the retinal pigment epithelium (RPE); optical radiation damage to the ocular photoreceptors; possible role of optical radiation in retinal degenerations; optical radiation and the aged eye; optical radiation effects on aging and visual perception; optical radiation effects on visual development; and index

  2. Visual system manifestations of Alzheimer's disease.

    Science.gov (United States)

    Kusne, Yael; Wolf, Andrew B; Townley, Kate; Conway, Mandi; Peyman, Gholam A

    2017-12-01

    Alzheimer's disease (AD) is an increasingly common disease with massive personal and economic costs. While it has long been known that AD impacts the visual system, there has recently been an increased focus on understanding both pathophysiological mechanisms that may be shared between the eye and brain and how related biomarkers could be useful for AD diagnosis. Here, we review pertinent cellular and molecular mechanisms of AD pathophysiology, the presence of AD pathology in the visual system, associated functional changes, and potential development of diagnostic tools based on the visual system. Additionally, we discuss links between AD and visual disorders, including possible pathophysiological mechanisms and their relevance for improving our understanding of AD. © 2016 Acta Ophthalmologica Scandinavica Foundation. Published by John Wiley & Sons Ltd.

  3. Does my step look big in this? A visual illusion leads to safer stepping behaviour.

    Directory of Open Access Journals (Sweden)

    David B Elliott

    Full Text Available BACKGROUND: Tripping is a common factor in falls and a typical safety strategy to avoid tripping on steps or stairs is to increase foot clearance over the step edge. In the present study we asked whether the perceived height of a step could be increased using a visual illusion and whether this would lead to the adoption of a safer stepping strategy, in terms of greater foot clearance over the step edge. The study also addressed the controversial question of whether motor actions are dissociated from visual perception. METHODOLOGY/PRINCIPAL FINDINGS: 21 young, healthy subjects perceived the step to be higher in a configuration of the horizontal-vertical illusion compared to a reverse configuration (p = 0.01). During a simple stepping task, maximum toe elevation changed by an amount corresponding to the size of the visual illusion (p < 0.001). Linear regression analyses showed highly significant associations between perceived step height and maximum toe elevation for all conditions. CONCLUSIONS/SIGNIFICANCE: The perceived height of a step can be manipulated using a simple visual illusion, leading to the adoption of a safer stepping strategy in terms of greater foot clearance over a step edge. In addition, the strong link found between perception of a visual illusion and visuomotor action provides additional support to the view that the original, controversial proposal by Goodale and Milner (1992) of two separate and distinct visual streams for perception and visuomotor action should be re-evaluated.

  4. Principles of Information Visualization for Business Research

    OpenAIRE

    Ioan I. ANDONE

    2008-01-01

    In the era of data-centric science, a large number of visualization tools have been created to help researchers understand increasingly rich business databases. Information visualization is a process of constructing a visual presentation of business quantitative data, especially prepared for managerial use. Interactive information visualization provides researchers with remarkable tools for discovery and innovation. By combining powerful data mining methods with user-controlled interfaces, use...

  5. Master VISUALLY Excel 2010

    CERN Document Server

    Marmel, Elaine

    2010-01-01

    The complete visual reference on Excel basics. Aimed at visual learners who are seeking an all-in-one reference that provides in-depth coverage of Excel from a visual viewpoint, this resource delves into all the newest features of Excel 2010. You'll explore Excel with helpful step-by-step instructions that show you, rather than tell you, how to navigate Excel, work with PivotTables and PivotCharts, use macros to streamline work, and collaborate with other users in one document. This two-color guide features screen shots with specific, numbered instructions so you can learn the actions you need

  6. Right hemispheric dominance of visual phenomena evoked by intracerebral stimulation of the human visual cortex.

    Science.gov (United States)

    Jonas, Jacques; Frismand, Solène; Vignal, Jean-Pierre; Colnat-Coulbois, Sophie; Koessler, Laurent; Vespignani, Hervé; Rossion, Bruno; Maillard, Louis

    2014-07-01

    Electrical brain stimulation can provide important information about the functional organization of the human visual cortex. Here, we report the visual phenomena evoked by a large number (562) of intracerebral electrical stimulations performed at low-intensity with depth electrodes implanted in the occipito-parieto-temporal cortex of 22 epileptic patients. Focal electrical stimulation evoked primarily visual hallucinations with various complexities: simple (spot or blob), intermediary (geometric forms), or complex meaningful shapes (faces); visual illusions and impairments of visual recognition were more rarely observed. With the exception of the most posterior cortical sites, the probability of evoking a visual phenomenon was significantly higher in the right than the left hemisphere. Intermediary and complex hallucinations, illusions, and visual recognition impairments were almost exclusively evoked by stimulation in the right hemisphere. The probability of evoking a visual phenomenon decreased substantially from the occipital pole to the most anterior sites of the temporal lobe, and this decrease was more pronounced in the left hemisphere. The greater sensitivity of the right occipito-parieto-temporal regions to intracerebral electrical stimulation to evoke visual phenomena supports a predominant role of right hemispheric visual areas from perception to recognition of visual forms, regardless of visuospatial and attentional factors. Copyright © 2013 Wiley Periodicals, Inc.

  7. Processing Visual Images

    International Nuclear Information System (INIS)

    Litke, Alan

    2006-01-01

    The back of the eye is lined by an extraordinary biological pixel detector, the retina. This neural network is able to extract vital information about the external visual world, and transmit this information in a timely manner to the brain. In this talk, Professor Litke will describe a system that has been implemented to study how the retina processes and encodes dynamic visual images. Based on techniques and expertise acquired in the development of silicon microstrip detectors for high energy physics experiments, this system can simultaneously record the extracellular electrical activity of hundreds of retinal output neurons. After presenting first results obtained with this system, Professor Litke will describe additional applications of this incredible technology.

  8. Social inequalities in blindness and visual impairment: A review of social determinants

    Directory of Open Access Journals (Sweden)

    Anna Rius

    2012-01-01

    Full Text Available Health inequities are related to social determinants based on gender, socioeconomic status, ethnicity, race, living in a specific geographic region, or having a specific health condition. Such inequities were reviewed for blindness and visual impairment by searching for studies on the subject in PubMed from 2000 to 2011 in the English and Spanish languages. The goal of this article is to provide a current review in understanding how inequities based specifically on the aforementioned social determinants on health influence the prevalence of visual impairment and blindness. With regards to gender inequality, women have a higher prevalence of visual impairment and blindness, which cannot be only reasoned based on age or access to service. Socioeconomic status measured as higher income, higher educational status, or non-manual occupational social class was inversely associated with prevalence of blindness or visual impairment. Ethnicity and race were associated with visual impairment and blindness, although there is general confusion over this socioeconomic position determinant. Geographic inequalities and visual impairment were related to income (of the region, nation or continent), living in a rural area, and an association with socioeconomic and political context was suggested. While inequalities related to blindness and visual impairment have rarely been specifically addressed in research, there is still evidence of the association of social determinants and prevalence of blindness and visual impairment. Additional research should be done on the associations with intermediary determinants and socioeconomic and political context.

  9. Social inequalities in blindness and visual impairment: A review of social determinants

    Science.gov (United States)

    Ulldemolins, Anna Rius; Lansingh, Van C; Valencia, Laura Guisasola; Carter, Marissa J; Eckert, Kristen A

    2012-01-01

    Health inequities are related to social determinants based on gender, socioeconomic status, ethnicity, race, living in a specific geographic region, or having a specific health condition. Such inequities were reviewed for blindness and visual impairment by searching for studies on the subject in PubMed from 2000 to 2011 in the English and Spanish languages. The goal of this article is to provide a current review in understanding how inequities based specifically on the aforementioned social determinants on health influence the prevalence of visual impairment and blindness. With regards to gender inequality, women have a higher prevalence of visual impairment and blindness, which cannot be only reasoned based on age or access to service. Socioeconomic status measured as higher income, higher educational status, or non-manual occupational social class was inversely associated with prevalence of blindness or visual impairment. Ethnicity and race were associated with visual impairment and blindness, although there is general confusion over this socioeconomic position determinant. Geographic inequalities and visual impairment were related to income (of the region, nation or continent), living in a rural area, and an association with socioeconomic and political context was suggested. While inequalities related to blindness and visual impairment have rarely been specifically addressed in research, there is still evidence of the association of social determinants and prevalence of blindness and visual impairment. Additional research should be done on the associations with intermediary determinants and socioeconomic and political context. PMID:22944744

  10. A New Visual Stimulation Program for Improving Visual Acuity in Children with Visual Impairment: A Pilot Study

    Science.gov (United States)

    Tsai, Li-Ting; Hsu, Jung-Lung; Wu, Chien-Te; Chen, Chia-Ching; Su, Yu-Chin

    2016-01-01

    The purpose of this study was to investigate the effectiveness of visual rehabilitation of a computer-based visual stimulation (VS) program combining checkerboard pattern reversal (passive stimulation) with oddball stimuli (attentional modulation) for improving the visual acuity (VA) of visually impaired (VI) children and children with amblyopia and additional developmental problems. Six children (three females, three males; mean age = 3.9 ± 2.3 years) with impaired VA caused by deficits along the anterior and/or posterior visual pathways were recruited. Participants received eight rounds of VS training (two rounds per week) of at least eight sessions per round. Each session consisted of stimulation with 200 or 300 pattern reversals. Assessments of VA (assessed with the Lea symbol VA test or Teller VA cards), visual evoked potential (VEP), and functional vision (assessed with the Chinese-version Functional Vision Questionnaire, FVQ) were carried out before and after the VS program. Significant gains in VA were found after the VS training [VA = 1.05 logMAR ± 0.80 to 0.61 logMAR ± 0.53, Z = –2.20, asymptotic significance (2-tailed) = 0.028]. No significant changes were observed in the FVQ assessment [92.8 ± 12.6 to 100.8 ± 15.4, Z = –1.46, asymptotic significance (2-tailed) = 0.144]. VEP measurement showed improvement in P100 latency and amplitude or integration of the waveform in two participants. Our results indicate that a computer-based VS program with passive checkerboard stimulation, oddball stimulus design, and interesting auditory feedback could be considered as a potential intervention option to improve the VA of a wide age range of VI children and children with impaired VA combined with other neurological disorders. PMID:27148014

  11. A New Visual Stimulation Program for Improving Visual Acuity in Children with Visual Impairment: A Pilot Study.

    Science.gov (United States)

    Tsai, Li-Ting; Hsu, Jung-Lung; Wu, Chien-Te; Chen, Chia-Ching; Su, Yu-Chin

    2016-01-01

    The purpose of this study was to investigate the effectiveness of visual rehabilitation of a computer-based visual stimulation (VS) program combining checkerboard pattern reversal (passive stimulation) with oddball stimuli (attentional modulation) for improving the visual acuity (VA) of visually impaired (VI) children and children with amblyopia and additional developmental problems. Six children (three females, three males; mean age = 3.9 ± 2.3 years) with impaired VA caused by deficits along the anterior and/or posterior visual pathways were recruited. Participants received eight rounds of VS training (two rounds per week) of at least eight sessions per round. Each session consisted of stimulation with 200 or 300 pattern reversals. Assessments of VA (assessed with the Lea symbol VA test or Teller VA cards), visual evoked potential (VEP), and functional vision (assessed with the Chinese-version Functional Vision Questionnaire, FVQ) were carried out before and after the VS program. Significant gains in VA were found after the VS training [VA = 1.05 logMAR ± 0.80 to 0.61 logMAR ± 0.53, Z = -2.20, asymptotic significance (2-tailed) = 0.028]. No significant changes were observed in the FVQ assessment [92.8 ± 12.6 to 100.8 ± 15.4, Z = -1.46, asymptotic significance (2-tailed) = 0.144]. VEP measurement showed improvement in P100 latency and amplitude or integration of the waveform in two participants. Our results indicate that a computer-based VS program with passive checkerboard stimulation, oddball stimulus design, and interesting auditory feedback could be considered as a potential intervention option to improve the VA of a wide age range of VI children and children with impaired VA combined with other neurological disorders.

  12. A new visual stimulation program for improving visual acuity in children with visual impairment: a pilot study

    Directory of Open Access Journals (Sweden)

    Li-Ting eTsai

    2016-04-01

    Full Text Available The purpose of this study was to investigate the effectiveness of visual rehabilitation of a computer-based visual stimulation (VS) program combining checkerboard pattern reversal (passive stimulation) with oddball stimuli (attentional modulation) for improving the visual acuity (VA) of visually impaired (VI) children and children with amblyopia and additional developmental problems. Six children (3 females, 3 males; mean age = 3.9 ± 2.3 years) with impaired VA caused by deficits along the anterior and/or posterior visual pathways were recruited. Participants received eight rounds of VS training (two rounds per week) of at least 8 sessions per round. Each session consisted of stimulation with 200 or 300 pattern reversals. Assessments of VA (assessed with the Lea symbol VA test or Teller VA cards), visual evoked potential (VEP), and functional vision (assessed with the Chinese-version Functional Vision Questionnaire, FVQ) were carried out before and after the VS program. Significant gains in VA were found after the VS training (VA = 1.05 logMAR ± 0.80 to 0.61 logMAR ± 0.53, Z = -2.20, asymptotic significance (2-tailed) = 0.028). No significant changes were observed in the FVQ assessment (92.8 ± 12.6 to 100.8 ± 15.4, Z = -1.46, asymptotic significance (2-tailed) = 0.144). VEP measurement showed improvement in P100 latency and amplitude or integration of the waveform in two participants. Our results indicate that a computer-based VS program with passive checkerboard stimulation, oddball stimulus design, and interesting auditory feedback could be considered as a potential intervention option to improve the VA of a wide age range of VI children and children with impaired VA combined with other neurological disorders.

  13. Learning Visualizations by Analogy: Promoting Visual Literacy through Visualization Morphing.

    Science.gov (United States)

    Ruchikachorn, Puripant; Mueller, Klaus

    2015-09-01

    We propose the concept of teaching (and learning) unfamiliar visualizations by analogy, that is, demonstrating an unfamiliar visualization method by linking it to another more familiar one, where the in-betweens are designed to bridge the gap between these two visualizations and explain the difference in a gradual manner. As opposed to a textual description, our morphing explains an unfamiliar visualization through purely visual means. We demonstrate our idea by way of four visualization pair examples: data table and parallel coordinates, scatterplot matrix and hyperbox, linear chart and spiral chart, and hierarchical pie chart and treemap. The analogy is commutative, i.e., any member of the pair can be the unfamiliar visualization. A series of studies showed that this new paradigm can be an effective teaching tool. The participants could understand the unfamiliar visualization methods in all of the four pairs either fully or at least significantly better after they observed or interacted with the transitions from the familiar counterpart. The four examples suggest how helpful visualization pairings can be identified, and they will hopefully inspire other visualization morphings and associated transition strategies to be identified.
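
    The record above walks through morphing between a familiar and an unfamiliar visualization. As a rough, hedged illustration of the idea for one of the listed pairs (linear chart and spiral chart), the Python sketch below linearly interpolates point positions between the two layouts; the toy time series, the turn length of 24 samples per revolution, and the output filename are invented for the example and do not come from the paper.

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(3)
values = rng.normal(size=96).cumsum()        # toy time series (hypothetical data)
idx = np.arange(values.size)

# Familiar layout: a linear chart where x is time and y is the value.
linear_xy = np.column_stack([idx, values])

# Unfamiliar layout: a spiral chart, one turn per 24 samples, value mapped to radius.
theta = 2 * np.pi * idx / 24
radius = 5 + 0.05 * idx + 0.3 * (values - values.min())
spiral_xy = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])

def normalize(xy):
    """Rescale a layout to the unit square so in-between frames stay on screen."""
    return (xy - xy.min(axis=0)) / (xy.max(axis=0) - xy.min(axis=0))

linear_xy, spiral_xy = normalize(linear_xy), normalize(spiral_xy)

# Draw a few in-between frames; t = 0 is the familiar chart, t = 1 the unfamiliar one.
fig, axes = plt.subplots(1, 5, figsize=(15, 3))
for ax, t in zip(axes, np.linspace(0, 1, 5)):
    xy = (1 - t) * linear_xy + t * spiral_xy
    ax.plot(xy[:, 0], xy[:, 1], linewidth=1)
    ax.set_title(f"t = {t:.2f}")
    ax.axis("off")
fig.savefig("linear_to_spiral_morph.png", dpi=150)
```

    The intermediate frames play the role of the in-betweens that bridge the two visualizations; in an interactive version the parameter t would be driven by the learner.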

  14. Selective transfer of visual working memory training on Chinese character learning.

    Science.gov (United States)

    Opitz, Bertram; Schneiders, Julia A; Krick, Christoph M; Mecklinger, Axel

    2014-01-01

    Previous research has shown a systematic relationship between phonological working memory capacity and second language proficiency for alphabetic languages. However, little is known about the impact of working memory processes on second language learning in a non-alphabetic language such as Mandarin Chinese. Due to the greater complexity of the Chinese writing system we expect that visual working memory rather than phonological working memory exerts a unique influence on learning Chinese characters. This issue was explored in the present experiment by comparing visual working memory training with an active (auditory working memory training) control condition and a passive, no training control condition. Training induced modulations in language-related brain networks were additionally examined using functional magnetic resonance imaging in a pretest-training-posttest design. As revealed by pre- to posttest comparisons and analyses of individual differences in working memory training gains, visual working memory training led to positive transfer effects on visual Chinese vocabulary learning compared to both control conditions. In addition, we found sustained activation after visual working memory training in the (predominantly visual) left infero-temporal cortex that was associated with behavioral transfer. In the control conditions, activation either increased (active control condition) or decreased (passive control condition) without reliable behavioral transfer effects. This suggests that visual working memory training leads to more efficient processing and more refined responses in brain regions involved in visual processing. Furthermore, visual working memory training boosted additional activation in the precuneus, presumably reflecting mental image generation of the learned characters. We, therefore, suggest that the conjoint activity of the mid-fusiform gyrus and the precuneus after visual working memory training reflects an interaction of working memory and

  15. Visual working memory as visual attention sustained internally over time.

    Science.gov (United States)

    Chun, Marvin M

    2011-05-01

    Visual working memory and visual attention are intimately related, such that working memory encoding and maintenance reflects actively sustained attention to a limited number of visual objects and events important for ongoing cognition and action. Although attention is typically considered to operate over perceptual input, a recent taxonomy proposes to additionally consider how attention can be directed to internal perceptual representations in the absence of sensory input, as well as other internal memories, choices, and thoughts (Chun, Golomb, & Turk-Browne, 2011). Such internal attention enables prolonged binding of features into integrated objects, along with enhancement of relevant sensory mechanisms. These processes are all limited in capacity, although different types of working memory and attention, such as spatial vs. object processing, operate independently with separate capacity. Overall, the success of maintenance depends on the ability to inhibit both external (perceptual) and internal (cognitive) distraction. Working memory is the interface by which attentional mechanisms select and actively maintain relevant perceptual information from the external world as internal representations within the mind. Copyright © 2011. Published by Elsevier Ltd.

  16. Visualization of Minkowski operations by computer graphics techniques

    NARCIS (Netherlands)

    Roerdink, J.B.T.M.; Blaauwgeers, G.S.M.; Serra, J; Soille, P

    1994-01-01

    We consider the problem of visualizing 3D objects defined as a Minkowski addition or subtraction of elementary objects. It is shown that such visualizations can be obtained by using techniques from computer graphics such as ray tracing and Constructive Solid Geometry. Applications of the method are
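
    As a companion to the record above, the sketch below shows Minkowski addition itself on discrete 2-D point sets in Python. It does not reproduce the paper's approach (ray tracing and Constructive Solid Geometry on 3-D objects); the square and cross-shaped sets are made up purely for illustration.

```python
import numpy as np

def minkowski_sum(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Minkowski addition A (+) B = {p + q : p in A, q in B} for 2-D point sets.

    `a` and `b` are (n, 2) and (m, 2) arrays of sample points; the result holds
    all n * m pairwise sums with duplicates removed.
    """
    sums = a[:, None, :] + b[None, :, :]          # shape (n, m, 2)
    return np.unique(sums.reshape(-1, 2), axis=0)

if __name__ == "__main__":
    # A small filled square and a cross-shaped structuring element (illustrative only).
    square = np.array([(x, y) for x in range(3) for y in range(3)])
    cross = np.array([(0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)])
    dilated = minkowski_sum(square, cross)
    print(f"{len(square)} x {len(cross)} input points -> {len(dilated)} points in A (+) B")
```

    Minkowski subtraction (erosion) can be sketched analogously by keeping only the points p for which p + q lies in A for every q in B.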

  17. Math for visualization, visualizing math

    NARCIS (Netherlands)

    Wijk, van J.J.; Hart, G.; Sarhangi, R.

    2013-01-01

    I present an overview of our work in visualization, and reflect on the role of mathematics therein. First, mathematics can be used as a tool to produce visualizations, which is illustrated with examples from information visualization, flow visualization, and cartography. Second, mathematics itself

  18. Robust selectivity to two-object images in human visual cortex

    Science.gov (United States)

    Agam, Yigal; Liu, Hesheng; Papanastassiou, Alexander; Buia, Calin; Golby, Alexandra J.; Madsen, Joseph R.; Kreiman, Gabriel

    2010-01-01

    SUMMARY We can recognize objects in a fraction of a second in spite of the presence of other objects [1–3]. The responses in macaque areas V4 and inferior temporal cortex [4–15] to a neuron’s preferred stimuli are typically suppressed by the addition of a second object within the receptive field (see however [16, 17]). How can this suppression be reconciled with rapid visual recognition in complex scenes? One option is that certain “special categories” are unaffected by other objects [18] but this leaves the problem unsolved for other categories. Another possibility is that serial attentional shifts help ameliorate the problem of distractor objects [19–21]. Yet, psychophysical studies [1–3], scalp recordings [1] and neurophysiological recordings [14, 16, 22–24], suggest that the initial sweep of visual processing contains a significant amount of information. We recorded intracranial field potentials in human visual cortex during presentation of flashes of two-object images. Visual selectivity from temporal cortex during the initial ~200 ms was largely robust to the presence of other objects. We could train linear decoders on the responses to isolated objects and decode information in two-object images. These observations are compatible with parallel, hierarchical and feed-forward theories of rapid visual recognition [25] and may provide a neural substrate to begin to unravel rapid recognition in natural scenes. PMID:20417105
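
    The decoding step mentioned above (training linear decoders on responses to isolated objects and reading out two-object images) can be sketched on synthetic data. Everything below is invented for illustration: the number of recording sites and trials, the toy response model in which the second object contributes a suppressed copy of its single-object pattern, and the choice of scikit-learn's logistic regression as the linear decoder.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_sites, n_trials = 50, 200                     # hypothetical recording sites and trials

# Synthetic response patterns: each object class evokes its own spatial signature.
patterns = rng.normal(size=(2, n_sites))        # signatures of object class 0 and 1
labels_single = rng.integers(0, 2, n_trials)
x_single = patterns[labels_single] + rng.normal(scale=1.0, size=(n_trials, n_sites))

# Train the linear decoder on isolated-object responses only.
decoder = LogisticRegression(max_iter=1000).fit(x_single, labels_single)

# Two-object trials: the target pattern plus a partially suppressed second object.
labels_pair = rng.integers(0, 2, n_trials)
x_pair = (patterns[labels_pair] + 0.5 * patterns[1 - labels_pair]
          + rng.normal(scale=1.0, size=(n_trials, n_sites)))

print("decoding accuracy on two-object trials:", decoder.score(x_pair, labels_pair))
```

    In this toy setup the decoder trained only on single-object trials still generalizes to the two-object trials, which is the kind of robustness the record describes.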

  19. Experiences in using DISCUS for visualizing human communication

    Science.gov (United States)

    Groehn, Matti; Nieminen, Marko; Haho, Paeivi; Smeds, Riitta

    2000-02-01

    In this paper, we present further improvements to the DISCUS software that can be used to record and analyze the flow and contents of business process simulation session discussion. The tool was initially introduced at the 'Visual Data Exploration and Analysis IV' conference. The initial features of the tool enabled the visualization of discussion flow in business process simulation sessions and the creation of SOM analyses. The improvements of the tool consist of additional visualization possibilities that enable quick on-line analyses and improved graphical statistics. We have also created the very first interface to audio data and implemented two ways to visualize it. We also outline additional possibilities to use the tool in other application areas: these include usability testing and the possibility to use the tool for capturing design rationale in a product development process. The data gathered with DISCUS may be used in other applications, and further work may be done with data mining techniques.

  20. Quality of vision, patient satisfaction and long-term visual function after bilateral implantation of a low addition multifocal intraocular lens.

    Science.gov (United States)

    Pedrotti, Emilio; Mastropasqua, Rodolfo; Bonetto, Jacopo; Demasi, Christian; Aiello, Francesco; Nucci, Carlo; Mariotti, Cesare; Marchini, Giorgio

    2017-07-17

    The aim of the current study was to compare the quality of vision, contrast sensitivity and patient satisfaction with a biaspheric, segmented, rotationally asymmetric IOL (Lentis Comfort LS-313 MF 15, Oculentis GmbH, Berlin, Germany) as opposed to those of a monofocal IOL. This prospective single-blind comparative study included two groups of patients affected by bilateral senile cataract who underwent lens extraction and IOL implantation. The first group received a bilateral implantation of a monofocal IOL, and the second group received a bilateral implantation of the Comfort IOL. Twelve months after surgery, uncorrected and corrected visual acuity at different distances (30, 50, 70 cm and 4 m), defocus curve and contrast sensitivity were assessed. Patient satisfaction and spectacle independence were evaluated by means of the NEI RQL-42 questionnaire. No significant differences were found between the groups in terms of near vision. The group of patients implanted with a Comfort IOL obtained the best results at intermediate distances (50 and 70 cm; P < .001). Both groups showed an excellent uncorrected distance visual acuity (4 m). No statistically significant differences were found in terms of corrected near, intermediate and distance visual acuity. Concerning contrast sensitivity, no statistically significant differences between the groups were observed at any cycles per degree. The NEI RQL-42 questionnaire showed statistically significant differences between the groups for "near vision" (P = .015), "dependence on correction" (P = .048) and "suboptimal correction" (P < .001) subscales. Our findings indicated that the Comfort IOL +1.5 D provides good intermediate spectacle independence together with a high quality of vision, a low amount of subjective symptoms and a contrast sensitivity similar to that obtained with a monofocal IOL.

  1. The role of 3-D interactive visualization in blind surveys of H I in galaxies

    Science.gov (United States)

    Punzo, D.; van der Hulst, J. M.; Roerdink, J. B. T. M.; Oosterloo, T. A.; Ramatsoku, M.; Verheijen, M. A. W.

    2015-09-01

    Upcoming H I surveys will deliver large datasets, and automated processing using the full 3-D information (two positional dimensions and one spectral dimension) to find and characterize H I objects is imperative. In this context, visualization is an essential tool for enabling qualitative and quantitative human control on an automated source finding and analysis pipeline. We discuss how Visual Analytics, the combination of automated data processing and human reasoning, creativity and intuition, supported by interactive visualization, enables flexible and fast interaction with the 3-D data, helping the astronomer to deal with the analysis of complex sources. 3-D visualization, coupled to modeling, provides additional capabilities helping the discovery and analysis of subtle structures in the 3-D domain. The requirements for a fully interactive visualization tool are: coupled 1-D/2-D/3-D visualization, quantitative and comparative capabilities, combined with supervised semi-automated analysis. Moreover, the source code must have the following characteristics for enabling collaborative work: open, modular, well documented, and well maintained. We review four state-of-the-art 3-D visualization packages, assessing their capabilities and feasibility for use in the case of 3-D astronomical data.

  2. Fourier power, subjective distance, and object categories all provide plausible models of BOLD responses in scene-selective visual areas

    Science.gov (United States)

    Lescroart, Mark D.; Stansbury, Dustin E.; Gallant, Jack L.

    2015-01-01

    Perception of natural visual scenes activates several functional areas in the human brain, including the Parahippocampal Place Area (PPA), Retrosplenial Complex (RSC), and the Occipital Place Area (OPA). It is currently unclear what specific scene-related features are represented in these areas. Previous studies have suggested that PPA, RSC, and/or OPA might represent at least three qualitatively different classes of features: (1) 2D features related to Fourier power; (2) 3D spatial features such as the distance to objects in a scene; or (3) abstract features such as the categories of objects in a scene. To determine which of these hypotheses best describes the visual representation in scene-selective areas, we applied voxel-wise modeling (VM) to BOLD fMRI responses elicited by a set of 1386 images of natural scenes. VM provides an efficient method for testing competing hypotheses by comparing predictions of brain activity based on encoding models that instantiate each hypothesis. Here we evaluated three different encoding models that instantiate each of the three hypotheses listed above. We used linear regression to fit each encoding model to the fMRI data recorded from each voxel, and we evaluated each fit model by estimating the amount of variance it predicted in a withheld portion of the data set. We found that voxel-wise models based on Fourier power or the subjective distance to objects in each scene predicted much of the variance predicted by a model based on object categories. Furthermore, the response variance explained by these three models is largely shared, and the individual models explain little unique variance in responses. Based on an evaluation of previous studies and the data we present here, we conclude that there is currently no good basis to favor any one of the three alternative hypotheses about visual representation in scene-selective areas. We offer suggestions for further studies that may help resolve this issue. PMID:26594164
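
    A minimal sketch of the voxel-wise modeling procedure described above (fit a linear encoding model per voxel, then score it by the variance it explains on withheld data) is given below. The feature matrices and the single synthetic voxel are fabricated, ridge regression stands in for the regularized linear regression, and only the 1386-image count echoes the study.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_images, n_features = 1386, 100       # 1386 scenes as in the record; features are synthetic

# Hypothetical feature matrices for two competing encoding models
# (e.g. Fourier power versus object categories).
features_a = rng.normal(size=(n_images, n_features))
features_b = rng.normal(size=(n_images, n_features))

# Synthetic single-voxel response driven by model A's features plus noise.
true_weights = rng.normal(size=n_features)
voxel = features_a @ true_weights + rng.normal(scale=5.0, size=n_images)

train, test = slice(0, 1000), slice(1000, None)   # withhold part of the data for evaluation

for name, feats in (("model A", features_a), ("model B", features_b)):
    fit = Ridge(alpha=10.0).fit(feats[train], voxel[train])
    held_out_r2 = r2_score(voxel[test], fit.predict(feats[test]))
    print(name, "held-out variance explained:", round(held_out_r2, 3))
```

    Comparing the held-out variance explained across models, voxel by voxel, is the comparison the record uses to adjudicate between the competing hypotheses.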

  3. Advancing Water Science through Data Visualization

    Science.gov (United States)

    Li, X.; Troy, T.

    2014-12-01

    As water scientists, we are increasingly handling larger and larger datasets with many variables, making it easy to lose ourselves in the details. Advanced data visualization will play an increasingly significant role in propelling the development of water science in research, economy, policy and education. It can enable analysis within research and further data scientists' understanding of behavior and processes, and can potentially affect how the public, whom we often want to inform, understands our work. Unfortunately for water scientists, data visualization is approached in an ad hoc manner when a more formal methodology or understanding could potentially significantly improve both research within the academy and outreach to the public. Firstly, to broaden and deepen scientific understanding, data visualization can allow for more analyzed targets to be processed simultaneously and can represent the variables effectively, finding patterns, trends and relationships; thus it can even explore new research directions or branches of water science. Depending on visualization, we can detect and separate the pivotal and trivial influential factors more clearly to assume and abstract the original complex target system. Providing direct visual perception of the differences between observation data and model predictions, data visualization allows researchers to quickly examine the quality of models in water science. Secondly, data visualization can also improve public awareness and perhaps influence behavior. Offering decision makers clearer perspectives of the potential profits of water, data visualization can amplify the economic value of water science and also increase relevant employment rates. Providing policymakers compelling visuals of the role of water for social and natural systems, data visualization can advance water management and legislation on water conservation. By building the public's own data visualization through apps and games about water

  4. Visuals Matter! Designing and using effective visual representations to support project and portfolio decisions

    DEFF Research Database (Denmark)

    Geraldi, Joana; Arlt, Mario

    This book is the result of a two-year research project, funded by Project Management Institute and University College London, on data visualization in the project and portfolio management contexts. Visuals are powerful and constitute an integral part of analyzing problems and making decisions. They can help managers to be sharper and quicker, especially if visuals are used in a mindful manner. The intent of this book is to increase the awareness of project, program and portfolio practitioners and scholars about the importance of visuals and to provide practical recommendations on how they can be used and designed mindfully. The research, which underpins this book, focuses on the impact of visuals on cognition of data in project portfolio decisions. The complexity of portfolio problems often exceeds human cognitive limitations as a result of a number of factors, such as the large number...

  5. The Visual Geophysical Exploration Environment: A Multi-dimensional Scientific Visualization

    Science.gov (United States)

    Pandya, R. E.; Domenico, B.; Murray, D.; Marlino, M. R.

    2003-12-01

    The Visual Geophysical Exploration Environment (VGEE) is an online learning environment designed to help undergraduate students understand fundamental Earth system science concepts. The guiding principle of the VGEE is the importance of hands-on interaction with scientific visualization and data. The VGEE consists of four elements: 1) an online, inquiry-based curriculum for guiding student exploration; 2) a suite of El Nino-related data sets adapted for student use; 3) a learner-centered interface to a scientific visualization tool; and 4) a set of concept models (interactive tools that help students understand fundamental scientific concepts). There are two key innovations featured in this interactive poster session. One is the integration of concept models and the visualization tool. Concept models are simple, interactive, Java-based illustrations of fundamental physical principles. We developed eight concept models and integrated them into the visualization tool to enable students to probe data. The ability to probe data using a concept model addresses the common problem of transfer: the difficulty students have in applying theoretical knowledge to everyday phenomenon. The other innovation is a visualization environment and data that are discoverable in digital libraries, and installed, configured, and used for investigations over the web. By collaborating with the Integrated Data Viewer developers, we were able to embed a web-launchable visualization tool and access to distributed data sets into the online curricula. The Thematic Real-time Environmental Data Distributed Services (THREDDS) project is working to provide catalogs of datasets that can be used in new VGEE curricula under development. By cataloging this curricula in the Digital Library for Earth System Education (DLESE), learners and educators can discover the data and visualization tool within a framework that guides their use.

  6. Functional MRI of the visual cortex and visual testing in patients with previous optic neuritis

    DEFF Research Database (Denmark)

    Langkilde, Annika Reynberg; Frederiksen, J.L.; Rostrup, Egill

    2002-01-01

    The volume of cortical activation as detected by functional magnetic resonance imaging (fMRI) in the visual cortex has previously been shown to be reduced following optic neuritis (ON). In order to understand the cause of this change, we studied the cortical activation, both the size of the activated area and the signal change following ON, and compared the results with the results of neuroophthalmological testing. We studied nine patients with previous acute ON, and 10 healthy persons served as controls, using fMRI with visual stimulation. In addition to a reduced activated volume, patients showed ... to both the results of the contrast sensitivity test and to the Snellen visual acuity. Our results indicate that fMRI is a useful method for the study of ON, even in cases where the visual acuity is severely impaired. The reduction in activated volume could be explained as a reduced neuronal input...

  7. The Selection of Tangible Symbols by Educators of Students with Visual Impairments and Additional Disabilities

    Science.gov (United States)

    Trief, Ellen; Bruce, Susan M.; Cascella, Paul W.

    2010-01-01

    Tangible symbols are objects or partial objects that can be physically manipulated and that share a perceptual relationship with what they represent, known as the referent. They make fewer demands on memory and representational ability, making them an appropriate expressive form of communication for individuals with visual impairments and…

  8. Analyzing Earth Science Research Networking through Visualizations

    Science.gov (United States)

    Hasnain, S.; Stephan, R.; Narock, T.

    2017-12-01

    Using D3.js we visualize collaboration amongst several geophysical science organizations, such as the American Geophysical Union (AGU) and the Federation of Earth Science Information Partners (ESIP). We look at historical trends in Earth Science research topics, cross-domain collaboration, and topics of interest to the general population. The visualization techniques used provide an effective way for non-experts to easily explore distributed and heterogeneous Big Data. Analysis of these visualizations provides stakeholders with insights into optimizing meetings, performing impact evaluation, structuring outreach efforts, and identifying new opportunities for collaboration.
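
    The study above builds its interactive views with D3.js in the browser. As a non-interactive, hedged stand-in, the Python sketch below lays out a small collaboration graph with a force-directed algorithm using networkx and matplotlib; the organizations and edge weights are hypothetical and only echo the kinds of entities named in the abstract.

```python
import matplotlib.pyplot as plt
import networkx as nx

# Hypothetical collaboration counts between organizations (not data from the study).
collaborations = [
    ("AGU", "ESIP", 12),
    ("AGU", "NASA", 8),
    ("ESIP", "NASA", 5),
    ("ESIP", "NOAA", 7),
    ("NOAA", "NASA", 3),
]

graph = nx.Graph()
graph.add_weighted_edges_from(collaborations)

pos = nx.spring_layout(graph, seed=42)                      # force-directed layout
edge_widths = [graph[u][v]["weight"] / 3 for u, v in graph.edges]
nx.draw_networkx(graph, pos, node_color="#8ecae6", width=edge_widths)
plt.axis("off")
plt.savefig("collaboration_network.png", dpi=150)
```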

  9. Visual short-term memory load reduces retinotopic cortex response to contrast.

    Science.gov (United States)

    Konstantinou, Nikos; Bahrami, Bahador; Rees, Geraint; Lavie, Nilli

    2012-11-01

    Load Theory of attention suggests that high perceptual load in a task leads to reduced sensory visual cortex response to task-unrelated stimuli resulting in "load-induced blindness" [e.g., Lavie, N. Attention, distraction and cognitive control under load. Current Directions in Psychological Science, 19, 143-148, 2010; Lavie, N. Distracted and confused?: Selective attention under load. Trends in Cognitive Sciences, 9, 75-82, 2005]. Consideration of the findings that visual STM (VSTM) involves sensory recruitment [e.g., Pasternak, T., & Greenlee, M. Working memory in primate sensory systems. Nature Reviews Neuroscience, 6, 97-107, 2005] within Load Theory led us to a new hypothesis regarding the effects of VSTM load on visual processing. If VSTM load draws on sensory visual capacity, then similar to perceptual load, high VSTM load should also reduce visual cortex response to incoming stimuli leading to a failure to detect them. We tested this hypothesis with fMRI and behavioral measures of visual detection sensitivity. Participants detected the presence of a contrast increment during the maintenance delay in a VSTM task requiring maintenance of color and position. Increased VSTM load (manipulated by increased set size) led to reduced retinotopic visual cortex (V1-V3) responses to contrast as well as reduced detection sensitivity, as we predicted. Additional visual detection experiments established a clear tradeoff between the amount of information maintained in VSTM and detection sensitivity, while ruling out alternative accounts for the effects of VSTM load in terms of differential spatial allocation strategies or task difficulty. These findings extend Load Theory to demonstrate a new form of competitive interactions between early visual cortex processing and visual representations held in memory under load and provide a novel line of support for the sensory recruitment hypothesis of VSTM.

  10. Visual impairment in children and adolescents in Norway.

    Science.gov (United States)

    Haugen, Olav H; Bredrup, Cecilie; Rødahl, Eyvind

    2016-06-01

    BACKGROUND Due to failures in reporting and poor data security, the Norwegian Registry of Blindness was closed down in 1995. Since that time, no registration of visual impairment has taken place in Norway. All the other Nordic countries have registries for children and adolescents with visual impairment. The purpose of this study was to survey visual impairments and their causes in children and adolescents, and to assess the need for an ophthalmic registry. MATERIAL AND METHOD Data were collected via the county teaching centres for the visually impaired in the period from 2005 - 2010 on children and adolescents aged less than 20 years with impaired vision (n = 628). This was conducted as a point prevalence study as of 1 January 2004. Visual function, ophthalmological diagnosis, systemic diagnosis and additional functional impairments were recorded. RESULTS Approximately two-thirds of children and adolescents with visual impairment had reduced vision, while one-third were blind. The three largest diagnostic groups were neuro-ophthalmic diseases (37 %), retinal diseases (19 %) and conditions affecting the eyeball in general (14 %). The prevalence of additional functional impairments was high, at 53 %, most often in the form of motor problems or cognitive impairments. INTERPRETATION The results of the study correspond well with similar investigations in the other Nordic countries. Our study shows that the registries associated with teaching for the visually impaired are inadequate in terms of medical data, and this underlines the need for an ophthalmic registry of children and adolescents with visual impairment.

  11. The role of visual and direct force feedback in robotics-assisted mitral valve annuloplasty.

    Science.gov (United States)

    Currie, Maria E; Talasaz, Ali; Rayman, Reiza; Chu, Michael W A; Kiaii, Bob; Peters, Terry; Trejos, Ana Luisa; Patel, Rajni

    2017-09-01

    The objective of this work was to determine the effect of both direct force feedback and visual force feedback on the amount of force applied to mitral valve tissue during ex vivo robotics-assisted mitral valve annuloplasty. A force feedback-enabled master-slave surgical system was developed to provide both visual and direct force feedback during robotics-assisted cardiac surgery. This system measured the amount of force applied by novice and expert surgeons to cardiac tissue during ex vivo mitral valve annuloplasty repair. The addition of visual (2.16 ± 1.67), direct (1.62 ± 0.86), or both visual and direct force feedback (2.15 ± 1.08) resulted in lower mean maximum force applied to mitral valve tissue while suturing compared with no force feedback (3.34 ± 1.93 N; P < .05). To control forces on cardiac tissue during robotics-assisted mitral valve annuloplasty suturing, force feedback may be required. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Molecular simulations and visualization: introduction and overview.

    Science.gov (United States)

    Hirst, Jonathan D; Glowacki, David R; Baaden, Marc

    2014-01-01

    Here we provide an introduction and overview of current progress in the field of molecular simulation and visualization, touching on the following topics: (1) virtual and augmented reality for immersive molecular simulations; (2) advanced visualization and visual analytic techniques; (3) new developments in high performance computing; and (4) applications and model building.

  13. Visual Analytics for MOOC Data.

    Science.gov (United States)

    Qu, Huamin; Chen, Qing

    2015-01-01

    With the rise of massive open online courses (MOOCs), tens of millions of learners can now enroll in more than 1,000 courses via MOOC platforms such as Coursera and edX. As a result, a huge amount of data has been collected. Compared with traditional education records, the data from MOOCs has much finer granularity and also contains new pieces of information. It is the first time in history that such comprehensive data related to learning behavior has become available for analysis. What roles can visual analytics play in this MOOC movement? The authors survey the current practice and argue that MOOCs provide an opportunity for visualization researchers and that visual analytics systems for MOOCs can benefit a range of end users such as course instructors, education researchers, students, university administrators, and MOOC providers.

  14. 14 CFR 61.419 - How do I obtain privileges to provide training in an additional category or class of light-sport...

    Science.gov (United States)

    2010-01-01

    ... training in an additional category or class of light-sport aircraft? 61.419 Section 61.419 Aeronautics and...: PILOTS, FLIGHT INSTRUCTORS, AND GROUND INSTRUCTORS Flight Instructors With a Sport Pilot Rating § 61.419 How do I obtain privileges to provide training in an additional category or class of light-sport...

  15. Evaluation of Different Power of Near Addition in Two Different Multifocal Intraocular Lenses

    Directory of Open Access Journals (Sweden)

    Ugur Unsal

    2016-01-01

    Full Text Available Purpose. To compare near, intermediate, and distance vision and quality of vision when refractive rotational multifocal intraocular lenses with 3.0 diopters or diffractive multifocal intraocular lenses with 2.5 diopters near addition are implanted. Methods. 41 eyes of 41 patients in whom rotational +3.0 diopters near addition IOLs were implanted and 30 eyes of 30 patients in whom diffractive +2.5 diopters near addition IOLs were implanted after cataract surgery were reviewed. Uncorrected and corrected distance visual acuity, intermediate visual acuity, near visual acuity, and patient satisfaction were evaluated 6 months later. Results. The corrected and uncorrected distance visual acuity were the same between both groups (p = 0.50 and p = 0.509, resp.). The uncorrected intermediate and corrected intermediate and near vision acuities were better in the +2.5 near vision added intraocular lens implanted group (p = 0.049, p = 0.005, and p = 0.001, resp.), and the uncorrected near vision acuity was better in the +3.0 near vision added intraocular lens implanted group (p = 0.001). The patient satisfaction of both groups was similar. Conclusion. The +2.5 diopters near addition could be a better choice in younger patients with more distance and intermediate visual requirements (driving, outdoor activities), whereas the +3.0 diopters should be considered for patients who need more near vision correction (reading).

  16. How hearing aids, background noise, and visual cues influence objective listening effort.

    Science.gov (United States)

    Picou, Erin M; Ricketts, Todd A; Hornsby, Benjamin W Y

    2013-09-01

    variables were related to changes in listening effort resulting from the addition of background noise. The results of this study suggest that, on the average, hearing aids can reduce objectively measured listening effort. Furthermore, people who are slow verbal processors are more likely to derive hearing aid benefit for listening effort, perhaps because hearing aids improve the auditory input. Although background noise increased objective listening effort, no listener characteristic predicted susceptibility to noise. With regard to visual cues, while there was no effect on average of providing visual cues, there were some listener characteristics that were related to changes in listening effort with vision. Although these relationships are exploratory, they do suggest that these inherent listener characteristics like working memory capacity, verbal processing speed, and lipreading ability may influence susceptibility to changes in listening effort and thus warrant further study.

  17. The left visual-field advantage in rapid visual presentation is amplified rather than reduced by posterior-parietal rTMS

    DEFF Research Database (Denmark)

    Verleger, Rolf; Möller, Friderike; Kuniecki, Michal

    2010-01-01

    In the present task, series of visual stimuli are rapidly presented left and right, containing two target stimuli, T1 and T2. In previous studies, T2 was better identified in the left than in the right visual field. This advantage of the left visual field might reflect dominance exerted by the right over the left hemisphere. If so, then repetitive transcranial magnetic stimulation (rTMS) to the right parietal cortex might release the left hemisphere from right-hemispheric control, thereby improving T2 identification in the right visual field. Alternatively or additionally, the asymmetry in T2 ... either as effective or as sham stimulation. In two experiments, either one of these two factors, hemisphere and effectiveness of rTMS, was varied within or between participants. Again, T2 was much better identified in the left than in the right visual field. This advantage of the left visual field...

  18. Employing WebGL to develop interactive stereoscopic 3D content for use in biomedical visualization

    Science.gov (United States)

    Johnston, Semay; Renambot, Luc; Sauter, Daniel

    2013-03-01

    Web Graphics Library (WebGL), the forthcoming web standard for rendering native 3D graphics in a browser, represents an important addition to the biomedical visualization toolset. It is projected to become a mainstream method of delivering 3D online content due to shrinking support for third-party plug-ins. Additionally, it provides a virtual reality (VR) experience to web users accommodated by the growing availability of stereoscopic displays (3D TV, desktop, and mobile). WebGL's value in biomedical visualization has been demonstrated by applications for interactive anatomical models, chemical and molecular visualization, and web-based volume rendering. However, a lack of instructional literature specific to the field prevents many from utilizing this technology. This project defines a WebGL design methodology for a target audience of biomedical artists with a basic understanding of web languages and 3D graphics. The methodology was informed by the development of an interactive web application depicting the anatomy and various pathologies of the human eye. The application supports several modes of stereoscopic displays for a better understanding of 3D anatomical structures.

  19. Visual pigments of the box jellyfish species Chiropsella bronzie

    DEFF Research Database (Denmark)

    O'Connor, Megan; Garm, Anders Lydik; Marshall, Justin

    2010-01-01

    Box jellyfish (Cubomedusae) possess a unique visual system comprising 24 eyes of four morphological types. Moreover, box jellyfish display several visually guided behaviours, including obstacle avoidance and light-shaft attractance. It is largely unknown what kind of visual information box...... results strongly indicate that only one type of visual pigment is present in the upper and lower lens eyes with a peak absorbance of approximately 510 nm. Additionally, the visual pigment appears to undergo bleaching, similar to that of vertebrate visual pigments....

  20. Visualizing light with electrons

    Science.gov (United States)

    Fitzgerald, J. P. S.; Word, R. C.; Koenenkamp, R.

    2014-03-01

    In multiphoton photoemission electron microscopy (nP-PEEM), electrons are emitted from surfaces at a rate proportional to the surface electromagnetic field amplitude. We use 2P-PEEM to obtain nanometer-scale visualizations of diffracted and waveguided light fields around various microstructures. We use Fourier analysis to determine the phase and amplitude of surface fields in relation to the incident light from the interference patterns. To provide quick and intuitive simulations of surface fields, we employ two-dimensional Fresnel-Kirchhoff integration, a technique based on freely propagating waves and Huygens' principle. We find generally good agreement between simulations and experiment. Additionally, diffracted-wave simulations exhibit greater phase accuracy, indicating that these waves are well represented by a two-dimensional approximation. The authors gratefully acknowledge funding of this research by the US-DOE Basic Science Office under Contract DE-FG02-10ER46406.
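
    The Fourier step described above (reading the amplitude and phase of the surface field off the interference fringes) can be illustrated with a one-dimensional toy signal. The model below, in which the photoemission yield varies as |1 + A·exp(i(kx + φ))|², the 5 nm sampling, and the 320 nm fringe wavelength are all assumptions chosen so that an integer number of fringes fits the window; it does not reproduce the actual 2P-PEEM analysis.

```python
import numpy as np

# Toy interference pattern: yield ~ |E_incident + E_surface|^2, whose cross term
# oscillates as 2*A*cos(k*x + phi) along the surface.
n, dx = 1024, 5.0e-9                       # samples and spacing (5 nm), illustrative values
x = np.arange(n) * dx
wavelength = 320e-9                        # assumed fringe wavelength; 16 periods fit exactly
k = 2 * np.pi / wavelength
amp_true, phase_true = 0.3, 0.7            # relative surface-field amplitude and phase (rad)
yield_map = np.abs(1 + amp_true * np.exp(1j * (k * x + phase_true))) ** 2

# Fourier analysis: subtract the constant background, locate the fringe frequency,
# and read the amplitude and phase off the complex coefficient at that bin.
spectrum = np.fft.rfft(yield_map - yield_map.mean())
freqs = np.fft.rfftfreq(n, d=dx)
peak = np.argmax(np.abs(spectrum))
amp_est = np.abs(spectrum[peak]) / n       # cross term 2*A*cos(...) gives bin magnitude ~ A*n
phase_est = np.angle(spectrum[peak])

print(f"fringe wavelength ~ {1 / freqs[peak] * 1e9:.0f} nm")
print(f"recovered amplitude {amp_est:.3f} (true {amp_true}), phase {phase_est:.3f} rad (true {phase_true})")
```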

  1. Vision In Stroke cohort: Profile overview of visual impairment.

    Science.gov (United States)

    Rowe, Fiona J

    2017-11-01

    To profile the full range of visual disorders from a large prospective observation study of stroke survivors referred by stroke multidisciplinary teams to orthoptic services with suspected visual problems. Multicenter prospective study undertaken in 20 acute Trust hospitals. Standardized screening/referral forms and investigation forms documented data on referral signs and symptoms plus type and extent of visual impairment. Of 1,345 patients referred with suspected visual impairment, 915 were recruited (59% men; mean age at stroke onset 69 years [SD 14]). Initial visual assessment was at median 22 days post stroke onset. Eight percent had normal visual assessment. Of 92% with confirmed visual impairment, 24% had reduced central visual acuity. Visual field loss was present in 52%, most commonly homonymous hemianopia. Fifteen percent had visual inattention and 4.6% had other visual perceptual disorders. Overall 84% were visually symptomatic with visual field loss the most common complaint followed by blurred vision, reading difficulty, and diplopia. Treatment options were provided to all with confirmed visual impairment. Targeted advice was most commonly provided along with refraction, prisms, and occlusion. There is a wide range of visual disorders that occur following stroke, frequently with visual symptoms. There is equally a wide variety of treatment options available for these individuals. All stroke survivors require screening for visual impairment and warrant referral for specialist assessment and targeted treatment specific to the type of visual impairment.

  2. Visual art and visual perception

    NARCIS (Netherlands)

    Koenderink, Jan J.

    2015-01-01

    Visual art and visual perception ‘Visual art’ has become a minor cul-de-sac orthogonal to THE ART of the museum directors and billionaire collectors. THE ART is conceptual, instead of visual. Among its cherished items are the tins of artist’s shit (Piero Manzoni, 1961, Merda d’Artista) “worth their

  3. The GEANT4 Visualization System

    International Nuclear Information System (INIS)

    Allison, J

    2007-01-01

    The Geant4 Visualization System is a multi-driver graphics system designed to serve the Geant4 Simulation Toolkit. It is aimed at the visualization of Geant4 data, primarily detector descriptions and simulated particle trajectories and hits. It can handle a variety of graphical technologies simultaneously and interchangeably, allowing the user to choose the visual representation most appropriate to requirements. It conforms to the low-level Geant4 abstract graphical user interfaces and introduces new abstract classes from which the various drivers are derived and that can be straightforwardly extended, for example, by the addition of a new driver. It makes use of an extendable class library of models and filters for data representation and selection. The Geant4 Visualization System supports a rich set of interactive commands based on the Geant4 command system. It is included in the Geant4 code distribution and maintained and documented like other components of Geant4

  4. BioJS: an open source JavaScript framework for biological data visualization.

    Science.gov (United States)

    Gómez, John; García, Leyla J; Salazar, Gustavo A; Villaveces, Jose; Gore, Swanand; García, Alexander; Martín, Maria J; Launay, Guillaume; Alcántara, Rafael; Del-Toro, Noemi; Dumousseau, Marine; Orchard, Sandra; Velankar, Sameer; Hermjakob, Henning; Zong, Chenggong; Ping, Peipei; Corpas, Manuel; Jiménez, Rafael C

    2013-04-15

    BioJS is an open-source project whose main objective is the visualization of biological data in JavaScript. BioJS provides an easy-to-use consistent framework for bioinformatics application programmers. It follows a community-driven standard specification that includes a collection of components purposely designed to require a very simple configuration and installation. In addition to the programming framework, BioJS provides a centralized repository of components available for reutilization by the bioinformatics community. http://code.google.com/p/biojs/. Supplementary data are available at Bioinformatics online.

  5. Use of 50% Dextrose as the Distension Medium During Cystoscopy for Visualization of Ureteric Jets.

    Science.gov (United States)

    Narasimhulu, Deepa M; Prabakar, Cheruba; Tang, Nancy; Bral, Pedram

    2016-01-01

    Indigotindisulfonate sodium has been used to color the urine and thereby improve the visualization of ureteric jets during intraoperative cystoscopy. After indigotindisulfonate sodium became unavailable, there has been an ongoing search for an alternate agent to improve visualization of the jets. We used 50% dextrose, which is more viscous than urine, as the distension medium during cystoscopy so that the ureteric efflux is seen as a jet of contrasting viscosity. We instilled 100 mL of 50% dextrose into the bladder through an indwelling catheter, which is then removed and cystoscopy is performed as usual. We observed jets of contrasting viscosity in every patient in whom 50% dextrose was used as compared with coloring agents in which the jet is not always colored at the time of cystoscopy. Visualization of the other structures in the bladder and the bladder wall itself is not altered by 50% dextrose, although the volume of 50% dextrose that we typically use may not provide adequate distension for a complete assessment of the bladder. If additional distension is necessary, normal saline may be used in addition to the 50% dextrose once the ureteric jets have been assessed. Fifty percent dextrose is an effective alternative to indigotindisulfonate sodium for visualization of ureteric jets during cystoscopy.

  6. VarB Plus: An Integrated Tool for Visualization of Genome Variation Datasets

    KAUST Repository

    Hidayah, Lailatul

    2012-07-01

    Research on genomic sequences has been improving significantly as more advanced technology for sequencing has been developed. This opens enormous opportunities for sequence analysis. Various analytical tools have been built for purposes such as sequence assembly, read alignments, genome browsing, comparative genomics, and visualization. From the visualization perspective, there is an increasing trend towards use of large-scale computation. However, more than power is required to produce an informative image. This is a challenge that we address by providing several ways of representing biological data in order to advance the inference endeavors of biologists. This thesis focuses on visualization of variations found in genomic sequences. We develop several visualization functions and embed them in an existing variation visualization tool as extensions. The tool we improved is named VarB, hence the nomenclature for our enhancement is VarB Plus. To the best of our knowledge, besides VarB, there is no tool that provides the capability of dynamic visualization of genome variation datasets as well as statistical analysis. Dynamic visualization allows users to toggle different parameters on and off and see the results on the fly. The statistical analysis includes Fixation Index, Relative Variant Density, and Tajima’s D. Hence we focused our efforts on this tool. The scope of our work includes plots of per-base genome coverage, Principal Coordinate Analysis (PCoA), integration with a read alignment viewer named LookSeq, and visualization of geo-biological data. In addition to description of embedded functionalities, significance, and limitations, future improvements are discussed. The result is four extensions embedded successfully in the original tool, which is built on the Qt framework in C++. Hence it is portable to numerous platforms. Our extensions have shown acceptable execution time in a beta testing with various high-volume published datasets, as well as positive
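
    The statistics named above (Fixation Index, Relative Variant Density, Tajima's D) are standard population-genetic summaries, so the underlying idea can be sketched independently of the tool. The short Python sketch below is our own illustration, not VarB Plus code: the function names, window size, toy variant positions and allele frequencies are all assumptions.

      # Minimal sketch (not VarB Plus code): per-window relative variant density
      # and a Hudson-style fixation index (Fst) for one biallelic site.
      import numpy as np

      def relative_variant_density(positions, seq_length, window=10_000):
          """Variants per window, normalized by the genome-wide mean density."""
          edges = np.arange(0, seq_length + window, window)
          counts, _ = np.histogram(positions, bins=edges)
          mean_per_window = len(positions) / (seq_length / window)
          return counts / mean_per_window

      def hudson_fst(p1, p2, n1, n2):
          """Hudson's Fst estimator from two sample allele frequencies."""
          num = (p1 - p2) ** 2 - p1 * (1 - p1) / (n1 - 1) - p2 * (1 - p2) / (n2 - 1)
          den = p1 * (1 - p2) + p2 * (1 - p1)
          return num / den

      positions = np.array([1200, 3400, 8800, 15300, 15900, 40100])  # toy variant sites
      print(relative_variant_density(positions, seq_length=50_000))
      print(hudson_fst(p1=0.8, p2=0.2, n1=30, n2=30))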

  7. Realistic tissue visualization using photoacoustic image

    Science.gov (United States)

    Cho, Seonghee; Managuli, Ravi; Jeon, Seungwan; Kim, Jeesu; Kim, Chulhong

    2018-02-01

    Visualization methods are very important in biomedical imaging. As a technology that helps us understand life, biomedical imaging has the unique advantage of providing the most intuitive information in the image. This advantage can be greatly improved by choosing a suitable visualization method. This is more complicated for volumetric data. Volume data have the advantage of containing 3D spatial information. Unfortunately, the data themselves cannot directly convey their potential value. Because images are always displayed in 2D space, visualization is the key and creates the real value of volume data. However, image processing of 3D data requires complicated algorithms for visualization and carries a high computational burden. Therefore, specialized algorithms and computational optimization are important issues for volume data. Photoacoustic imaging is a unique imaging modality that can visualize the optical properties of deep tissue. Because the color of an organism is mainly determined by its light-absorbing components, photoacoustic data can provide color information of tissue that is closer to the real tissue color. In this research, we developed realistic tissue visualization using acoustic-resolution photoacoustic volume data. To achieve realistic visualization, we designed a specialized color transfer function that depends on the depth of the tissue below the skin. We used a direct ray casting method and processed color while computing the shader parameters. In the rendering results, we succeeded in obtaining similar texture results from photoacoustic data. The surface-reflected rays were visualized in white, and the reflected color from the deep tissue was visualized in red, like skin tissue. We also implemented the algorithm in CUDA within an OpenGL environment for real-time interactive imaging.
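
    As a rough illustration of the depth-dependent color transfer function described above (the authors' implementation is CUDA shader code inside an OpenGL renderer, which is not reproduced here), the Python sketch below maps sample intensity and depth below the skin to RGBA and composites ray samples front to back; all constants are invented for illustration.

      # Illustrative sketch only: shallow, strongly reflecting samples map toward
      # white, deeper samples shift toward red, mimicking the skin-like look
      # described in the abstract.
      import numpy as np

      def color_transfer(intensity, depth_mm, max_depth_mm=10.0):
          """Map normalized intensity and depth below the skin to an RGBA tuple."""
          d = np.clip(depth_mm / max_depth_mm, 0.0, 1.0)
          r = intensity                        # red follows signal strength
          g = intensity * (1.0 - 0.8 * d)      # green and blue fade with depth,
          b = intensity * (1.0 - 0.9 * d)      # so deep tissue appears redder
          a = intensity * (0.2 + 0.8 * d)      # deeper samples add more opacity
          return np.stack([r, g, b, a], axis=-1)

      def composite(samples_rgba):
          """Front-to-back compositing along one ray (emission-absorption model)."""
          color, alpha = np.zeros(3), 0.0
          for r, g, b, a in samples_rgba:
              color += (1.0 - alpha) * a * np.array([r, g, b])
              alpha += (1.0 - alpha) * a
              if alpha > 0.99:
                  break
          return color, alpha

      samples = [color_transfer(i, d) for i, d in [(0.9, 0.5), (0.6, 3.0), (0.4, 7.0)]]
      print(composite(samples))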

  8. Interactive Design and Visualization of Branched Covering Spaces.

    Science.gov (United States)

    Roy, Lawrence; Kumar, Prashant; Golbabaei, Sanaz; Zhang, Yue; Zhang, Eugene

    2018-01-01

    Branched covering spaces are a mathematical concept which originates from complex analysis and topology and has applications in tensor field topology and geometry remeshing. Given a manifold surface and an N-way rotational symmetry field, a branched covering space is a manifold surface that has an N-to-1 map to the original surface except at the ramification points, which correspond to the singularities in the rotational symmetry field. Understanding the notion and mathematical properties of branched covering spaces is important to researchers in tensor field visualization and geometry processing, and their application areas. In this paper, we provide a framework to interactively design and visualize the branched covering space (BCS) of an input mesh surface and a rotational symmetry field defined on it. In our framework, the user can visualize not only the BCSs but also their construction process. In addition, our system allows the user to design the geometric realization of the BCS using mesh deformation techniques as well as connecting tubes. This enables the user to verify important facts about BCSs such as that they are manifold surfaces around singularities, as well as the Riemann-Hurwitz formula which relates the Euler characteristic of the BCS to that of the original mesh. Our system is evaluated by student researchers in scientific visualization and geometry processing as well as faculty members in mathematics at our university who teach topology. We include their evaluations and feedback in the paper.
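
    For reference, the Riemann-Hurwitz relation mentioned above can be written out explicitly (our notation; the record itself does not give the formula). For an N-fold branched covering space \tilde{S} of a surface S whose ramification points p have ramification indices e_p:

      \[ \chi(\tilde{S}) \;=\; N\,\chi(S) \;-\; \sum_{p} \left( e_p - 1 \right) \]

    Counting the vertices, edges and faces of the rendered BCS and comparing the resulting Euler characteristic against this value is the kind of consistency check the system is said to let users perform.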

  9. A new web-based tool for data visualization in MDSplus

    Energy Technology Data Exchange (ETDEWEB)

    Manduchi, G., E-mail: gabriele.manduchi@igi.cnr.it [Consorzio RFX, Euratom-ENEA Association, Corso Stati Uniti 4, Padova 35127 (Italy); Fredian, T.; Stillerman, J. [Massachusetts Institute of Technology, 175 Albany Street, Cambridge, MA 02139 (United States)

    2014-05-15

    Highlights: • The paper describes a new web-based data visualization tool for MDSplus. • It describes the experience gained with the previous data visualization tools. • It describes the used technologies for web data access and visualization. • It describes the current architecture of the tool and the new foreseen features. - Abstract: The Java tool jScope has been widely used for years to display acquired waveform in MDSplus. The choice of the Java programming language for its implementation has been successful for several reasons among which the fact that Java supports a multiplatform environment and it is well suited for graphics and the management of network communication. jScope can be used both as a local and remote application. In the latter case, data are acquired via TCP/IP communication using the mdsip protocol. Exporting data in this way however introduces several security problems due to the necessity of opening firewall holes for the user ports. For this reason, and also due to the fact that JavaScript is becoming a widely used language for web applications, a new tool written in JavaScript and called WebScope has been developed for the visualization of MDSplus data in web browsers. Data communication is now achieved via http protocol using Asynchronous JavaScript and XML (AJAX) technology. At the server side, data access is carried out by a Python module that interacts with the web server via Web Server Gateway Interface (WSGI). When a data item, described by an MDSplus expression, is requested by the web browser for visualization, it is returned as a binary message and then handled by callback JavaScript functions activated by the web browser. Scalable Vector Graphics (SVG) technology is used to handle graphics within the web browser and to carry out the same interactive data visualization provided by jScope. In addition to mouse events, touch events are supported to provide interactivity also on touch screens. In this way, waveforms can be

  10. A new web-based tool for data visualization in MDSplus

    International Nuclear Information System (INIS)

    Manduchi, G.; Fredian, T.; Stillerman, J.

    2014-01-01

    Highlights: • The paper describes a new web-based data visualization tool for MDSplus. • It describes the experience gained with the previous data visualization tools. • It describes the used technologies for web data access and visualization. • It describes the current architecture of the tool and the new foreseen features. - Abstract: The Java tool jScope has been widely used for years to display acquired waveform in MDSplus. The choice of the Java programming language for its implementation has been successful for several reasons among which the fact that Java supports a multiplatform environment and it is well suited for graphics and the management of network communication. jScope can be used both as a local and remote application. In the latter case, data are acquired via TCP/IP communication using the mdsip protocol. Exporting data in this way however introduces several security problems due to the necessity of opening firewall holes for the user ports. For this reason, and also due to the fact that JavaScript is becoming a widely used language for web applications, a new tool written in JavaScript and called WebScope has been developed for the visualization of MDSplus data in web browsers. Data communication is now achieved via http protocol using Asynchronous JavaScript and XML (AJAX) technology. At the server side, data access is carried out by a Python module that interacts with the web server via Web Server Gateway Interface (WSGI). When a data item, described by an MDSplus expression, is requested by the web browser for visualization, it is returned as a binary message and then handled by callback JavaScript functions activated by the web browser. Scalable Vector Graphics (SVG) technology is used to handle graphics within the web browser and to carry out the same interactive data visualization provided by jScope. In addition to mouse events, touch events are supported to provide interactivity also on touch screens. In this way, waveforms can be
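
    The server-side pattern described in these two records (a Python module behind WSGI that receives an expression from the browser's AJAX code and returns the signal as a binary message) can be sketched roughly as follows. This is our own minimal illustration, not WebScope code: the query layout and the stubbed data source are assumptions, and no real MDSplus expression evaluation is performed.

      # Minimal WSGI sketch of the pattern described above (not WebScope itself).
      import struct
      from urllib.parse import parse_qs

      def evaluate_expression(expr):          # stand-in for real MDSplus data access
          return [0.0, 1.0, 4.0, 9.0, 16.0]   # hypothetical waveform samples

      def application(environ, start_response):
          query = parse_qs(environ.get('QUERY_STRING', ''))
          expr = query.get('expr', [''])[0]
          samples = evaluate_expression(expr)
          # Pack the waveform as a little-endian sample count followed by float64 values.
          payload = struct.pack('<I', len(samples)) + struct.pack(f'<{len(samples)}d', *samples)
          start_response('200 OK', [('Content-Type', 'application/octet-stream'),
                                    ('Content-Length', str(len(payload)))])
          return [payload]

      if __name__ == '__main__':              # local test server, e.g. /?expr=my_signal
          from wsgiref.simple_server import make_server
          make_server('', 8000, application).serve_forever()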

  11. Task-dependent engagements of the primary visual cortex during kinesthetic and visual motor imagery.

    Science.gov (United States)

    Mizuguchi, Nobuaki; Nakamura, Maiko; Kanosue, Kazuyuki

    2017-01-01

    Motor imagery can be divided into kinesthetic and visual aspects. In the present study, we investigated excitability in the corticospinal tract and primary visual cortex (V1) during kinesthetic and visual motor imagery. To accomplish this, we measured motor evoked potentials (MEPs) and probability of phosphene occurrence during the two types of motor imageries of finger tapping. The MEPs and phosphenes were induced by transcranial magnetic stimulation to the primary motor cortex and V1, respectively. The amplitudes of MEPs and probability of phosphene occurrence during motor imagery were normalized based on the values obtained at rest. Corticospinal excitability increased during both kinesthetic and visual motor imagery, while excitability in V1 was increased only during visual motor imagery. These results imply that modulation of cortical excitability during kinesthetic and visual motor imagery is task dependent. The present finding aids in the understanding of the neural mechanisms underlying motor imagery and provides useful information for the use of motor imagery in rehabilitation or motor imagery training. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  12. Genetic parameter estimates for carcass traits and visual scores including or not genomic information.

    Science.gov (United States)

    Gordo, D G M; Espigolan, R; Tonussi, R L; Júnior, G A F; Bresolin, T; Magalhães, A F Braga; Feitosa, F L; Baldi, F; Carvalheiro, R; Tonhati, H; de Oliveira, H N; Chardulo, L A L; de Albuquerque, L G

    2016-05-01

    CS, FP, and MS could be used as selection criteria to improve HCW, BF, and LMA. The use of genomic information permitted the detection of greater additive genetic variability for LMA and BF. For HCW, the high magnitude of the genetic correlations with visual scores was probably sufficient to recover genetic variability. The methods provided similar breeding value accuracies, especially for the visual scores.

  13. Web GIS in practice IX: a demonstration of geospatial visual analytics using Microsoft Live Labs Pivot technology and WHO mortality data.

    Science.gov (United States)

    Kamel Boulos, Maged N; Viangteeravat, Teeradache; Anyanwu, Matthew N; Ra Nagisetty, Venkateswara; Kuscu, Emin

    2011-03-16

    The goal of visual analytics is to facilitate the discourse between the user and the data by providing dynamic displays and versatile visual interaction opportunities with the data that can support analytical reasoning and the exploration of data from multiple user-customisable aspects. This paper introduces geospatial visual analytics, a specialised subtype of visual analytics, and provides pointers to a number of learning resources about the subject, as well as some examples of human health, surveillance, emergency management and epidemiology-related geospatial visual analytics applications and examples of free software tools that readers can experiment with, such as Google Public Data Explorer. The authors also present a practical demonstration of geospatial visual analytics using partial data for 35 countries from a publicly available World Health Organization (WHO) mortality dataset and Microsoft Live Labs Pivot technology, a free, general purpose visual analytics tool that offers a fresh way to visually browse and arrange massive amounts of data and images online and also supports geographic and temporal classifications of datasets featuring geospatial and temporal components. Interested readers can download a Zip archive (included with the manuscript as an additional file) containing all files, modules and library functions used to deploy the WHO mortality data Pivot collection described in this paper.

  14. Visual pathway impairment by pituitary adenomas: quantitative diagnostics by diffusion tensor imaging.

    Science.gov (United States)

    Lilja, Ylva; Gustafsson, Oscar; Ljungberg, Maria; Starck, Göran; Lindblom, Bertil; Skoglund, Thomas; Bergquist, Henrik; Jakobsson, Karl-Erik; Nilsson, Daniel

    2017-09-01

    OBJECTIVE Despite ample experience in surgical treatment of pituitary adenomas, little is known about objective indices that may reveal risk of visual impairment caused by tumor growth that leads to compression of the anterior visual pathways. This study aimed to explore diffusion tensor imaging (DTI) as a means for objective assessment of injury to the anterior visual pathways caused by pituitary adenomas. METHODS Twenty-three patients with pituitary adenomas, scheduled for transsphenoidal tumor resection, and 20 healthy control subjects were included in the study. A minimum suprasellar tumor extension of Grade 2-4, according to the SIPAP (suprasellar, infrasellar, parasellar, anterior, and posterior) scale, was required for inclusion. Neuroophthalmological examinations, conventional MRI, and DTI were completed in all subjects and were repeated 6 months after surgery. Quantitative assessment of chiasmal lift, visual field defect (VFD), and DTI parameters from the optic tracts was performed. Linear correlations, group comparisons, and prediction models were done in controls and patients. RESULTS Both the degree of VFD and chiasmal lift were significantly correlated with the radial diffusivity (r = 0.55, p visual pathways that were compressed by pituitary adenomas. The correlation between radial diffusivity and visual impairment may reflect a gradual demyelination in the visual pathways caused by an increased tumor effect. The low level of axial diffusivity found in the patient group may represent early atrophy in the visual pathways, detectable on DTI but not by conventional methods. DTI may provide objective data, detect early signs of injury, and be an additional diagnostic tool for determining indication for surgery in cases of pituitary adenomas.

  15. The Puzzle of Visual Development: Behavior and Neural Limits.

    Science.gov (United States)

    Kiorpes, Lynne

    2016-11-09

    The development of visual function takes place over many months or years in primate infants. Visual sensitivity is very poor near birth and improves over different time courses for different visual functions. The neural mechanisms that underlie these processes are not well understood despite many decades of research. The puzzle arises because research into the factors that limit visual function in infants has found surprisingly mature neural organization and adult-like receptive field properties in very young infants. The high degree of visual plasticity that has been documented during the sensitive period in young children and animals leaves the brain vulnerable to abnormal visual experience. Abnormal visual experience during the sensitive period can lead to amblyopia, a developmental disorder of vision affecting ∼3% of children. This review provides a historical perspective on research into visual development and the disorder amblyopia. The mismatch between the status of the primary visual cortex and visual behavior, both during visual development and in amblyopia, is discussed, and several potential resolutions are considered. It seems likely that extrastriate visual areas further along the visual pathways may set important limits on visual function and show greater vulnerability to abnormal visual experience. Analyses based on multiunit, population activity may provide useful representations of the information being fed forward from primary visual cortex to extrastriate processing areas and to the motor output. Copyright © 2016 the authors 0270-6474/16/3611384-10$15.00/0.

  16. Electron microscopy approach for the visualization of the epithelial and endothelial glycocalyx.

    Science.gov (United States)

    Chevalier, L; Selim, J; Genty, D; Baste, J M; Piton, N; Boukhalfa, I; Hamzaoui, M; Pareige, P; Richard, V

    2017-06-01

    This study presents a methodological approach for the visualization of the glycocalyx by electron microscopy. The glycocalyx is a three-dimensional network, mainly composed of glycolipids, glycoproteins and proteoglycans, associated with the plasma membrane. Over the past decade, the epithelial and endothelial glycocalyx has been shown to play an important role in physiology and pathology, increasing research interest, especially in vascular function. Visualization of the glycocalyx therefore requires reliable techniques, and its preservation remains challenging due to its fragile and dynamic organization, which is highly sensitive to the different processing steps of electron microscopy sample preparation. In this study, chemical fixation was performed by perfusion as a good alternative to conventional fixation. Adding lanthanum nitrate to the fixative enhances staining of the glycocalyx in bright-field transmission electron microscopy and improves its visualization by detecting the elastically scattered electrons, thus providing chemical contrast. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  17. New software for neutron data reduction and visualization

    International Nuclear Information System (INIS)

    Worlton, T.; Chatterjee, A.; Hammonds, J.; Chen, D.; Loong, C.K.; Mikkelson, D.; Mikkelson, R.

    2001-01-01

    Development of advanced neutron sources and instruments has necessitated corresponding advances in software for neutron scattering data reduction and visualization. New sources produce datasets more rapidly, and new instruments produce large numbers of spectra. Because of the shorter collection times, users are able to make more measurements on a given sample. This rapid production of datasets requires that users be able to reduce and analyze data quickly to prevent a data bottleneck. In addition, the new sources and instruments are accommodating more users with less neutron-scattering specific expertise, which requires software that is easy to use and freely available. We have developed an Integrated Spectral Analysis Workbench (ISAW) software package to permit the rapid reduction and visualization of neutron data. It can handle large numbers of spectra and merge data from separate measurements. The data can be sorted according to any attribute and transformed in numerous ways. ISAW provides several views of the data that enable users to compare spectra and observe trends in the data. A command interpreter, which is now part of ISAW, allows scientists to easily set up a series of instrument-specific operations to reduce and visualize data automatically. ISAW is written entirely in Java to permit portability to different computer platforms and easy distribution of the software. The software was constructed using modern computer design methods to allow easy customization and improvement. ISAW currently only reads data from IPNS 'run' files, but work is underway to provide input of NeXus files. (author)

  18. New software for neutron data reduction and visualization

    Energy Technology Data Exchange (ETDEWEB)

    Worlton, T.; Chatterjee, A.; Hammonds, J.; Chen, D.; Loong, C.K. [Argonne National Laboratory, Argonne, IL (United States); Mikkelson, D.; Mikkelson, R. [Univ. of Wisconsin-Stout, Menomonie, WI (United States)

    2001-03-01

    Development of advanced neutron sources and instruments has necessitated corresponding advances in software for neutron scattering data reduction and visualization. New sources produce datasets more rapidly, and new instruments produce large numbers of spectra. Because of the shorter collection times, users are able to make more measurements on a given sample. This rapid production of datasets requires that users be able to reduce and analyze data quickly to prevent a data bottleneck. In addition, the new sources and instruments are accommodating more users with less neutron-scattering specific expertise, which requires software that is easy to use and freely available. We have developed an Integrated Spectral Analysis Workbench (ISAW) software package to permit the rapid reduction and visualization of neutron data. It can handle large numbers of spectra and merge data from separate measurements. The data can be sorted according to any attribute and transformed in numerous ways. ISAW provides several views of the data that enable users to compare spectra and observe trends in the data. A command interpreter, which is now part of ISAW, allows scientists to easily set up a series of instrument-specific operations to reduce and visualize data automatically. ISAW is written entirely in Java to permit portability to different computer platforms and easy distribution of the software. The software was constructed using modern computer design methods to allow easy customization and improvement. ISAW currently only reads data from IPNS 'run' files, but work is underway to provide input of NeXus files. (author)
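
    The reduce-merge-sort workflow described in these records can be outlined with a short sketch. ISAW itself is written in Java and drives instrument-specific operators through its own command interpreter; the Python below is only a generic, hypothetical outline of the same idea (normalize each spectrum, merge separate measurements, then sort by an attribute before visualization).

      # Generic illustration of a scripted reduction pipeline (not ISAW code).
      from dataclasses import dataclass, field

      @dataclass
      class Spectrum:
          detector_angle: float                 # attribute used for sorting
          monitor_counts: float
          counts: list = field(default_factory=list)

      def normalize(spec):
          """Scale counts by the monitor so separate measurements are comparable."""
          return Spectrum(spec.detector_angle, 1.0,
                          [c / spec.monitor_counts for c in spec.counts])

      def reduce_runs(runs):
          merged = []
          for run in runs:                      # merge data from separate measurements
              merged += [normalize(s) for s in run]
          return sorted(merged, key=lambda s: s.detector_angle)

      run1 = [Spectrum(30.0, 2.0e5, [10, 12, 9]), Spectrum(60.0, 2.0e5, [4, 5, 6])]
      run2 = [Spectrum(45.0, 1.5e5, [7, 8, 7])]
      for s in reduce_runs([run1, run2]):
          print(s.detector_angle, s.counts)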

  19. Visual agnosia and focal brain injury.

    Science.gov (United States)

    Martinaud, O

    Visual agnosia encompasses all disorders of visual recognition within a selective visual modality not due to an impairment of elementary visual processing or other cognitive deficit. Based on a sequential dichotomy between the perceptual and memory systems, two different categories of visual object agnosia are usually considered: 'apperceptive agnosia' and 'associative agnosia'. Impaired visual recognition within a single category of stimuli is also reported in: (i) visual object agnosia of the ventral pathway, such as prosopagnosia (for faces), pure alexia (for words), or topographagnosia (for landmarks); (ii) visual spatial agnosia of the dorsal pathway, such as cerebral akinetopsia (for movement), or orientation agnosia (for the placement of objects in space). Focal brain injuries provide a unique opportunity to better understand regional brain function, particularly with the use of effective statistical approaches such as voxel-based lesion-symptom mapping (VLSM). The aim of the present work was twofold: (i) to review the various agnosia categories according to the traditional visual dual-pathway model; and (ii) to better assess the anatomical network underlying visual recognition through lesion-mapping studies correlating neuroanatomical and clinical outcomes. Copyright © 2017 Elsevier Masson SAS. All rights reserved.

  20. Arctic Research Mapping Application (ARMAP): visualize project-level information for U.S. funded research in the Arctic

    Science.gov (United States)

    Kassin, A.; Cody, R. P.; Barba, M.; Escarzaga, S. M.; Score, R.; Dover, M.; Gaylord, A. G.; Manley, W. F.; Habermann, T.; Tweedie, C. E.

    2015-12-01

    The Arctic Research Mapping Application (ARMAP; http://armap.org/) is a suite of online applications and data services that support Arctic science by providing project tracking information (who's doing what, when and where in the region) for United States Government funded projects. In collaboration with 17 research agencies, project locations are displayed in a visually enhanced web mapping application. Key information about each project is presented along with links to web pages that provide additional information. The mapping application includes new reference data layers and an updated ship tracks layer. Visual enhancements are achieved by redeveloping the front-end from FLEX to HTML5 and JavaScript, which now provide access to mobile users utilizing tablets and cell phone devices. New tools have been added that allow users to navigate, select, draw, measure, print, use a time slider, and more. Other module additions include a back-end Apache SOLR search platform that provides users with the capability to perform advance searches throughout the ARMAP database. Furthermore, a new query builder interface has been developed in order to provide more intuitive controls to generate complex queries. These improvements have been made to increase awareness of projects funded by numerous entities in the Arctic, enhance coordination for logistics support, help identify geographic gaps in research efforts and potentially foster more collaboration amongst researchers working in the region. Additionally, ARMAP can be used to demonstrate past, present, and future research efforts supported by the U.S. Government.
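
    Because the back end described above is Apache Solr, an advanced search can be expressed with ordinary Solr request parameters. The example below is hypothetical: the host, core name and field names are invented for illustration, while q, fq, rows and wt are standard Solr parameters.

      # Hypothetical query against a Solr core like the one behind ARMAP's search.
      import requests

      params = {
          "q": "discipline:oceanography AND funding_agency:NSF",   # invented field names
          "fq": "start_year:[2010 TO 2015]",
          "rows": 20,
          "wt": "json",
      }
      resp = requests.get("https://example.org/solr/armap/select", params=params, timeout=10)
      resp.raise_for_status()
      for doc in resp.json()["response"]["docs"]:
          print(doc.get("title"), doc.get("latitude"), doc.get("longitude"))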

  1. Trifocal intraocular lenses: a comparison of the visual performance and quality of vision provided by two different lens designs

    Directory of Open Access Journals (Sweden)

    Gundersen KG

    2017-06-01

    Kjell G Gundersen,1 Rick Potvin2 1IFocus Øyeklinikk AS, Haugesund, Norway; 2Science in Vision, Akron, NY, USA Purpose: To compare two different diffractive trifocal intraocular lens (IOL) designs, evaluating longer-term refractive outcomes, visual acuity (VA) at various distances, low contrast VA and quality of vision. Patients and methods: Patients with binocularly implanted trifocal IOLs of two different designs (FineVision [FV] and Panoptix [PX]) were evaluated 6 months to 2 years after surgery. Best distance-corrected and uncorrected VA were tested at distance (4 m), intermediate (80 and 60 cm) and near (40 cm). A binocular defocus curve was collected with the subject’s best distance correction in place. The preferred reading distance was determined along with the VA at that distance. Low contrast VA at distance was also measured. Quality of vision was measured with the National Eye Institute Visual Function Questionnaire near subset and the Quality of Vision questionnaire. Results: Thirty subjects in each group were successfully recruited. The binocular defocus curves differed only at vergences of −1.0 D (FV better, P=0.02), −1.5 and −2.00 D (PX better, P<0.01 for both). Best distance-corrected and uncorrected binocular vision were significantly better for the PX lens at 60 cm (P<0.01), with no significant differences at other distances. The preferred reading distance was between 42 and 43 cm for both lenses, with the VA at the preferred reading distance slightly better with the PX lens (P=0.04). There were no statistically significant differences by lens for low contrast VA (P=0.1) or for quality of vision measures (P>0.3). Conclusion: Both trifocal lenses provided excellent distance, intermediate and near vision, but several measures indicated that the PX lens provided better intermediate vision at 60 cm. This may be important to users of tablets and other handheld devices. Quality of vision appeared similar between the two lens designs.

  2. Visual Control for Multirobot Organized Rendezvous.

    Science.gov (United States)

    Lopez-Nicolas, G; Aranda, M; Mezouar, Y; Sagues, C

    2012-08-01

    This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion considering nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on visual information given by the flying camera. The desired multirobot configuration is defined with an image of the set of robots in that configuration without any additional information. We propose a homography-based framework relying on the homography induced by the multirobot system that gives a desired homography to be used to define the reference target, and a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work, and the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.
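
    The geometric ingredient of this approach, a homography relating the robots' current image positions to their positions in the reference (desired-configuration) image, can be estimated with OpenCV as sketched below. The point coordinates are invented, and the paper's image-based control law and rigidity constraint are not reproduced; the sketch only shows the homography estimation step and a naive error signal.

      # Sketch: estimate the homography between current and desired robot positions
      # in the image. At least four point correspondences are required.
      import cv2
      import numpy as np

      current_px = np.array([[120, 340], [400, 330], [260, 150], [270, 480]], dtype=np.float32)
      desired_px = np.array([[100, 300], [380, 300], [240, 120], [250, 450]], dtype=np.float32)

      H, _ = cv2.findHomography(current_px, desired_px)
      print("Estimated homography:\n", H)

      # A homography-based controller would derive its error signal from H (for
      # example its deviation from the identity) and map it to wheel commands
      # under the robots' nonholonomic constraints.
      print("Deviation from identity:\n", H - np.eye(3))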

  3. Car Gestures - Advisory warning using additional steering wheel angles.

    Science.gov (United States)

    Maag, Christian; Schneider, Norbert; Lübbeke, Thomas; Weisswange, Thomas H; Goerick, Christian

    2015-10-01

    Advisory warning systems (AWS) notify the driver about upcoming hazards. This is in contrast to the majority of currently deployed advanced driver assistance systems (ADAS) that manage emergency situations. The target of this study is to investigate the effectiveness, acceptance, and controllability of a specific kind of AWS that uses the haptic information channel for warning the driver. This could be beneficial, as alternatives for using the visual modality can help to reduce the risk of visual overload. The driving simulator study (N=24) compared an AWS based on additional steering wheel angle control (Car Gestures) with a visual warning presented in a simulated head-up display (HUD). Both types of warning were activated 3.5s before the hazard object was reached. An additional condition of unassisted driving completed the experimental design. The subjects encountered potential hazards in a variety of urban situations (e.g. a pedestrian standing on the curbs). For the investigated situations, subjective ratings show that a majority of drivers prefer visual warnings over haptic information via gestures. An analysis of driving behavior indicates that both warning approaches guide the vehicle away from the potential hazard. Whereas gestures lead to a faster lateral driving reaction (compared to HUD warnings), the visual warnings result in a greater safety benefit (measured by the minimum distance to the hazard object). A controllability study with gestures in the wrong direction (i.e. leading toward the hazard object) shows that drivers are able to cope with wrong haptic warnings and safety is not reduced compared to unassisted driving as well as compared to (correct) haptic gestures and visual warnings. Copyright © 2015 Elsevier Ltd. All rights reserved.

  4. Parallax visualization of full motion video using the Pursuer GUI

    Science.gov (United States)

    Mayhew, Christopher A.; Forgues, Mark B.

    2014-06-01

    In 2013, the authors reported to the SPIE on the Phase 1 development of a Parallax Visualization (PV) plug-in toolset for Wide Area Motion Imaging (WAMI) data using the Pursuer Graphical User Interface (GUI).1 In addition to the ability to apply PV to WAMI data, the Phase 1 plug-in toolset also featured a limited ability to visualize Full Motion Video (FMV) data. The ability to visualize both WAMI and FMV data is a highly advantageous capability for an Electric Light Table (ELT) toolset. This paper reports on the Phase 2 development and addition of a full-featured FMV capability to the Pursuer WAMI PV Plug-in.

  5. Consequence of audio visual collection in school libraries

    OpenAIRE

    Kuri, Ramesh

    2016-01-01

    An audio-visual collection in the library plays an important role in teaching and learning. The importance of audio-visual (AV) technology in education should not be underestimated. If an audio-visual collection in a library is carefully planned and designed, it can provide a rich learning environment. In this article, the author discusses the consequences of an audio-visual collection in libraries, especially for students using school libraries.

  6. Hierarchical sets: analyzing pangenome structure through scalable set visualizations

    Science.gov (United States)

    2017-01-01

    Abstract Motivation: The increase in available microbial genome sequences has resulted in an increase in the size of the pangenomes being analyzed. Current pangenome visualizations are not intended for the pangenome sizes possible today, and new approaches are necessary in order to convert the increase in available information into an increase in knowledge. As the pangenome data structure is essentially a collection of sets, we explore the potential for scalable set visualization as a tool for pangenome analysis. Results: We present a new hierarchical clustering algorithm based on set arithmetic that optimizes the intersection sizes along the branches. The intersection and union sizes along the hierarchy are visualized using a composite dendrogram and icicle plot, which, in a pangenome context, shows the evolution of pangenome and core size along the evolutionary hierarchy. Outlying elements, i.e. elements whose presence patterns do not correspond with the hierarchy, can be visualized using hierarchical edge bundles. When applied to pangenome data, this plot shows putative horizontal gene transfers between the genomes and can highlight relationships between genomes that are not represented by the hierarchy. We illustrate the utility of hierarchical sets by applying it to a pangenome based on 113 Escherichia and Shigella genomes and find it provides a powerful addition to pangenome analysis. Availability and Implementation: The described clustering algorithm and visualizations are implemented in the hierarchicalSets R package available from CRAN (https://cran.r-project.org/web/packages/hierarchicalSets). Contact: thomasp85@gmail.com Supplementary information: Supplementary data are available at Bioinformatics online. PMID:28130242
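
    The clustering idea, merging clusters so that the intersections (cores) along the branches remain as large as possible, can be illustrated with a toy sketch. The Python below is our own illustration of the principle, not the hierarchicalSets R package's algorithm or API, and the three miniature "genomes" are invented.

      # Toy sketch: greedily merge the two clusters whose combined core is largest,
      # reporting core (intersection) and pangenome (union) size at every merge.
      from functools import reduce

      def core(sets):
          return reduce(lambda a, b: a & b, sets)

      def pangenome(sets):
          return reduce(lambda a, b: a | b, sets)

      def hierarchical_sets(genomes):
          clusters = [([name], [genes]) for name, genes in genomes.items()]
          while len(clusters) > 1:
              pairs = [(i, j) for i in range(len(clusters)) for j in range(i + 1, len(clusters))]
              i, j = max(pairs, key=lambda p: len(core(clusters[p[0]][1] + clusters[p[1]][1])))
              names = clusters[i][0] + clusters[j][0]
              sets = clusters[i][1] + clusters[j][1]
              print(f"merge {names}: core={len(core(sets))} pangenome={len(pangenome(sets))}")
              clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [(names, sets)]

      hierarchical_sets({
          "E1": {"a", "b", "c", "d"},
          "E2": {"a", "b", "c", "e"},
          "S1": {"a", "b", "f"},
      })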

  7. Knowledge scaffolding visualizations: A guiding framework

    Directory of Open Access Journals (Sweden)

    Elitsa Alexander

    2015-06-01

    In this paper we provide a guiding framework for understanding and selecting visual representations in the knowledge management (KM) practice. We build on an interdisciplinary analogy between two connotations of the notion of “scaffolding”: physical scaffolding from an architectural-engineering perspective and scaffolding of the “everyday knowing in practice” from a KM perspective. We classify visual structures for knowledge communication in teams into four types of scaffolds: grounded (corresponding, e.g., to perspectives diagrams or dynamic facilitation diagrams), suspended (e.g., negotiation sketches, argument maps), panel (e.g., roadmaps or timelines) and reinforcing (e.g., concept diagrams). The article concludes with a set of recommendations in the form of questions to ask whenever practitioners are choosing visualizations for specific KM needs. Our recommendations aim at providing a framework at a broad-brush level to aid choosing a suitable visualization template depending on the type of KM endeavour.

  8. Architecture for Teraflop Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Breckenridge, A.R.; Haynes, R.A.

    1999-04-09

    Sandia Laboratories' computational scientists are addressing a very important question: how do we gain insight by combining human judgment with computer-generated information? The answer inevitably leads to using scientific visualization. Going one technology leap further is teraflop visualization, where the computing model and interactive graphics form an integral whole to provide computing for insight. In order to implement our teraflop visualization architecture, all hardware installed or software coded will be based on open modules and dynamic extensibility principles. We will illustrate these concepts with examples in our three main research areas: (1) authoring content (the computer), (2) enhancing precision and resolution (the human), and (3) adding behaviors (the physics).

  9. Development of Visualization Tools for ZPPR-15 Analysis

    International Nuclear Information System (INIS)

    Lee, Min Jae; Kim, Sang Ji

    2014-01-01

    ZPPR-15 cores consist of various drawer masters that have great heterogeneity. In order to build a proper homogenization strategy, the geometry of the drawer masters should be carefully analyzed with a visualization. Additionally, a visualization of drawer masters and the core configuration is necessary for minimizing human error during the input processing. For this purpose, visualization tools for a ZPPR-15 analysis have been developed based on a Perl script. In the following section, the implementation of the visualization tools will be described and various visualization samples for both drawer masters and ZPPR-15 cores will be demonstrated. Visualization tools for drawer masters and a core configuration were successfully developed for a ZPPR-15 analysis. The visualization tools are expected to be useful for understanding ZPPR-15 experiments and finding deterministic models of ZPPR-15. It turned out that generating VTK files is straightforward, and the resulting files become powerful when viewed with the VisIt program.
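
    Since the tool's output format is legacy VTK, the general shape of such a file is easy to show. The sketch below is a generic writer for a structured-points VTK file with one scalar field; it is not the Perl tool described above, the grid and field name are invented, and files written this way can be opened directly in VisIt.

      # Generic sketch: write a minimal legacy-format VTK file for a regular grid.
      def write_vtk_structured_points(path, values, nx, ny, nz, name="material_id"):
          assert len(values) == nx * ny * nz
          with open(path, "w") as f:
              f.write("# vtk DataFile Version 3.0\n")
              f.write("drawer master sketch\n")
              f.write("ASCII\n")
              f.write("DATASET STRUCTURED_POINTS\n")
              f.write(f"DIMENSIONS {nx} {ny} {nz}\n")
              f.write("ORIGIN 0 0 0\n")
              f.write("SPACING 1 1 1\n")
              f.write(f"POINT_DATA {nx * ny * nz}\n")
              f.write(f"SCALARS {name} float 1\n")
              f.write("LOOKUP_TABLE default\n")
              f.write("\n".join(str(v) for v in values) + "\n")

      # Hypothetical 4 x 3 x 2 grid of material identifiers.
      write_vtk_structured_points("drawer.vtk", list(range(24)), 4, 3, 2)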

  10. Public health nurse perceptions of Omaha System data visualization.

    Science.gov (United States)

    Lee, Seonah; Kim, Era; Monsen, Karen A

    2015-10-01

    Electronic health records (EHRs) provide many benefits related to the storage, deployment, and retrieval of large amounts of patient data. However, EHRs have not fully met the need to reuse data for decision making on follow-up care plans. Visualization offers new ways to present health data, especially in EHRs. Well-designed data visualization allows clinicians to communicate information efficiently and effectively, contributing to improved interpretation of clinical data and better patient care monitoring and decision making. Public health nurse (PHN) perceptions of Omaha System data visualization prototypes for use in EHRs have not been evaluated. To visualize PHN-generated Omaha System data and assess PHN perceptions regarding the visual validity, helpfulness, usefulness, and importance of the visualizations, including interactive functionality. Time-oriented visualization for problems and outcomes and Matrix visualization for problems and interventions were developed using PHN-generated Omaha System data to help PHNs consume data and plan care at the point of care. Eleven PHNs evaluated prototype visualizations. Overall PHNs response to visualizations was positive, and feedback for improvement was provided. This study demonstrated the potential for using visualization techniques within EHRs to summarize Omaha System patient data for clinicians. Further research is needed to improve and refine these visualizations and assess the potential to incorporate visualizations within clinical EHRs. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  11. Selective attention modulates the direction of audio-visual temporal recalibration.

    Science.gov (United States)

    Ikumi, Nara; Soto-Faraco, Salvador

    2014-01-01

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  12. Selective attention modulates the direction of audio-visual temporal recalibration.

    Directory of Open Access Journals (Sweden)

    Nara Ikumi

    Temporal recalibration of cross-modal synchrony has been proposed as a mechanism to compensate for timing differences between sensory modalities. However, far from the rich complexity of everyday life sensory environments, most studies to date have examined recalibration on isolated cross-modal pairings. Here, we hypothesize that selective attention might provide an effective filter to help resolve which stimuli are selected when multiple events compete for recalibration. We addressed this question by testing audio-visual recalibration following an adaptation phase where two opposing audio-visual asynchronies were present. The direction of voluntary visual attention, and therefore to one of the two possible asynchronies (flash leading or flash lagging), was manipulated using colour as a selection criterion. We found a shift in the point of subjective audio-visual simultaneity as a function of whether the observer had focused attention to audio-then-flash or to flash-then-audio groupings during the adaptation phase. A baseline adaptation condition revealed that this effect of endogenous attention was only effective toward the lagging flash. This hints at the role of exogenous capture and/or additional endogenous effects producing an asymmetry toward the leading flash. We conclude that selective attention helps promote selected audio-visual pairings to be combined and subsequently adjusted in time, but stimulus organization exerts a strong impact on recalibration. We tentatively hypothesize that the resolution of recalibration in complex scenarios involves the orchestration of top-down selection mechanisms and stimulus-driven processes.

  13. A method for visual inspection of welding by means of image processing of x-ray photograph

    International Nuclear Information System (INIS)

    Koshimizu, Hiroyasu; Yoshida, Tohru.

    1983-01-01

    Computer image processing is becoming a helpful tool even in industrial inspection. A computerized method for the visual inspection of welds is proposed in this paper. The method is based on computer image processing of X-ray photographs of welds, in which appearance information about the weldment, such as the shape of the weld bead, is actually present. Structural patterns are first extracted, and seven computed measures for inspection are calculated from those patterns. A software system for visual inspection is constructed based on these seven measures. Experiments showed that the system achieves a correlation of more than 0.85 with human visual inspection. As a result, computer-based visual inspection using X-ray photographs becomes a promising tool for making weld inspection objective and quantitative. Additionally, the consistency of the system, the possibility of reducing computing costs, and other aspects are discussed with a view to improving the proposed method. (author)

  14. Prefrontal contributions to visual selective attention.

    Science.gov (United States)

    Squire, Ryan F; Noudoost, Behrad; Schafer, Robert J; Moore, Tirin

    2013-07-08

    The faculty of attention endows us with the capacity to process important sensory information selectively while disregarding information that is potentially distracting. Much of our understanding of the neural circuitry underlying this fundamental cognitive function comes from neurophysiological studies within the visual modality. Past evidence suggests that a principal function of the prefrontal cortex (PFC) is selective attention and that this function involves the modulation of sensory signals within posterior cortices. In this review, we discuss recent progress in identifying the specific prefrontal circuits controlling visual attention and its neural correlates within the primate visual system. In addition, we examine the persisting challenge of precisely defining how behavior should be affected when attentional function is lost.

  15. Effective visualization assay for alcohol content sensing and methanol differentiation with solvent stimuli-responsive supramolecular ionic materials.

    Science.gov (United States)

    Zhang, Li; Qi, Hetong; Wang, Yuexiang; Yang, Lifen; Yu, Ping; Mao, Lanqun

    2014-08-05

    This study demonstrates a rapid visualization assay for on-spot sensing of alcohol content as well as for discriminating methanol-containing beverages with solvent stimuli-responsive supramolecular ionic material (SIM). The SIM is synthesized by ionic self-assembling of imidazolium-based dication C10(mim)2 and dianionic 2,2'-azino-bis (3-ethylbenzothiazoline-6-sulfonic acid) (ABTS) in water and shows water stability, a solvent stimuli-responsive property, and adaptive encapsulation capability. The rationale for the visualization assay demonstrated here is based on the combined utilization of the unique properties of SIM, including its water stability, ethanol stimuli-responsive feature, and adaptive encapsulation capability toward optically active rhodamine 6G (Rh6G); the addition of ethanol into a stable aqueous dispersion of Rh6G-encapsulated SIM (Rh6G-SIM) disrupts the Rh6G-SIM structure, resulting in the release of Rh6G from SIM into the solvent. Alcohol content can thus be visualized with the naked eye through the color change of the dispersion caused by the addition of ethanol. Alcohol content can also be quantified by measuring the fluorescence line of Rh6G released from Rh6G-SIM on a thin-layer chromatography (TLC) plate in response to alcoholic beverages. By fixing the diffusion distance of the mobile phase, the fluorescence line of Rh6G shows a linear relationship with alcohol content (vol %) within a concentration range from 15% to 40%. We utilized this visualization assay for on-spot visualizing of the alcohol contents of three Chinese commercial spirits and discriminating methanol-containing counterfeit beverages. We found that addition of a trace amount of methanol leads to a large increase in the length of the Rh6G line on TLC plates, which provides a method to identify methanol-adulterated beverages with labeled ethanol content. This study provides a simple yet effective assay for alcohol content sensing and methanol differentiation.
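
    The quantification step described above amounts to a linear calibration of fluorescence-line length against alcohol content. The numbers in the sketch below are invented placeholders (the record only states that the relationship is linear between 15% and 40% vol); it simply fits a line and inverts it to read off an unknown sample.

      # Hypothetical calibration fit; all readings are made up for illustration.
      import numpy as np

      alcohol_vol_pct = np.array([15, 20, 25, 30, 35, 40])
      line_length_mm = np.array([8.0, 11.5, 15.2, 18.8, 22.1, 25.9])

      slope, intercept = np.polyfit(alcohol_vol_pct, line_length_mm, 1)

      def estimate_alcohol(length_mm):
          return (length_mm - intercept) / slope

      print(f"length = {slope:.3f} * vol% + {intercept:.3f}")
      print("unknown sample at 17.0 mm ->", round(estimate_alcohol(17.0), 1), "vol%")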

  16. Visual working memory contaminates perception.

    Science.gov (United States)

    Kang, Min-Suk; Hong, Sang Wook; Blake, Randolph; Woodman, Geoffrey F

    2011-10-01

    Indirect evidence suggests that the contents of visual working memory may be maintained within sensory areas early in the visual hierarchy. We tested this possibility using a well-studied motion repulsion phenomenon in which perception of one direction of motion is distorted when another direction of motion is viewed simultaneously. We found that observers misperceived the actual direction of motion of a single motion stimulus if, while viewing that stimulus, they were holding a different motion direction in visual working memory. Control experiments showed that none of a variety of alternative explanations could account for this repulsion effect induced by working memory. Our findings provide compelling evidence that visual working memory representations directly interact with the same neural mechanisms as those involved in processing basic sensory events.

  17. Student Visual Communication of Evolution

    Science.gov (United States)

    Oliveira, Alandeom W.; Cook, Kristin

    2017-06-01

    Despite growing recognition of the importance of visual representations to science education, previous research has given attention mostly to verbal modalities of evolution instruction. Visual aspects of classroom learning of evolution are yet to be systematically examined by science educators. The present study attends to this issue by exploring the types of evolutionary imagery deployed by secondary students. Our visual design analysis revealed that students resorted to two larger categories of images when visually communicating evolution: spatial metaphors (images that provided a spatio-temporal account of human evolution as a metaphorical "walk" across time and space) and symbolic representations ("icons of evolution" such as personal portraits of Charles Darwin that simply evoked evolutionary theory rather than metaphorically conveying its conceptual contents). It is argued that students need opportunities to collaboratively critique evolutionary imagery and to extend their visual perception of evolution beyond dominant images.

  18. Improving visual perception through neurofeedback

    Science.gov (United States)

    Scharnowski, Frank; Hutton, Chloe; Josephs, Oliver; Weiskopf, Nikolaus; Rees, Geraint

    2012-01-01

    Perception depends on the interplay of ongoing spontaneous activity and stimulus-evoked activity in sensory cortices. This raises the possibility that training ongoing spontaneous activity alone might be sufficient for enhancing perceptual sensitivity. To test this, we trained human participants to control ongoing spontaneous activity in circumscribed regions of retinotopic visual cortex using real-time functional MRI based neurofeedback. After training, we tested participants using a new and previously untrained visual detection task that was presented at the visual field location corresponding to the trained region of visual cortex. Perceptual sensitivity was significantly enhanced only when participants who had previously learned control over ongoing activity were now exercising control, and only for that region of visual cortex. Our new approach allows us to non-invasively and non-pharmacologically manipulate regionally specific brain activity, and thus provide ‘brain training’ to deliver particular perceptual enhancements. PMID:23223302

  19. VISUAL3D - An EIT network on visualization of geomodels

    Science.gov (United States)

    Bauer, Tobias

    2017-04-01

    When it comes to interpreting data and understanding deep geological structures and bodies at different scales, modelling tools and modelling experience are vital for deep exploration. Geomodelling provides a platform for integration of different types of data, including new kinds of information (e.g., new improved measuring methods). EIT Raw Materials, initiated by the EIT (European Institute of Innovation and Technology) and funded by the European Commission, is the largest and strongest consortium in the raw materials sector worldwide. The VISUAL3D network of infrastructure is an initiative by EIT Raw Materials and aims at bringing together partners with 3D-4D-visualisation infrastructure and 3D-4D-modelling experience. The recently formed network collaboration interlinks hardware, software and expert knowledge in modelling, visualization and output. A special focus is the linking of research, education and industry, the integration of multi-disciplinary data, and the visualization of those data in three and four dimensions. By aiding network collaborations we aim at improving the combination of geomodels with differing file formats and data characteristics. This will create increased competency in modelling visualization and the ability to interchange and communicate models more easily. By combining knowledge and experience in geomodelling with expertise in Virtual Reality visualization, partners of EIT Raw Materials as well as external parties will be able to visualize, analyze and validate their geomodels in immersive VR environments. The current network combines partners from universities, research institutes, geological surveys and industry with a strong background in geological 3D-modelling and 3D visualization and comprises: Luleå University of Technology, Geological Survey of Finland, Geological Survey of Denmark and Greenland, TUBA Freiberg, Uppsala University, Geological Survey of France, RWTH Aachen, DMT, KGHM Cuprum, Boliden, Montan

  20. A model for visual memory encoding.

    Directory of Open Access Journals (Sweden)

    Rodolphe Nenert

    Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by a specific disease processes and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.

  1. A model for visual memory encoding.

    Science.gov (United States)

    Nenert, Rodolphe; Allendorfer, Jane B; Szaflarski, Jerzy P

    2014-01-01

    Memory encoding engages multiple concurrent and sequential processes. While the individual processes involved in successful encoding have been examined in many studies, a sequence of events and the importance of modules associated with memory encoding has not been established. For this reason, we sought to perform a comprehensive examination of the network for memory encoding using data driven methods and to determine the directionality of the information flow in order to build a viable model of visual memory encoding. Forty healthy controls ages 19-59 performed a visual scene encoding task. FMRI data were preprocessed using SPM8 and then processed using independent component analysis (ICA) with the reliability of the identified components confirmed using ICASSO as implemented in GIFT. The directionality of the information flow was examined using Granger causality analyses (GCA). All participants performed the fMRI task well above the chance level (>90% correct on both active and control conditions) and the post-fMRI testing recall revealed correct memory encoding at 86.33 ± 5.83%. ICA identified involvement of components of five different networks in the process of memory encoding, and the GCA allowed for the directionality of the information flow to be assessed, from visual cortex via ventral stream to the attention network and then to the default mode network (DMN). Two additional networks involved in this process were the cerebellar and the auditory-insular network. This study provides evidence that successful visual memory encoding is dependent on multiple modules that are part of other networks that are only indirectly related to the main process. This model may help to identify the node(s) of the network that are affected by a specific disease process and explain the presence of memory encoding difficulties in patients in whom focal or global network dysfunction exists.
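
    The Granger-causality step of this pipeline can be illustrated with a short, hedged sketch: two toy component time courses stand in for ICA-derived network signals, and statsmodels tests whether one helps predict the other. The array names and lag choice are illustrative assumptions, not the study's GIFT/ICASSO pipeline.

        # Minimal sketch of a Granger causality test between two network
        # time courses (toy data, not the study's fMRI components).
        import numpy as np
        from statsmodels.tsa.stattools import grangercausalitytests

        rng = np.random.default_rng(0)
        visual_ts = rng.standard_normal(200)                                    # e.g., a visual-network component
        attention_ts = np.roll(visual_ts, 2) + 0.5 * rng.standard_normal(200)   # lagged, noisy copy

        # Column order is [effect, cause]: this asks whether visual_ts
        # Granger-causes attention_ts at lags 1..3.
        data = np.column_stack([attention_ts, visual_ts])
        results = grangercausalitytests(data, maxlag=3)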

  2. Design of an aid to visual inspection workstation

    Science.gov (United States)

    Tait, Robert; Harding, Kevin

    2016-05-01

    Visual Inspection is the most common means for inspecting manufactured parts for random defects such as pits, scratches, breaks, corrosion or general wear. The reason for the need for visual inspection is the very random nature of what might be a defect. Some defects may be very rare, being seen once or twice a year, but may still be critical to part performance. Because of this random and rare nature, even the most sophisticated image analysis programs have not been able to recognize all possible defects. Key to any future automation of inspection is obtaining good sample images of what might be a defect. However, most visual checks take no images and consequently generate no digital data or historical record beyond a simple count. Any additional tool to capture such images must be able to do so without taking additional time. This paper outlines the design of a potential visual inspection station that would be compatible with current visual inspection methods, but afford the means for reliable digital imaging and in many cases augmented capabilities to assist the inspection. Considerations in this study included: resolution, depth of field, feature highlighting, ease of digital capture, annotation, and inspection augmentation for repeatable registration as well as operator assistance and training.

  3. Cone visual pigments are present in gecko rod cells.

    Science.gov (United States)

    Kojima, D; Okano, T; Fukada, Y; Shichida, Y; Yoshizawa, T; Ebrey, T G

    1992-08-01

    The Tokay gecko (Gekko gekko), a nocturnal lizard, has two kinds of visual pigments, P467 and P521. In spite of the pure-rod morphology of the photoreceptor cells, the biochemical properties of P521 and P467 resemble those of iodopsin (the chicken red-sensitive cone visual pigment) and rhodopsin, respectively. We have found that the amino acid sequence of P521 deduced from the cDNA was very similar to that of iodopsin. In addition, P467 has the highest homology with the chicken green-sensitive cone visual pigment, although it also has a relatively high homology with rhodopsins. These results give additional strength to the transmutation theory of Walls [Walls, G. L. (1934) Am. J. Ophthalmol. 17, 892-915], who proposed that the rod-shaped photoreceptor cells of lizards have been derived from ancestral cone-like photoreceptors. Apparently amino acid sequences of visual pigments are less changeable than the morphology of the photoreceptor cells in the course of evolution.

  4. Cortical visual impairment: Characteristics and treatment

    Directory of Open Access Journals (Sweden)

    Vučinić Vesna

    2014-01-01

    Full Text Available According to the latest studies, Cortical visual impairment – CVI is one of the most common causes of problems and difficulties in visual functioning. It results from the impairment of the central part of visual system, i.e. visual cortex, posterior visual pathway, or both. The diagnosis is usually made in the first three years of life. The aim of this paper is to present the characteristics of children with CVI, and the strategies used for treatment. CVI has a negative impact on almost all developmental domains, visual-perceptive skills, motor skills, cognitive skills, and social skills. In children with CVI, vision ranges from the total inability to see to minimal visual perceptive difficulties, while more than 50% have multiple disabilities. Due to the progress in understanding the patterns of neuron activity and neuroplasticity, as well as the intensive studies of strengths and weaknesses of children with CVI, special treatment has been designed and performed in the last few decades, which provides optimal visual functioning in everyday life for these children.

  5. Software attribute visualization for high integrity software

    Energy Technology Data Exchange (ETDEWEB)

    Pollock, G.M.

    1998-03-01

    This report documents a prototype tool developed to investigate the use of visualization and virtual reality technologies for improving software surety confidence. The tool is utilized within the execution phase of the software life cycle. It provides a capability to monitor an executing program against prespecified requirements constraints provided in a program written in the requirements specification language SAGE. The resulting Software Attribute Visual Analysis Tool (SAVAnT) also provides a technique to assess the completeness of a software specification.

  6. Immersion and togetherness: How live visualization of audience engagement can enhance music events

    NARCIS (Netherlands)

    N. Shirzadian (Najereh); J.A. Redi (Judith); T. Röggla (Tom); A. Panza (Alice); F.-M. Nack (Frank); P.S. Cesar Garcia (Pablo Santiago)

    2017-01-01

    textabstractThis paper evaluates the influence of an additional visual aesthetic layer on the experience of concert goers during a live event. The additional visual layer incorporates musical features as well as bio-sensing data collected during the concert, which is coordinated by our audience

  7. Visual impairment and traits of autism in children.

    Science.gov (United States)

    Wrzesińska, Magdalena; Kapias, Joanna; Nowakowska-Domagała, Katarzyna; Kocur, Józef

    2017-04-30

    Visual impairment present from birth or from early childhood may lead to psychosocial and emotional disorders. 11-40% of children with visual impairment show traits of autism. The aim of this paper was to present selected examples of how visual impairment in children is related to the occurrence of autism and to describe the available tools for diagnosing autism in children with visual impairment. So far the relation between visual impairment in children and autism has not been sufficiently confirmed. Psychiatric and psychological diagnosis of children with visual impairment is complicated by the difficulty of differentiating between "blindism" and traits typical of autism, which results from the lack of standardized tools for diagnosing children with visual impairment. Another difficulty in diagnosing autism in children with visual impairment is the coexistence of other disabilities in most children with vision impairment. Additionally, apart from difficulties in diagnosing autistic disorders in children with eye dysfunctions, there is also the question of what tools should be used in the therapy and rehabilitation of these patients.

  8. The effects of acute alcohol exposure on the response properties of neurons in visual cortex area 17 of cats

    International Nuclear Information System (INIS)

    Chen Bo; Xia Jing; Li Guangxing; Zhou Yifeng

    2010-01-01

    Physiological and behavioral studies have demonstrated that a number of visual functions such as visual acuity, contrast sensitivity, and motion perception can be impaired by acute alcohol exposure. The orientation- and direction-selective responses of cells in primary visual cortex are thought to participate in the perception of form and motion. To investigate how orientation selectivity and direction selectivity of neurons are influenced by acute alcohol exposure in vivo, we used the extracellular single-unit recording technique to examine the response properties of neurons in primary visual cortex (A17) of adult cats. We found that alcohol reduces spontaneous activity, visual evoked unit responses, the signal-to-noise ratio, and orientation selectivity of A17 cells. In addition, small but detectable changes in both the preferred orientation/direction and the bandwidth of the orientation tuning curve of strongly orientation-biased A17 cells were observed after acute alcohol administration. Our findings may provide physiological evidence for some alcohol-related deficits in visual function observed in behavioral studies.
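
    As a hedged illustration of how orientation selectivity is commonly quantified from a tuning curve (the study's own index may differ), the following sketch computes a simple orientation selectivity index from toy firing rates.

        # Toy orientation tuning curve and a common selectivity index:
        # OSI = (R_pref - R_orth) / (R_pref + R_orth).
        import numpy as np

        orientations = np.arange(0, 180, 22.5)                        # degrees
        responses = np.array([5., 9., 22., 40., 23., 10., 6., 4.])    # spikes/s

        pref_idx = responses.argmax()
        orth_idx = (pref_idx + len(responses) // 2) % len(responses)  # 90 deg away
        osi = (responses[pref_idx] - responses[orth_idx]) / \
              (responses[pref_idx] + responses[orth_idx])
        print("preferred orientation:", orientations[pref_idx], "deg, OSI:", round(osi, 2))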

  9. Learning Visual Design through Hypermedia: Pathways to Visual Literacy.

    Science.gov (United States)

    Lockee, Barbara; Hergert, Tom

    The interactive multimedia application described here attempts to provide learners and teachers with a common frame of reference for communicating about visual media. The system is based on a list of concepts related to composition, and illustrates those concepts with photographs, paintings, graphic designs, and motion picture scenes. The ability…

  10. Visual function, driving safety, and the elderly.

    Science.gov (United States)

    Keltner, J L; Johnson, C A

    1987-09-01

    The authors have conducted a survey of the Departments of Motor Vehicles in all 50 states, the District of Columbia, and Puerto Rico requesting information about the visual standards, accidents, and conviction rates for different age groups. In addition, we have reviewed the literature on visual function and traffic safety. Elderly drivers have a greater number of vision problems that affect visual acuity and/or peripheral visual fields. Although the elderly are responsible for a small percentage of the total number of traffic accidents, the types of accidents they are involved in (e.g., failure to yield the right-of-way, intersection collisions, left turns onto crossing streets) may be related to peripheral and central visual field problems. Because age-related changes in performance occur at different rates for various individuals, licensing of the elderly driver should be based on functional abilities rather than age. Based on information currently available, we can make the following recommendations: (1) periodic evaluations of visual acuity and visual fields should be performed every 1 to 2 years in the population over age 65; (2) drivers of any age with multiple accidents or moving violations should have visual acuity and visual fields evaluated; and (3) a system should be developed for physicians to report patients with potentially unsafe visual function. The authors believe that these recommendations may help to reduce the number of traffic accidents that result from peripheral visual field deficits.

  11. Enhancement of Online Robotics Learning Using Real-Time 3D Visualization Technology

    Directory of Open Access Journals (Sweden)

    Richard Chiou

    2010-06-01

    Full Text Available This paper discusses a real-time e-Lab Learning system based on the integration of 3D visualization technology with a remote robotic laboratory. With the emergence and development of the Internet, online learning is proving to play a significant role in the upcoming era. In an effort to enhance Internet-based learning of robotics and keep up with the rapid progression of technology, a 3-Dimensional scheme of viewing the robotic laboratory has been introduced in addition to the remote controlling of the robots. The uniqueness of the project lies in making this process Internet-based, with the robots operated remotely and visualized in 3D. This 3D system approach provides the students with a more realistic feel of the 3D robotic laboratory even though they are working remotely. As a result, the 3D visualization technology has been tested as part of a laboratory in the MET 205 Robotics and Mechatronics class and has received positive feedback from most of the students. This type of research has introduced a new level of realism and visual communication to online laboratory learning in a remote classroom.

  12. GPU-based large-scale visualization

    KAUST Repository

    Hadwiger, Markus

    2013-11-19

    Recent advances in image and volume acquisition as well as computational advances in simulation have led to an explosion of the amount of data that must be visualized and analyzed. Modern techniques combine the parallel processing power of GPUs with out-of-core methods and data streaming to enable the interactive visualization of giga- and terabytes of image and volume data. A major enabler for interactivity is making both the computational and the visualization effort proportional to the amount of data that is actually visible on screen, decoupling it from the full data size. This leads to powerful display-aware multi-resolution techniques that enable the visualization of data of almost arbitrary size. The course consists of two major parts: An introductory part that progresses from fundamentals to modern techniques, and a more advanced part that discusses details of ray-guided volume rendering, novel data structures for display-aware visualization and processing, and the remote visualization of large online data collections. You will learn how to develop efficient GPU data structures and large-scale visualizations, implement out-of-core strategies and concepts such as virtual texturing that have only been employed recently, as well as how to use modern multi-resolution representations. These approaches reduce the GPU memory requirements of extremely large data to a working set size that fits into current GPUs. You will learn how to perform ray-casting of volume data of almost arbitrary size and how to render and process gigapixel images using scalable, display-aware techniques. We will describe custom virtual texturing architectures as well as recent hardware developments in this area. We will also describe client/server systems for distributed visualization, on-demand data processing and streaming, and remote visualization. We will describe implementations using OpenGL as well as CUDA, exploiting parallelism on GPUs combined with additional asynchronous

  13. Eye structure, activity rhythms and visually-driven behavior are tuned to visual niche in ants

    Directory of Open Access Journals (Sweden)

    Ayse eYilmaz

    2014-06-01

    Full Text Available Insects have evolved physiological adaptations and behavioural strategies that allow them to cope with a broad spectrum of environmental challenges and contribute to their evolutionary success. Visual performance plays a key role in this success. Correlates between life style and eye organization have been reported in various insect species. Yet, if and how visual ecology translates effectively into different visual discrimination and learning capabilities has been less explored. Here we report results from optical and behavioural analyses performed in two sympatric ant species, Formica cunicularia and Camponotus aethiops. We show that the former are diurnal while the latter are cathemeral. Accordingly, F. cunicularia workers present compound eyes with higher resolution, while C. aethiops workers exhibit eyes with lower resolution but higher sensitivity. The discrimination and learning of visual stimuli differs significantly between these species in controlled dual-choice experiments: discrimination learning of small-field visual stimuli is achieved by F. cunicularia but not by C. aethiops, while both species master the discrimination of large-field visual stimuli. Our work thus provides a paradigmatic example about how timing of foraging activities and visual environment match the organization of compound eyes and visually-driven behaviour. This correspondence underlines the relevance of an ecological/evolutionary framework for analyses in behavioural neuroscience.

  14. Graph-based clustering and data visualization algorithms

    CERN Document Server

    Vathy-Fogarassy, Ágnes

    2013-01-01

    This work presents a data visualization technique that combines graph-based topology representation and dimensionality reduction methods to visualize the intrinsic data structure in a low-dimensional vector space. The application of graphs in clustering and visualization has several advantages. A graph of important edges (where edges characterize relations and weights represent similarities or distances) provides a compact representation of the entire complex data set. This text describes clustering and visualization methods that are able to utilize information hidden in these graphs, based on

  15. Sports Stars: Analyzing the Performance of Astronomers at Visualization-based Discovery

    Science.gov (United States)

    Fluke, C. J.; Parrington, L.; Hegarty, S.; MacMahon, C.; Morgan, S.; Hassan, A. H.; Kilborn, V. A.

    2017-05-01

    In this data-rich era of astronomy, there is a growing reliance on automated techniques to discover new knowledge. The role of the astronomer may change from being a discoverer to being a confirmer. But what do astronomers actually look at when they distinguish between “sources” and “noise?” What are the differences between novice and expert astronomers when it comes to visual-based discovery? Can we identify elite talent or coach astronomers to maximize their potential for discovery? By looking to the field of sports performance analysis, we consider an established, domain-wide approach, where the expertise of the viewer (i.e., a member of the coaching team) plays a crucial role in identifying and determining the subtle features of gameplay that provide a winning advantage. As an initial case study, we investigate whether the SportsCode performance analysis software can be used to understand and document how an experienced HI astronomer makes discoveries in spectral data cubes. We find that the process of timeline-based coding can be applied to spectral cube data by mapping spectral channels to frames within a movie. SportsCode provides a range of easy to use methods for annotation, including feature-based codes and labels, text annotations associated with codes, and image-based drawing. The outputs, including instance movies that are uniquely associated with coded events, provide the basis for a training program or team-based analysis that could be used in unison with discipline specific analysis software. In this coordinated approach to visualization and analysis, SportsCode can act as a visual notebook, recording the insight and decisions in partnership with established analysis methods. Alternatively, in situ annotation and coding of features would be a valuable addition to existing and future visualization and analysis packages.

  16. Issues and Problems in Malaysian Contemporary Visual Arts

    Directory of Open Access Journals (Sweden)

    Mohamad Faizuan Mat

    2016-06-01

    Full Text Available In Malaysia, there is a question in terms of intellectual activities in the context of visual epistemology. Therefore, this paper revealed the problems that linger in the Malaysian contemporary visual art scene. In fact, Malaysian contemporary artists appear to have insufficient intellectual values and fewer discourse activities. The lack of scholars in the field of visual arts creates a gap in the visual arts scene in Malaysia. The aim of this study was to uncover the main problems in Malaysian visual arts that led to the problem of art intellectual development. In addition, this paper presents awareness of the valuable contributions to intellectual development that are able to enhance communication about the art object. Keywords: art knowledge; art object; contemporary art; interpretation; perception

  17. Classification of visual and linguistic tasks using eye-movement features.

    Science.gov (United States)

    Coco, Moreno I; Keller, Frank

    2014-03-07

    The role of the task has received special attention in visual-cognition research because it can provide causal explanations of goal-directed eye-movement responses. The dependency between visual attention and task suggests that eye movements can be used to classify the task being performed. A recent study by Greene, Liu, and Wolfe (2012), however, fails to achieve accurate classification of visual tasks based on eye-movement features. In the present study, we hypothesize that tasks can be successfully classified when they differ with respect to the involvement of other cognitive domains, such as language processing. We extract the eye-movement features used by Greene et al. as well as additional features from the data of three different tasks: visual search, object naming, and scene description. First, we demonstrated that eye-movement responses make it possible to characterize the goals of these tasks. Then, we trained three different types of classifiers and predicted the task participants performed with an accuracy well above chance (a maximum of 88% for visual search). An analysis of the relative importance of features for classification accuracy reveals that just one feature, i.e., initiation time, is sufficient for above-chance performance (a maximum of 79% accuracy in object naming). Crucially, this feature is independent of task duration, which differs systematically across the three tasks we investigated. Overall, the best task classification performance was obtained with a set of seven features that included both spatial information (e.g., entropy of attention allocation) and temporal components (e.g., total fixation on objects) of the eye-movement record. This result confirms the task-dependent allocation of visual attention and extends previous work by showing that task classification is possible when tasks differ in the cognitive processes involved (purely visual tasks such as search vs. communicative tasks such as scene description).
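
    The classification step described here can be sketched with scikit-learn; the feature matrix, labels and classifier below are illustrative placeholders rather than the study's actual features or models.

        # Hedged sketch: predicting the task (search / naming / description)
        # from per-trial eye-movement features with cross-validation.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # rows = trials; columns = e.g. initiation time, mean fixation duration,
        # fixation count, saccade amplitude, attention-map entropy, ...
        X = rng.random((150, 7))
        y = np.repeat(["search", "naming", "description"], 50)

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        scores = cross_val_score(clf, X, y, cv=5)
        print("mean accuracy:", scores.mean())   # chance level here is about 0.33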

  18. Design of smart home sensor visualizations for older adults.

    Science.gov (United States)

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-07-24

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualization that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity within a specific date. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  19. Design of smart home sensor visualizations for older adults.

    Science.gov (United States)

    Le, Thai; Reeder, Blaine; Chung, Jane; Thompson, Hilaire; Demiris, George

    2014-01-01

    Smart home sensor systems provide a valuable opportunity to continuously and unobtrusively monitor older adult wellness. However, the density of sensor data can be challenging to visualize, especially for an older adult consumer with distinct user needs. We describe the design of sensor visualizations informed by interviews with older adults. The goal of the visualizations is to present sensor activity data to an older adult consumer audience that supports both longitudinal detection of trends and on-demand display of activity details for any chosen day. The design process is grounded through participatory design with older adult interviews during a six-month pilot sensor study. Through a secondary analysis of interviews, we identified the visualization needs of older adults. We incorporated these needs with cognitive perceptual visualization guidelines and the emotional design principles of Norman to develop sensor visualizations. We present a design of sensor visualization that integrates both temporal and spatial components of information. The visualization supports longitudinal detection of trends while allowing the viewer to view activity within a specific date. Appropriately designed visualizations for older adults not only provide insight into health and wellness, but also are a valuable resource to promote engagement within care.

  20. Standalone visualization tool for three-dimensional DRAGON geometrical models

    International Nuclear Information System (INIS)

    Lukomski, A.; McIntee, B.; Moule, D.; Nichita, E.

    2008-01-01

    DRAGON is a neutron transport and depletion code able to solve one-, two- and three-dimensional problems. To date DRAGON provides two visualization modules, able to represent respectively two- and three-dimensional geometries. The two-dimensional visualization module generates a postscript file, while the three-dimensional visualization module generates a MATLAB M-file with instructions for drawing the tracks in the DRAGON TRACKING data structure, which implicitly provide a representation of the geometry. The current work introduces a new, standalone, tool based on the open-source Visualization Toolkit (VTK) software package which allows the visualization of three-dimensional geometrical models by reading the DRAGON GEOMETRY data structure and generating an axonometric image which can be manipulated interactively by the user. (author)
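
    The DRAGON GEOMETRY reader itself is not a public API, but the rendering side of such a tool can be sketched with VTK's Python bindings: a placeholder geometry is mapped to an actor and shown with a parallel (axonometric-style) projection that the user can rotate interactively.

        # Minimal VTK sketch: placeholder geometry rendered with a parallel
        # projection, as a stand-in for a standalone geometry viewer.
        import vtk

        source = vtk.vtkCylinderSource()              # placeholder for the real geometry
        mapper = vtk.vtkPolyDataMapper()
        mapper.SetInputConnection(source.GetOutputPort())
        actor = vtk.vtkActor()
        actor.SetMapper(mapper)

        renderer = vtk.vtkRenderer()
        renderer.AddActor(actor)
        renderer.GetActiveCamera().ParallelProjectionOn()   # axonometric view

        window = vtk.vtkRenderWindow()
        window.AddRenderer(renderer)
        interactor = vtk.vtkRenderWindowInteractor()
        interactor.SetRenderWindow(window)
        window.Render()
        interactor.Start()                            # interactive manipulation by the user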

  1. Visual motion transforms visual space representations similarly throughout the human visual hierarchy.

    Science.gov (United States)

    Harvey, Ben M; Dumoulin, Serge O

    2016-02-15

    Several studies demonstrate that visual stimulus motion affects neural receptive fields and fMRI response amplitudes. Here we unite results of these two approaches and extend them by examining the effects of visual motion on neural position preferences throughout the hierarchy of human visual field maps. We measured population receptive field (pRF) properties using high-field fMRI (7T), characterizing position preferences simultaneously over large regions of the visual cortex. We measured pRF properties using sine wave gratings in stationary apertures, moving at various speeds in either the direction of pRF measurement or the orthogonal direction. We find direction- and speed-dependent changes in pRF preferred position and size in all visual field maps examined, including V1, V3A, and the MT+ map TO1. These effects on pRF properties increase up the hierarchy of visual field maps. However, both within and between visual field maps the extent of pRF changes was approximately proportional to pRF size. This suggests that visual motion transforms the representation of visual space similarly throughout the visual hierarchy. Visual motion can also produce an illusory displacement of perceived stimulus position. We demonstrate perceptual displacements using the same stimulus configuration. In contrast to effects on pRF properties, perceptual displacements show only weak effects of motion speed, with far larger speed-independent effects. We describe a model where low-level mechanisms could underlie the observed effects on neural position preferences. We conclude that visual motion induces similar transformations of visuo-spatial representations throughout the visual hierarchy, which may arise through low-level mechanisms. Copyright © 2015 Elsevier Inc. All rights reserved.
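
    The pRF idea itself can be sketched in a few lines: a voxel's predicted response is the overlap of a 2-D Gaussian with the stimulus aperture at each time point, and the preferred position is the Gaussian centre that best predicts the measured time course. Everything below (grid, apertures, noiseless data) is a toy assumption, not the authors' 7T analysis.

        # Toy population receptive field (pRF) estimation by grid search.
        import numpy as np

        grid = np.linspace(-5, 5, 41)                     # visual field positions (deg)
        xx, yy = np.meshgrid(grid, grid)

        def gaussian_prf(x0, y0, sigma=1.0):
            return np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))

        rng = np.random.default_rng(0)
        apertures = rng.random((100, 41, 41)) > 0.7       # binary stimulus masks over time
        measured = apertures.reshape(100, -1) @ gaussian_prf(1.5, -0.5).ravel()

        def fit_quality(centre):
            predicted = apertures.reshape(100, -1) @ gaussian_prf(*centre).ravel()
            return np.corrcoef(predicted, measured)[0, 1]

        best = max(((x0, y0) for x0 in grid for y0 in grid), key=fit_quality)
        print("estimated pRF centre:", best)              # recovers roughly (1.5, -0.5)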

  2. Data mining and visualization techniques

    Science.gov (United States)

    Wong, Pak Chung [Richland, WA; Whitney, Paul [Richland, WA; Thomas, Jim [Richland, WA

    2004-03-23

    Disclosed are association rule identification and visualization methods, systems, and apparatus. An association rule in data mining is an implication of the form X → Y where X is a set of antecedent items and Y is the consequent item. A unique visualization technique that provides multiple antecedent, consequent, confidence, and support information is disclosed to facilitate better presentation of large quantities of complex association rules.
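
    The support and confidence measures referenced for a rule X → Y are easy to illustrate; the transactions below are toy data, not the patent's examples.

        # Support and confidence of an association rule X -> Y over a
        # small set of toy transactions.
        def support(itemset, transactions):
            return sum(itemset <= t for t in transactions) / len(transactions)

        def confidence(antecedent, consequent, transactions):
            return (support(antecedent | consequent, transactions)
                    / support(antecedent, transactions))

        transactions = [{"bread", "milk"}, {"bread", "butter"},
                        {"bread", "milk", "butter"}, {"milk"}]
        x, y = {"bread"}, {"milk"}
        print("support:", support(x | y, transactions))        # 0.5
        print("confidence:", confidence(x, y, transactions))   # ~0.67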

  3. The Open Space Sculptures Used in the Gençlik Park towards Visual Perception of Park Users

    Directory of Open Access Journals (Sweden)

    Ahmet Polat

    2012-11-01

    Full Text Available Urban parks are the most important areas that allow recreational activities in our towns. Increasing the visual quality of urban parks has positive impacts on urban quality. Besides the artistic and technical features of the open space sculptures used in urban park designs, the visual perceptions and preferences of park users are also important. In the context of this study, six sculptures in Gençlik Park, within the boundaries of Ankara, have been considered. The aim of the study is to measure the visual quality of the sculptures in urban parks through park users and to reveal the relationship between visual landscape indicators (being interesting, coherence, complexity, meaningfulness, and mystery) and visual quality. For this purpose, the six sculptures in Ankara Youth Park were evaluated within the scope of the research. According to the results of the study, park users like the sculptures visually. A statistically significant relationship was found between the visual quality of the sculptures and some landscape indicators (being interesting, mystery and harmony). In addition, some suggestions were made regarding the use of sculptures in urban parks.

  4. Storytelling and Visualization: An Extended Survey

    Directory of Open Access Journals (Sweden)

    Chao Tong

    2018-03-01

    Full Text Available Throughout history, storytelling has been an effective way of conveying information and knowledge. In the field of visualization, storytelling is rapidly gaining momentum and evolving cutting-edge techniques that enhance understanding. Many communities have commented on the importance of storytelling in data visualization. Storytellers are integrating complex visualizations into their narratives in growing numbers. In this paper, we present a survey of storytelling literature in visualization and give an overview of the common and important elements in storytelling visualization. We also describe the challenges in this field as well as a novel classification of the literature on storytelling in visualization. Our classification scheme highlights the open and unsolved problems in this field as well as the more mature storytelling sub-fields. The survey offers a concise overview and a starting point into this rapidly evolving research trend and provides a deeper understanding of the topic.

  5. Geoplotlib: a Python Toolbox for Visualizing Geographical Data

    OpenAIRE

    Cuttone, Andrea; Lehmann, Sune; Larsen, Jakob Eg

    2016-01-01

    We introduce geoplotlib, an open-source python toolbox for visualizing geographical data. geoplotlib supports the development of hardware-accelerated interactive visualizations in pure python, and provides implementations of dot maps, kernel density estimation, spatial graphs, Voronoi tesselation, shapefiles and many more common spatial visualizations. We describe geoplotlib design, functionalities and use cases.
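
    A minimal usage sketch, assuming a CSV file with 'lat' and 'lon' columns (the file name here is a placeholder): a dot-map layer and a kernel density layer are added and shown in the interactive viewer.

        # Hedged geoplotlib sketch; 'points.csv' is an assumed input file
        # containing 'lat' and 'lon' columns.
        import geoplotlib
        from geoplotlib.utils import read_csv

        data = read_csv('points.csv')
        geoplotlib.dot(data)          # dot map layer
        geoplotlib.kde(data, bw=5)    # kernel density estimation layer
        geoplotlib.show()             # opens the hardware-accelerated interactive window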

  6. Interactive 4D Visualization of Sediment Transport Models

    Science.gov (United States)

    Butkiewicz, T.; Englert, C. M.

    2013-12-01

    Coastal sediment transport models simulate the effects that waves, currents, and tides have on near-shore bathymetry and features such as beaches and barrier islands. Understanding these dynamic processes is integral to the study of coastline stability, beach erosion, and environmental contamination. Furthermore, analyzing the results of these simulations is a critical task in the design, placement, and engineering of coastal structures such as seawalls, jetties, support pilings for wind turbines, etc. Despite the importance of these models, there is a lack of available visualization software that allows users to explore and perform analysis on these datasets in an intuitive and effective manner. Existing visualization interfaces for these datasets often present only one variable at a time, using two dimensional plan or cross-sectional views. These visual restrictions limit the ability to observe the contents in the proper overall context, both in spatial and multi-dimensional terms. To improve upon these limitations, we use 3D rendering and particle system based illustration techniques to show water column/flow data across all depths simultaneously. We can also encode multiple variables across different perceptual channels (color, texture, motion, etc.) to enrich surfaces with multi-dimensional information. Interactive tools are provided, which can be used to explore the dataset and find regions-of-interest for further investigation. Our visualization package provides an intuitive 4D (3D, time-varying) visualization of sediment transport model output. In addition, we are also integrating real world observations with the simulated data to support analysis of the impact from major sediment transport events. In particular, we have been focusing on the effects of Superstorm Sandy on the Redbird Artificial Reef Site, offshore of Delaware Bay. Based on our pre- and post-storm high-resolution sonar surveys, there has been significant scour and bedform migration around the

  7. Teach yourself visually Office 2013

    CERN Document Server

    Marmel, Elaine

    2013-01-01

    Learn the new Microsoft Office suite the easy, visual way Microsoft Office 2013 is a power-packed suite of office productivity tools including Word, Excel, PowerPoint, Outlook, Access, and Publisher. This easy-to-use visual guide covers the basics of all six programs, with step-by-step instructions and full-color screen shots showing what you should see at each step. You'll also learn about using Office Internet and graphics tools, while the additional examples and advice scattered through the book give you tips on maximizing the Office suite. If you learn best when you can see how

  8. Realistic Visualization of Virtual Views

    DEFF Research Database (Denmark)

    Livatino, Salvatore

    2005-01-01

    that can be impractical and sometimes impossible. In addition, the artificial nature of data often makes visualized virtual scenarios not realistic enough. Not realistic in the sense that a synthetic scene is easy to discriminate visually from a natural scene. A new field of research has consequently...... developed and received much attention in recent years: Realistic Virtual View Synthesis. The main goal is a high fidelity representation of virtual scenarios while easing modeling and physical phenomena simulation. In particular, realism is achieved by the transfer to the novel view of all the physical...... phenomena captured in the reference photographs, (i.e. the transfer of photographic-realism). An overview of most prominent approaches in realistic virtual view synthesis will be presented and briefly discussed. Applications of proposed methods to visual survey, virtual cinematography, as well as mobile...

  9. Visual Attention in Posterior Stroke and Relations to Alexia

    DEFF Research Database (Denmark)

    Petersen, Anders; Vangkilde, Signe; Fabricius, Charlotte

    2016-01-01

    that reduced visual speed and span may explain pure alexia. Eight patients with unilateral PCA strokes (four left hemisphere, four right hemisphere) were selected on the basis of lesion location, rather than the presence of any visual symptoms. Visual attention was characterized by a whole report paradigm......Impaired visual attention is common following strokes in the territory of the middle cerebral artery, particularly in the right hemisphere, while attentional effects of more posterior lesions are less clear. Commonly, such deficits are investigated in relation to specific syndromes like visual...... agnosia or pure alexia. The aim of this study was to characterize visual processing speed and apprehension span following posterior cerebral artery (PCA) stroke. In addition, the relationship between these attentional parameters and single word reading is investigated, as previous studies have suggested...

  10. Prefrontal Neurons Represent Motion Signals from Across the Visual Field But for Memory-Guided Comparisons Depend on Neurons Providing These Signals.

    Science.gov (United States)

    Wimmer, Klaus; Spinelli, Philip; Pasternak, Tatiana

    2016-09-07

    Visual decisions often involve comparisons of sequential stimuli that can appear at any location in the visual field. The lateral prefrontal cortex (LPFC) in nonhuman primates, shown to play an important role in such comparisons, receives information about contralateral stimuli directly from sensory neurons in the same hemisphere, and about ipsilateral stimuli indirectly from neurons in the opposite hemisphere. This asymmetry of sensory inputs into the LPFC poses the question of whether and how its neurons incorporate sensory information arriving from the two hemispheres during memory-guided comparisons of visual motion. We found that, although responses of individual LPFC neurons to contralateral stimuli were stronger and emerged 40 ms earlier, they carried remarkably similar signals about motion direction in the two hemifields, with comparable direction selectivity and similar direction preferences. This similarity was also apparent around the time of the comparison between the current and remembered stimulus because both ipsilateral and contralateral responses showed similar signals reflecting the remembered direction. However, despite availability in the LPFC of motion information from across the visual field, these "comparison effects" required the comparison stimuli to appear at the same retinal location. This strict dependence on spatial overlap of the comparison stimuli suggests participation of neurons with localized receptive fields in the comparison process. These results suggest that while LPFC incorporates many key aspects of the information arriving from sensory neurons residing in opposite hemispheres, it continues relying on the interactions with these neurons at the time of generating signals leading to successful perceptual decisions. Visual decisions often involve comparisons of sequential visual motion that can appear at any location in the visual field. We show that during such comparisons, the lateral prefrontal cortex (LPFC) contains

  11. Alterations of the visual pathways in congenital blindness

    DEFF Research Database (Denmark)

    Ptito, Maurice; Schneider, Fabien C G; Paulson, Olaf B

    2008-01-01

    /19 and the middle temporal cortex (MT) showing volume reductions of up to 20%. Additional significant white matter alterations were observed in the inferior longitudinal tract and in the posterior part of the corpus callosum, which links the visual areas of both hemispheres. Our data indicate that the afferent...... projections to the visual cortex in CB are largely atrophied. Despite the massive volume reductions in the occipital lobes, there is compelling evidence from the literature (reviewed in Noppeney 2007; Ptito and Kupers 2005) that blind subjects activate their visual cortex when performing tasks that involve...

  12. Flow Visualization with Quantified Spatial and Temporal Errors Using Edge Maps

    KAUST Repository

    Bhatia, H.; Jadhav, S.; Bremer, P.; Guoning Chen,; Levine, J. A.; Nonato, L. G.; Pascucci, V.

    2012-01-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure to adhere to a user defined error bound. Finally, we introduce new visualizations using the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures. © 2012 IEEE.

  13. Flow Visualization with Quantified Spatial and Temporal Errors Using Edge Maps

    KAUST Repository

    Bhatia, H.

    2012-09-01

    Robust analysis of vector fields has been established as an important tool for deriving insights from the complex systems these fields model. Traditional analysis and visualization techniques rely primarily on computing streamlines through numerical integration. The inherent numerical errors of such approaches are usually ignored, leading to inconsistencies that cause unreliable visualizations and can ultimately prevent in-depth analysis. We propose a new representation for vector fields on surfaces that replaces numerical integration through triangles with maps from the triangle boundaries to themselves. This representation, called edge maps, permits a concise description of flow behaviors and is equivalent to computing all possible streamlines at a user defined error threshold. Independent of this error, streamlines computed using edge maps are guaranteed to be consistent up to floating point precision, enabling the stable extraction of features such as the topological skeleton. Furthermore, our representation explicitly stores spatial and temporal errors which we use to produce more informative visualizations. This work describes the construction of edge maps, the error quantification, and a refinement procedure to adhere to a user defined error bound. Finally, we introduce new visualizations using the additional information provided by edge maps to indicate the uncertainty involved in computing streamlines and topological structures. © 2012 IEEE.
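
    The numerical-integration error that edge maps make explicit can be illustrated with a toy comparison (not the edge-map construction itself): Euler and fourth-order Runge-Kutta steps through the same analytic field drift apart, and their gap is a usable per-streamline error estimate.

        # Toy illustration of streamline integration error in a pure
        # rotation field: Euler vs. RK4 starting from the same seed.
        import numpy as np

        def v(p):                                  # analytic test vector field
            return np.array([-p[1], p[0]])

        def euler_step(p, h):
            return p + h * v(p)

        def rk4_step(p, h):
            k1 = v(p); k2 = v(p + 0.5 * h * k1)
            k3 = v(p + 0.5 * h * k2); k4 = v(p + h * k3)
            return p + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

        p_euler = p_rk4 = np.array([1.0, 0.0])
        for _ in range(1000):
            p_euler, p_rk4 = euler_step(p_euler, 0.01), rk4_step(p_rk4, 0.01)
        print("integrator discrepancy:", np.linalg.norm(p_euler - p_rk4))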

  14. Visual Infrared Color Gradients in Elliptical Galaxies

    NARCIS (Netherlands)

    Peletier, R. F.; Valentijn, E. A.; Jameson, R. F.; de Zeeuw, P.T.

    1987-01-01

    Simultaneous measurements for visual and visual-infrared colors provide the means to determine both the average temperature of the giant branch and the turnoff-temperature of the main sequence. This makes it possible to model fractional contributions of different populations, including age- and

  15. Recent Visual Experience Shapes Visual Processing in Rats through Stimulus-Specific Adaptation and Response Enhancement.

    Science.gov (United States)

    Vinken, Kasper; Vogels, Rufin; Op de Beeck, Hans

    2017-03-20

    From an ecological point of view, it is generally suggested that the main goal of vision in rats and mice is navigation and (aerial) predator evasion [1-3]. The latter requires fast and accurate detection of a change in the visual environment. An outstanding question is whether there are mechanisms in the rodent visual system that would support and facilitate visual change detection. An experimental protocol frequently used to investigate change detection in humans is the oddball paradigm, in which a rare, unexpected stimulus is presented in a train of stimulus repetitions [4]. A popular "predictive coding" theory of cortical responses states that neural responses should decrease for expected sensory input and increase for unexpected input [5, 6]. Despite evidence for response suppression and enhancement in noninvasive scalp recordings in humans with this paradigm [7, 8], it has proven challenging to observe both phenomena in invasive action potential recordings in other animals [9-11]. During a visual oddball experiment, we recorded multi-unit spiking activity in rat primary visual cortex (V1) and latero-intermediate area (LI), which is a higher area of the rodent ventral visual stream. In rat V1, there was only evidence for response suppression related to stimulus-specific adaptation, and not for response enhancement. However, higher up in area LI, spiking activity showed clear surprise-based response enhancement in addition to stimulus-specific adaptation. These results show that neural responses along the rat ventral visual stream become increasingly sensitive to changes in the visual environment, suggesting a system specialized in the detection of unexpected events. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. Invariant Visual Object and Face Recognition: Neural and Computational Bases, and a Model, VisNet.

    Science.gov (United States)

    Rolls, Edmund T

    2012-01-01

    Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short-term memory trace, and/or it can use spatial continuity in continuous spatial transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The approach has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The approach has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
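
    The trace learning rule at the core of VisNet can be sketched in a few lines; the learning-rate and trace constants below are assumed values, and this is an illustration of the rule rather than the published VisNet code.

        # Trace rule: associate the current input with a short-term memory
        # trace of the output, so features seen close in time (successive
        # transforms of one object) converge onto the same output neurons.
        import numpy as np

        def trace_rule_update(w, x, y, y_trace, eta=0.8, alpha=0.01):
            y_trace = eta * y_trace + (1.0 - eta) * y    # exponential trace of output activity
            w = w + alpha * np.outer(y_trace, x)         # Hebbian update against the trace
            return w, y_trace

        rng = np.random.default_rng(0)
        w = rng.normal(scale=0.01, size=(10, 64))        # output x input weights
        y_trace = np.zeros(10)
        for x in rng.random((5, 64)):                    # e.g., five transforms of one object
            y = w @ x
            w, y_trace = trace_rule_update(w, x, y, y_trace)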

  17. COMICS: Cartoon Visualization of Omics Data in Spatial Context Using Anatomical Ontologies.

    Science.gov (United States)

    Travin, Dmitrii; Popov, Iaroslav; Guler, Arzu Tugce; Medvedev, Dmitry; van der Plas-Duivesteijn, Suzanne; Varela, Monica; Kolder, Iris C R M; Meijer, Annemarie H; Spaink, Herman P; Palmblad, Magnus

    2018-01-05

    COMICS is an interactive and open-access web platform for integration and visualization of molecular expression data in anatomograms of zebrafish, carp, and mouse model systems. Anatomical ontologies are used to map omics data across experiments and between an experiment and a particular visualization in a data-dependent manner. COMICS is built on top of several existing resources. Zebrafish and mouse anatomical ontologies with their controlled vocabulary (CV) and defined hierarchy are used with the ontoCAT R package to aggregate data for comparison and visualization. Libraries from the QGIS geographical information system are used with the R packages "maps" and "maptools" to visualize and interact with molecular expression data in anatomical drawings of the model systems. COMICS allows users to upload their own data from omics experiments, using any gene or protein nomenclature they wish, as long as CV terms are used to define anatomical regions or developmental stages. Additional support is provided for common nomenclatures such as ZFIN gene names and UniProt accessions. COMICS can be used to generate publication-quality visualizations of gene and protein expression across experiments. Unlike previous tools that have used anatomical ontologies to interpret imaging data in several animal models, including zebrafish, COMICS is designed to take spatially resolved data generated by dissection or fractionation and display this data in visually clear anatomical representations rather than large data tables. COMICS is optimized for ease-of-use, with a minimalistic web interface and automatic selection of the appropriate visual representation depending on the input data.

  18. Multiple variables data sets visualization in ROOT

    International Nuclear Information System (INIS)

    Couet, O

    2008-01-01

    The ROOT graphical framework provides support for many different functions including basic graphics, high-level visualization techniques, output on files, 3D viewing etc. They use well-known world standards to render graphics on screen, to produce high-quality output files, and to generate images for Web publishing. Many techniques allow visualization of all the basic ROOT data types, but the graphical framework was still somewhat weak in the visualization of multiple-variable data sets. This paper presents the latest developments in the ROOT framework for visualizing multiple-variable (>4) data sets.
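
    ROOT's own multi-variable facilities (e.g., parallel coordinates) are documented in the framework itself; as a language-neutral illustration of the underlying idea, the following sketch draws a parallel-coordinates plot of a toy five-variable data set with pandas and matplotlib rather than ROOT.

        # Parallel-coordinates illustration for a >4-variable data set
        # (pandas/matplotlib stand-in, not ROOT code).
        import numpy as np
        import pandas as pd
        import matplotlib.pyplot as plt
        from pandas.plotting import parallel_coordinates

        rng = np.random.default_rng(1)
        df = pd.DataFrame(rng.standard_normal((100, 5)),
                          columns=["E", "px", "py", "pz", "q"])
        df["sample"] = np.where(df["E"] > 0, "signal", "background")  # toy class column

        parallel_coordinates(df, "sample", alpha=0.3)
        plt.show()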

  19. A Visualization Review of Cloud Computing Algorithms in the Last Decade

    Directory of Open Access Journals (Sweden)

    Junhu Ruan

    2016-10-01

    Full Text Available Cloud computing has competitive advantages—such as on-demand self-service, rapid computing, cost reduction, and almost unlimited storage—that have attracted extensive attention from both academia and industry in recent years. Some review works have been reported to summarize extant studies related to cloud computing, but few analyze these studies based on the citations. Co-citation analysis can provide scholars a strong support to identify the intellectual bases and leading edges of a specific field. In addition, advanced algorithms, which can directly affect the availability, efficiency, and security of cloud computing, are the key to conducting computing across various clouds. Motivated by these observations, we conduct a specific visualization review of the studies related to cloud computing algorithms using one mainstream co-citation analysis tool—CiteSpace. The visualization results detect the most influential studies, journals, countries, institutions, and authors on cloud computing algorithms and reveal the intellectual bases and focuses of cloud computing algorithms in the literature, providing guidance for interested researchers to make further studies on cloud computing algorithms.

  20. Interactive data visualization foundations, techniques, and applications

    CERN Document Server

    Ward, Matthew; Keim, Daniel

    2010-01-01

    Visualization is the process of representing data, information, and knowledge in a visual form to support the tasks of exploration, confirmation, presentation, and understanding. This book is designed as a textbook for students, researchers, analysts, professionals, and designers of visualization techniques, tools, and systems. It covers the full spectrum of the field, including mathematical and analytical aspects, ranging from its foundations to human visual perception; from coded algorithms for different types of data, information and tasks to the design and evaluation of new visualization techniques. Sample programs are provided as starting points for building one's own visualization tools. Numerous data sets have been made available that highlight different application areas and allow readers to evaluate the strengths and weaknesses of different visualization methods. Exercises, programming projects, and related readings are given for each chapter. The book concludes with an examination of several existin...

  1. Intuitive Visualization of Transient Flow: Towards a Full 3D Tool

    Science.gov (United States)

    Michel, Isabel; Schröder, Simon; Seidel, Torsten; König, Christoph

    2015-04-01

    Visualization of geoscientific data is a challenging task, especially when targeting a non-professional audience. In particular, the graphical presentation of transient vector data can be a significant problem. With STRING, Fraunhofer ITWM (Kaiserslautern, Germany) in collaboration with delta h Ingenieurgesellschaft mbH (Witten, Germany) developed commercial software for intuitive 2D visualization of 3D flow problems. Through the intuitive character of the visualization, experts can more easily transport their findings to non-professional audiences. In STRING, pathlets moving with the flow provide an intuition of velocity and direction of both steady-state and transient flow fields. The visualization concept is based on the Lagrangian view of the flow, which means that the pathlets' movement is along the direction given by pathlines. In order to capture every detail of the flow, an advanced method for intelligent, time-dependent seeding of the pathlets is implemented based on ideas of the Finite Pointset Method (FPM) originally conceived at and continuously developed by Fraunhofer ITWM. Furthermore, by the same method pathlets are removed during the visualization to avoid visual cluttering. Additional scalar flow attributes, for example concentration or potential, can either be mapped directly to the pathlets or displayed in the background of the pathlets on the 2D visualization plane. The extensive capabilities of STRING are demonstrated with the help of different applications in groundwater modeling. We will discuss the strengths and current restrictions of STRING which have surfaced during daily use of the software, for example by delta h. Although the software focusses on the graphical presentation of flow data for non-professional audiences, its intuitive visualization has also proven useful to experts when investigating details of flow fields. Due to the popular reception of STRING and its limitation to 2D, the need arises for the extension to a full 3D tool
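
    The Lagrangian idea behind the pathlets can be sketched with a simple particle advection loop; the analytic velocity field, seeding and step size below are assumptions, and STRING's FPM-based seeding/removal logic is not reproduced.

        # Toy pathlet advection: seed particles and move them along the
        # flow with small explicit time steps.
        import numpy as np

        def velocity(points, t):
            # Analytic rotational flow; a real model would interpolate simulation data.
            x, y = points[:, 0], points[:, 1]
            return np.column_stack([-y, x])

        rng = np.random.default_rng(0)
        particles = rng.uniform(-1, 1, size=(500, 2))    # initial seeding
        dt, t = 0.01, 0.0
        trail = [particles.copy()]
        for _ in range(200):                             # advect along pathlines
            particles = particles + dt * velocity(particles, t)
            t += dt
            trail.append(particles.copy())               # positions to draw as pathlets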

  2. Scientific visualization uncertainty, multifield, biomedical, and scalable visualization

    CERN Document Server

    Chen, Min; Johnson, Christopher; Kaufman, Arie; Hagen, Hans

    2014-01-01

    Based on the seminar that took place in Dagstuhl, Germany in June 2011, this contributed volume studies the four important topics within the scientific visualization field: uncertainty visualization, multifield visualization, biomedical visualization and scalable visualization. • Uncertainty visualization deals with uncertain data from simulations or sampled data, uncertainty due to the mathematical processes operating on the data, and uncertainty in the visual representation, • Multifield visualization addresses the need to depict multiple data at individual locations and the combination of multiple datasets, • Biomedical is a vast field with select subtopics addressed from scanning methodologies to structural applications to biological applications, • Scalability in scientific visualization is critical as data grows and computational devices range from hand-held mobile devices to exascale computational platforms. Scientific Visualization will be useful to practitioners of scientific visualization, ...

  3. Interactive Visualization of Healthcare Data Using Tableau.

    Science.gov (United States)

    Ko, Inseok; Chang, Hyejung

    2017-10-01

    Big data analysis is receiving increasing attention in many industries, including healthcare. Visualization plays an important role not only in intuitively showing the results of data analysis but also in the whole process of collecting, cleaning, analyzing, and sharing data. This paper presents a procedure for the interactive visualization and analysis of healthcare data using Tableau as a business intelligence tool. Starting with installation of the Tableau Desktop Personal version 10.3, this paper describes the process of understanding and visualizing healthcare data using an example. The example data on colon cancer patients were obtained from health insurance claims in the years 2012 and 2013, provided by the Health Insurance Review and Assessment Service. To introduce beginners to the visualization of healthcare data using Tableau, this paper describes the creation of a simple view of the average length of stay of colon cancer patients. Since Tableau provides various visualizations and customizations, the level of analysis can be increased with small multiples, view filtering, mark cards, and Tableau charts. Tableau is software that helps users explore and understand their data by creating interactive visualizations. It has the advantages that it can be used in conjunction with almost any database and that interactive visualizations in the desired format can be created simply by dragging and dropping.
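
    The measure built in the Tableau example above (average length of stay, split by year) can also be expressed outside Tableau; the following pandas sketch shows an equivalent aggregation on a small invented table, since the actual HIRA claims schema is not described in the abstract.

      import pandas as pd

      # Invented stand-in for the claims records; field names are hypothetical.
      claims = pd.DataFrame({
          "patient_id":     [1, 2, 3, 4, 5, 6],
          "year":           [2012, 2012, 2012, 2013, 2013, 2013],
          "sex":            ["F", "M", "F", "M", "F", "M"],
          "length_of_stay": [12, 8, 15, 9, 11, 7],
      })

      # Average length of stay by year and sex, the kind of breakdown that a
      # "small multiples" view in Tableau would display one panel per group.
      avg_los = claims.groupby(["year", "sex"])["length_of_stay"].mean().unstack()
      print(avg_los)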

  4. Visual pollution in public spaces in Venezuela

    International Nuclear Information System (INIS)

    Mendez Velandia, Carmen Arelys

    2013-01-01

    Every day, city inhabitants are exposed to visual pollution. This work assesses the environmental impact caused by visual pollution in public spaces, using as a case study a mixed-use neighborhood in San Cristobal, the capital of Tachira state, Venezuela. The assessment was made using a qualitative approach, with special emphasis on the perception of these impacts by a purposive sample of users of the area. The compilation and analysis of the information reveal the main visual pollutants in these public spaces: in addition to outdoor advertising, they include overhead wires, rubbish, graffiti, vacant land, cars, and outdoor kiosks, among others. Neighborhood users are sensitive to the presence of visual pollutants, which affect them physically and psychologically and degrade the visual quality of their environment. These perceptions were used to guide a qualitative appraisal of the environmental impacts generated by these circumstances and to propose policies to mitigate them.

  5. The Two Visual Systems Hypothesis: new challenges and insights from visual form agnosic patient DF

    Directory of Open Access Journals (Sweden)

    Robert Leslie Whitwell

    2014-12-01

    Full Text Available Patient DF, who developed visual form agnosia following carbon monoxide poisoning, is still able to use vision to adjust the configuration of her grasping hand to the geometry of a goal object. This striking dissociation between perception and action in DF provided a key piece of evidence for the formulation of Goodale and Milner’s Two Visual Systems Hypothesis (TVSH). According to the TVSH, the ventral stream plays a critical role in constructing our visual percepts, whereas the dorsal stream mediates the visual control of action, such as visually guided grasping. In this review, we discuss recent studies of DF that provide new insights into the functional organization of the dorsal and ventral streams. We confirm recent evidence that DF has dorsal as well as ventral brain damage – and that her dorsal-stream lesions and surrounding atrophy have increased in size since her first published brain scan. We argue that the damage to DF’s dorsal stream explains her deficits in directing actions at targets in the periphery. We then focus on DF’s ability to accurately adjust her in-flight hand aperture to changes in the width of goal objects (grip scaling) whose dimensions she cannot explicitly report. An examination of several studies of DF’s grip scaling under natural conditions reveals a modest though significant deficit. Importantly, however, she continues to show a robust dissociation between form vision for perception and form vision for action. We also review recent studies that explore the role of online visual feedback and terminal haptic feedback in the programming and control of her grasping. These studies make it clear that DF is no more reliant on visual or haptic feedback than are neurologically-intact individuals. In short, we argue that her ability to grasp objects depends on visual feedforward processing carried out by visuomotor networks in her dorsal stream that function in much the same way as they do in neurologically-intact individuals.

  6. WebViz:A Web-based Collaborative Interactive Visualization System for large-Scale Data Sets

    Science.gov (United States)

    Yuen, D. A.; McArthur, E.; Weiss, R. M.; Zhou, J.; Yao, B.

    2010-12-01

    WebViz is a web-based application designed to conduct collaborative, interactive visualizations of large data sets for multiple users, allowing researchers situated all over the world to utilize the visualization services offered by the University of Minnesota’s Laboratory for Computational Sciences and Engineering (LCSE). This ongoing project has been built upon over the last 3 1/2 years. The motivation behind WebViz lies primarily with the need to parse through an increasing amount of data produced by the scientific community as a result of larger and faster multicore and massively parallel computers coming to the market, including the use of general purpose GPU computing. WebViz allows these large data sets to be visualized online by anyone with an account. The application allows users to save time and resources by visualizing data ‘on the fly’, wherever they may be located. By leveraging AJAX via the Google Web Toolkit (http://code.google.com/webtoolkit/), we are able to provide users with a remote web portal to LCSE's (http://www.lcse.umn.edu) large-scale interactive visualization system already in place at the University of Minnesota. LCSE’s custom hierarchical volume rendering software provides high resolution visualizations on the order of 15 million pixels and has been employed for visualizing data primarily from simulations ranging from astrophysics to geophysical fluid dynamics. In the current version of WebViz, we have implemented a highly extensible back-end framework built around HTTP "server push" technology. The web application is accessible via a variety of devices including netbooks, iPhones, and other web and javascript-enabled cell phones. Features in the current version include the ability for users to (1) securely log in, (2) launch multiple visualizations, (3) conduct collaborative visualization sessions, (4) delegate control aspects of a visualization to others, and (5) engage in collaborative chats with other users within the user interface
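
    WebViz itself is built on the Google Web Toolkit, so its back end is Java; purely as a language-neutral illustration of the HTTP "server push" pattern mentioned above, here is a minimal Python sketch that streams Server-Sent Events to a browser. The endpoint name and payload are invented and do not reflect WebViz's actual interface.

      import json
      import time
      from http.server import BaseHTTPRequestHandler, HTTPServer

      class PushHandler(BaseHTTPRequestHandler):
          """Streams visualization status updates to the client as Server-Sent Events."""

          def do_GET(self):
              if self.path != "/updates":            # hypothetical endpoint
                  self.send_error(404)
                  return
              self.send_response(200)
              self.send_header("Content-Type", "text/event-stream")
              self.send_header("Cache-Control", "no-cache")
              self.end_headers()
              try:
                  for frame in range(5):             # pretend render progress
                      payload = json.dumps({"frame": frame, "status": "rendered"})
                      self.wfile.write(f"data: {payload}\n\n".encode())
                      self.wfile.flush()
                      time.sleep(1.0)
              except BrokenPipeError:
                  pass                               # client disconnected

      if __name__ == "__main__":
          HTTPServer(("localhost", 8000), PushHandler).serve_forever()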

  7. Coherent visualization of spatial data adapted to roles, tasks, and hardware

    Science.gov (United States)

    Wagner, Boris; Peinsipp-Byma, Elisabeth

    2012-06-01

    Modern crisis management requires users with different roles and computer environments to deal with a high volume of heterogeneous data from different sources. For this purpose, Fraunhofer IOSB has developed a geographic information system (GIS) which supports the user depending on the available data and the task to be solved. The system provides merging and visualization of spatial data from various civilian and military sources. It supports the most common spatial data standards (OGC, STANAG) as well as some proprietary interfaces, regardless of whether these are file-based or database-based. To set the visualization rules, generic Styled Layer Descriptors (SLDs), an Open Geospatial Consortium (OGC) standard, are used. SLDs specify which data are shown, when, and how. The defined SLDs consider the users' roles and task requirements. In addition, different displays can be used, and the visualization adapts to the individual resolution of the display, so that excessively high or low information density is avoided. Our system also enables users with different roles to work together simultaneously using the same database. Every user is provided with appropriate and coherent spatial data depending on his or her current task. These refined spatial data are served via the OGC services Web Map Service (WMS: server-side rendered raster maps) or Web Map Tile Service (WMTS: pre-rendered and cached raster maps).
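
    The WMS interface mentioned above is an open OGC standard, so a client can request role-specific renderings with an ordinary GetMap call. The sketch below builds such a request in Python; the endpoint URL, layer name, and style name are hypothetical, since the Fraunhofer IOSB service itself is not public.

      from urllib.parse import urlencode
      from urllib.request import urlretrieve

      # Hypothetical WMS endpoint and layer/style names for illustration only.
      WMS_URL = "https://example.org/geoserver/wms"

      params = {
          "SERVICE": "WMS",
          "VERSION": "1.3.0",
          "REQUEST": "GetMap",
          "LAYERS": "operational_picture",
          "STYLES": "commander_role",      # server-side SLD chosen per user role
          "CRS": "EPSG:4326",
          "BBOX": "48.0,7.0,49.0,8.0",     # lat/lon axis order for EPSG:4326 in WMS 1.3.0
          "WIDTH": "1024",                 # matched to the client's display resolution
          "HEIGHT": "768",
          "FORMAT": "image/png",
      }

      # Download the server-side rendered raster map for this role and extent.
      urlretrieve(f"{WMS_URL}?{urlencode(params)}", "map.png")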

  8. Visualization of graphical information fusion results

    Science.gov (United States)

    Blasch, Erik; Levchuk, Georgiy; Staskevich, Gennady; Burke, Dustin; Aved, Alex

    2014-06-01

    Graphical fusion methods are popular for describing distributed sensor applications such as target tracking and pattern recognition. Additional graphical methods include network analysis for social, communications, and sensor management. With the growing availability of various data modalities, graphical fusion methods are widely used to combine data from multiple sensors and modalities. To better understand the usefulness of graph fusion approaches, we address visualization to increase user comprehension of multi-modal data. The paper demonstrates a use case that combines graphs from text reports and target tracks to associate events and activities of interest, with visualization used for testing Measures of Performance (MOP) and Measures of Effectiveness (MOE). The analysis includes the presentation of the separate graphs and then a graph-fusion visualization linking the network graphs for tracking and classification.
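
    A minimal sketch of the graph-fusion step described above, using NetworkX: one small graph built from text reports and one from target tracks are composed on their shared nodes, which is what lets events be associated with track data. All node names and attributes are invented examples, not data from the paper.

      import networkx as nx

      # Hypothetical fragments: a graph from text reports and one from target tracks.
      reports = nx.Graph()
      reports.add_edge("vehicle_17", "checkpoint_A", source="report", time="09:12")
      reports.add_edge("vehicle_17", "person_3", source="report")

      tracks = nx.Graph()
      tracks.add_edge("vehicle_17", "track_0042", source="tracker", confidence=0.87)
      tracks.add_edge("track_0042", "checkpoint_A", source="tracker")

      # Graph fusion by composition: nodes shared between modalities (here
      # "vehicle_17" and "checkpoint_A") become the links that associate
      # reported events with track data.
      fused = nx.compose(reports, tracks)
      print(fused.number_of_nodes(), fused.number_of_edges())

      # A simple MOP-style check: which nodes are supported by both modalities?
      shared = set(reports.nodes) & set(tracks.nodes)
      print("cross-modal associations:", sorted(shared))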

  9. Understanding visualization: a formal approach using category theory and semiotics.

    Science.gov (United States)

    Vickers, Paul; Faith, Joe; Rossiter, Nick

    2013-06-01

    This paper combines the vocabulary of semiotics and category theory to provide a formal analysis of visualization. It shows how familiar processes of visualization fit the semiotic frameworks of both Saussure and Peirce, and extends these structures using the tools of category theory to provide a general framework for understanding visualization in practice, including: Relationships between systems, data collected from those systems, renderings of those data in the form of representations, the reading of those representations to create visualizations, and the use of those visualizations to create knowledge and understanding of the system under inspection. The resulting framework is validated by demonstrating how familiar information visualization concepts (such as literalness, sensitivity, redundancy, ambiguity, generalizability, and chart junk) arise naturally from it and can be defined formally and precisely. This paper generalizes previous work on the formal characterization of visualization by, inter alia, Ziemkiewicz and Kosara and allows us to formally distinguish properties of the visualization process that previous work does not.

  10. A telephone survey of low vision services in U.S. schools for the blind and visually impaired.

    Science.gov (United States)

    Kran, Barry S; Wright, Darick W

    2008-07-01

    The scope of clinical low vision services and access to comprehensive eye care through U.S. schools for the blind and visually impaired is not well known. Advances in medicine and educational trends toward inclusion have resulted in higher numbers of visually impaired children with additional cognitive, motor, and developmental impairments enrolled in U.S. schools for the blind and visually impaired. The availability and frequency of eye care and vision education services for individuals with visual and multiple impairments at schools for the blind is explored in this report using data collected in a 24-item telephone survey from 35 of 42 identified U.S. schools for the blind. The results indicate that 54% of the contacted schools (19) offer clinical eye examinations. All of these schools provide eye care to the 6 to 21 age group, yet only 10 schools make this service available to children from birth to 3 years of age. In addition, two thirds of these schools discontinue eye care when the students graduate or transition to adult service agencies. The majority (94.7%) of eye care is provided by optometrists or a combination of optometry and ophthalmology, and 42.1% of these schools have an affiliation with an optometric institution. When there is a collaborative agreement, clinical services for students are available more frequently. The authors find that questions emerge regarding access to care, identification of appropriate models of care, and training of educational/medical/optometric personnel to meet the needs of a very complex patient population.

  11. Accumulation and Decay of Visual Capture and the Ventriloquism Aftereffect Caused by Brief Audio-Visual Disparities

    Science.gov (United States)

    Bosen, Adam K.; Fleming, Justin T.; Allen, Paul D.; O’Neill, William E.; Paige, Gary D.

    2016-01-01

    Visual capture and the ventriloquism aftereffect resolve spatial disparities of incongruent auditory-visual (AV) objects by shifting auditory spatial perception to align with vision. Here, we demonstrated the distinct temporal characteristics of visual capture and the ventriloquism aftereffect in response to brief AV disparities. In a set of experiments, subjects localized either the auditory component of AV targets (A within AV) or a second sound presented at varying delays (1-20s) after AV exposure (A2 after AV). AV targets were trains of brief presentations (1 or 20), covering a ±30° azimuthal range, and with ±8° (R or L) disparity. We found that the magnitude of visual capture generally reached its peak within a single AV pair and did not dissipate with time, while the ventriloquism aftereffect accumulated with repetitions of AV pairs and dissipated with time. Additionally, the magnitude of the auditory shift induced by each phenomenon was uncorrelated across listeners and visual capture was unaffected by subsequent auditory targets, indicating that visual capture and the ventriloquism aftereffect are separate mechanisms with distinct effects on auditory spatial perception. Our results indicate that visual capture is a ‘sample-and-hold’ process that binds related objects and stores the combined percept in memory, whereas the ventriloquism aftereffect is a ‘leaky integrator’ process that accumulates with experience and decays with time to compensate for cross-modal disparities. PMID:27837258

  12. Radical “Visual Capture” Observed in a Patient with Severe Visual Agnosia

    Science.gov (United States)

    Takaiwa, Akiko; Yoshimura, Hirokazu; Abe, Hirofumi; Terai, Satoshi

    2003-01-01

    We report the case of a 79-year-old female with visual agnosia due to brain infarction in the left posterior cerebral artery. She could recognize objects used in daily life rather well by touch (the number of objects correctly identified was 16 out of 20 presented objects), but she could not recognize them as well by vision (6 out of 20). In this case, it was expected that she would recognize them well when permitted to use touch and vision simultaneously. Our patient, however, performed poorly, producing 5 correct answers out of 20 in the Vision-and-Touch condition. It would be natural to think that visual capture functions when vision and touch provide contradictory information on concrete positions and shapes. However, in the present case, it functioned in spite of the visual deficit in recognizing objects. This should be called radical visual capture. By presenting detailed descriptions of her symptoms and neuropsychological and neuroradiological data, we clarify the characteristics of this type of capture. PMID:12719638

  13. Radical “Visual Capture” Observed in a Patient with Severe Visual Agnosia

    Directory of Open Access Journals (Sweden)

    Akiko Takaiwa

    2003-01-01

    Full Text Available We report the case of a 79-year-old female with visual agnosia due to brain infarction in the left posterior cerebral artery. She could recognize objects used in daily life rather well by touch (the number of objects correctly identified was 16 out of 20 presented objects), but she could not recognize them as well by vision (6 out of 20). In this case, it was expected that she would recognize them well when permitted to use touch and vision simultaneously. Our patient, however, performed poorly, producing 5 correct answers out of 20 in the Vision-and-Touch condition. It would be natural to think that visual capture functions when vision and touch provide contradictory information on concrete positions and shapes. However, in the present case, it functioned in spite of the visual deficit in recognizing objects. This should be called radical visual capture. By presenting detailed descriptions of her symptoms and neuropsychological and neuroradiological data, we clarify the characteristics of this type of capture.

  14. [Quality of life in visual impaired children treated for Early Visual Stimulation].

    Science.gov (United States)

    Messa, Alcione Aparecida; Nakanami, Célia Regina; Lopes, Marcia Caires Bestilleiro

    2012-01-01

    To evaluate the quality of life of visually impaired children followed in the Early Visual Stimulation Ambulatory of Unifesp at two moments, before and after the rehabilitational intervention of a multiprofessional team. The CVFQ quality-of-life questionnaire was used. This instrument has a version for children younger than three years and another for children older than three years (three to seven years), divided into six subscales: general health, general vision health, competence, personality, family impact, and treatment. The correlation between the subscales at the two moments was significant. There was a statistically significant difference in general vision health (p=0.029), and other important differences were obtained in general health, family impact, and the general quality-of-life score. The questionnaire proved effective for measuring vision-related quality of life in the families followed in this ambulatory. The multidisciplinary interventions improved visual function and family quality of life. The vision-related quality of life of children followed in the Early Visual Stimulation Ambulatory of Unifesp showed a significant improvement in general vision health.

  15. Multimodal assessment of visual attention using the Bethesda Eye & Attention Measure (BEAM).

    Science.gov (United States)

    Ettenhofer, Mark L; Hershaw, Jamie N; Barry, David M

    2016-01-01

    Computerized cognitive tests measuring manual response time (RT) and errors are often used in the assessment of visual attention. Evidence suggests that saccadic RT and errors may also provide valuable information about attention. This study was conducted to examine a novel approach to multimodal assessment of visual attention incorporating concurrent measurements of saccadic eye movements and manual responses. A computerized cognitive task, the Bethesda Eye & Attention Measure (BEAM) v.34, was designed to evaluate key attention networks through concurrent measurement of saccadic and manual RT and inhibition errors. Results from a community sample of n = 54 adults were analyzed to examine effects of BEAM attention cues on manual and saccadic RT and inhibition errors, internal reliability of BEAM metrics, relationships between parallel saccadic and manual metrics, and relationships of BEAM metrics to demographic characteristics. Effects of BEAM attention cues (alerting, orienting, interference, gap, and no-go signals) were consistent with previous literature examining key attention processes. However, corresponding saccadic and manual measurements were weakly related to each other, and only manual measurements were related to estimated verbal intelligence or years of education. This study provides preliminary support for the feasibility of multimodal assessment of visual attention using the BEAM. Results suggest that BEAM saccadic and manual metrics provide divergent measurements. Additional research will be needed to obtain comprehensive normative data, to cross-validate BEAM measurements with other indicators of neural and cognitive function, and to evaluate the utility of these metrics within clinical populations of interest.

  16. 3D Stereoscopic Visualization of Fenestrated Stent Grafts

    International Nuclear Information System (INIS)

    Sun Zhonghua; Squelch, Andrew; Bartlett, Andrew; Cunningham, Kylie; Lawrence-Brown, Michael

    2009-01-01

    The purpose of this study was to present a technique of stereoscopic visualization in the evaluation of patients with abdominal aortic aneurysm treated with fenestrated stent grafts compared with conventional 2D visualizations. Two patients with abdominal aortic aneurysm undergoing fenestrated stent grafting were selected for inclusion in the study. Conventional 2D views including axial, multiplanar reformation, maximum-intensity projection, and volume rendering and 3D stereoscopic visualizations were assessed by two experienced reviewers independently with regard to the treatment outcomes of fenestrated repair. Interobserver agreement was assessed with Kendall's W statistic. Multiplanar reformation and maximum-intensity projection visualizations were scored the highest in the evaluation of parameters related to the fenestrated stent grafting, while 3D stereoscopic visualization was scored as valuable in the evaluation of appearance (any distortions) of the fenestrated stent. Volume rendering was found to play a limited role in the follow-up of fenestrated stent grafting. 3D stereoscopic visualization adds additional information that assists endovascular specialists to identify any distortions of the fenestrated stents when compared with 2D visualizations.

  17. Multisensory training can promote or impede visual perceptual learning of speech stimuli: visual-tactile vs. visual-auditory training.

    Science.gov (United States)

    Eberhardt, Silvio P; Auer, Edward T; Bernstein, Lynne E

    2014-01-01

    In a series of studies we have been investigating how multisensory training affects unisensory perceptual learning with speech stimuli. Previously, we reported that audiovisual (AV) training with speech stimuli can promote auditory-only (AO) perceptual learning in normal-hearing adults but can impede learning in congenitally deaf adults with late-acquired cochlear implants. Here, impeder and promoter effects were sought in normal-hearing adults who participated in lipreading training. In Experiment 1, visual-only (VO) training on paired associations between CVCVC nonsense word videos and nonsense pictures demonstrated that VO words could be learned to a high level of accuracy even by poor lipreaders. In Experiment 2, visual-auditory (VA) training in the same paradigm but with the addition of synchronous vocoded acoustic speech impeded VO learning of the stimuli in the paired-associates paradigm. In Experiment 3, the vocoded AO stimuli were shown to be less informative than the VO speech. Experiment 4 combined vibrotactile speech stimuli with the visual stimuli during training. Vibrotactile stimuli were shown to promote visual perceptual learning. In Experiment 5, no-training controls were used to show that training with visual speech carried over to consonant identification of untrained CVCVC stimuli but not to lipreading words in sentences. Across this and previous studies, multisensory training effects depended on the functional relationship between pathways engaged during training. Two principles are proposed to account for stimulus effects: (1) Stimuli presented to the trainee's primary perceptual pathway will impede learning by a lower-rank pathway. (2) Stimuli presented to the trainee's lower rank perceptual pathway will promote learning by a higher-rank pathway. The mechanisms supporting these principles are discussed in light of multisensory reverse hierarchy theory (RHT).

  18. Attention and visual memory in visualization and computer graphics.

    Science.gov (United States)

    Healey, Christopher G; Enns, James T

    2012-07-01

    A fundamental goal of visualization is to produce images of data that support visual analysis, exploration, and discovery of novel insights. An important consideration during visualization design is the role of human visual perception. How we "see" details in an image can directly impact a viewer's efficiency and effectiveness. This paper surveys research on attention and visual perception, with a specific focus on results that have direct relevance to visualization and visual analytics. We discuss theories of low-level visual perception, then show how these findings form a foundation for more recent work on visual memory and visual attention. We conclude with a brief overview of how knowledge of visual attention and visual memory is being applied in visualization and graphics. We also discuss how challenges in visualization are motivating research in psychophysics.

  19. Testing of Visual Field with Virtual Reality Goggles in Manual and Visual Grasp Modes

    Directory of Open Access Journals (Sweden)

    Dariusz Wroblewski

    2014-01-01

    Full Text Available Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4–6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients’ acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.

  20. Testing of visual field with virtual reality goggles in manual and visual grasp modes.

    Science.gov (United States)

    Wroblewski, Dariusz; Francis, Brian A; Sadun, Alfredo; Vakili, Ghazal; Chopra, Vikas

    2014-01-01

    Automated perimetry is used for the assessment of visual function in a variety of ophthalmic and neurologic diseases. We report development and clinical testing of a compact, head-mounted, and eye-tracking perimeter (VirtualEye) that provides a more comfortable test environment than the standard instrumentation. VirtualEye performs the equivalent of a full threshold 24-2 visual field in two modes: (1) manual, with patient response registered with a mouse click, and (2) visual grasp, where the eye tracker senses change in gaze direction as evidence of target acquisition. 59 patients successfully completed the test in manual mode and 40 in visual grasp mode, with 59 undergoing the standard Humphrey field analyzer (HFA) testing. Large visual field defects were reliably detected by VirtualEye. Point-by-point comparison between the results obtained with the different modalities indicates: (1) minimal systematic differences between measurements taken in visual grasp and manual modes, (2) the average standard deviation of the difference distributions of about 5 dB, and (3) a systematic shift (of 4-6 dB) to lower sensitivities for VirtualEye device, observed mostly in high dB range. The usability survey suggested patients' acceptance of the head-mounted device. The study appears to validate the concepts of a head-mounted perimeter and the visual grasp mode.

  1. Invariant visual object and face recognition: neural and computational bases, and a model, VisNet

    Directory of Open Access Journals (Sweden)

    Edmund T eRolls

    2012-06-01

    Full Text Available Neurophysiological evidence for invariant representations of objects and faces in the primate inferior temporal visual cortex is described. Then a computational approach to how invariant representations are formed in the brain is described that builds on the neurophysiology. A feature hierarchy model in which invariant representations can be built by self-organizing learning based on the temporal and spatial statistics of the visual input produced by objects as they transform in the world is described. VisNet can use temporal continuity in an associative synaptic learning rule with a short term memory trace, and/or it can use spatial continuity in Continuous Spatial Transformation learning which does not require a temporal trace. The model of visual processing in the ventral cortical stream can build representations of objects that are invariant with respect to translation, view, size, and also lighting. The model has been extended to provide an account of invariant representations in the dorsal visual system of the global motion produced by objects such as looming, rotation, and object-based movement. The model has been extended to incorporate top-down feedback connections to model the control of attention by biased competition in, for example, spatial and object search tasks. The model has also been extended to account for how the visual system can select single objects in complex visual scenes, and how multiple objects can be represented in a scene. The model has also been extended to provide, with an additional layer, for the development of representations of spatial scenes of the type found in the hippocampus.
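
    The short-term-memory trace rule mentioned above is commonly written as a Hebbian update gated by an exponentially decaying trace of the post-synaptic firing. The sketch below shows one standard formulation in NumPy; it is a schematic illustration under that assumption, not the actual VisNet implementation, and the parameter values and random inputs are arbitrary.

      import numpy as np

      def trace_rule_update(w, x, y, y_trace_prev, alpha=0.05, eta=0.8):
          """One step of a short-term-memory trace learning rule (schematic).

          The trace blends the current post-synaptic firing with its recent history,
          so successive transformed views of the same object reinforce the same
          weights, which is how invariance can be built up.
          """
          y_trace = (1.0 - eta) * y + eta * y_trace_prev
          w_new = w + alpha * y_trace * x                  # Hebbian update gated by the trace
          return w_new / np.linalg.norm(w_new), y_trace    # keep weights bounded

      rng = np.random.default_rng(0)
      w = rng.normal(size=16)
      w /= np.linalg.norm(w)
      y_trace = 0.0
      for view in rng.normal(size=(10, 16)):               # 10 transformed "views" of an object
          y = float(w @ view)
          w, y_trace = trace_rule_update(w, view, y, y_trace)
      print(w[:4])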

  2. Teach yourself visually Fire tablets

    CERN Document Server

    Marmel, Elaine

    2014-01-01

    Expert visual guidance to getting the most out of your Fire tablet Teach Yourself VISUALLY Fire Tablets is the comprehensive guide to getting the most out of your new Fire tablet. Learn to find and read new bestsellers through the Kindle app, browse the app store to find top games, surf the web, send e-mail, shop online, and much more! With expert guidance laid out in a highly visual style, this book is perfect for those new to the Fire tablet, providing all the information you need to get the most out of your device. Abundant screenshots of the Fire tablet graphically rich, touch-based Androi

  3. VASCo: computation and visualization of annotated protein surface contacts

    Directory of Open Access Journals (Sweden)

    Thallinger Gerhard G

    2009-01-01

    Full Text Available Background: Structural data from crystallographic analyses contain a vast amount of information on protein-protein contacts. Knowledge of protein-protein interactions is essential for understanding many processes in living cells. The methods to investigate these interactions range from genetics to biophysics, crystallography, bioinformatics, and computer modeling. Crystal contact information can also be useful for understanding biologically relevant protein oligomerisation, as both rely in principle on the same physico-chemical interaction forces. Visualization of crystal and biological contact data, including different surface properties, can help to analyse protein-protein interactions. Results: VASCo is a program package for the calculation of protein surface properties and the visualization of annotated surfaces. Special emphasis is laid on protein-protein interactions, which are calculated based on surface point distances. The same approach is used to compare the surfaces of two aligned molecules. Molecular properties such as electrostatic potential or hydrophobicity are mapped onto these surface points. Molecular surfaces and the corresponding properties are calculated using well-established programs integrated into the package, as well as custom-developed programs. The modular package can easily be extended to include new properties for annotation. The output of the program is most conveniently displayed in PyMOL using a custom-made plug-in. Conclusion: VASCo supplements other available protein contact visualisation tools and provides additional information on biological interactions as well as on crystal contacts. The tool provides a unique feature to compare the surfaces of two aligned molecules based on point distances and thereby facilitates the visualization and analysis of surface differences.
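
    The point-distance idea behind the contact calculation described above can be sketched with a k-d tree: every surface point of one molecule is flagged as a contact if its nearest neighbour on the other molecule's surface lies within a cutoff. The cutoff value and the random point clouds below are placeholders, not VASCo's actual parameters or surfaces.

      import numpy as np
      from scipy.spatial import cKDTree

      def surface_contacts(points_a, points_b, cutoff=1.5):
          """Flag surface points of molecule A that lie within `cutoff` of any
          surface point of molecule B (the point-distance idea VASCo builds on)."""
          tree_b = cKDTree(points_b)
          dist, _ = tree_b.query(points_a)      # nearest B-point for every A-point
          return dist <= cutoff                 # boolean contact mask over A's surface

      rng = np.random.default_rng(1)
      surf_a = rng.uniform(0, 10, size=(500, 3))   # hypothetical surface point clouds
      surf_b = rng.uniform(8, 18, size=(500, 3))
      mask = surface_contacts(surf_a, surf_b)
      print(f"{mask.sum()} of {len(mask)} points in contact")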

  4. Synergy Maps: exploring compound combinations using network-based visualization.

    Science.gov (United States)

    Lewis, Richard; Guha, Rajarshi; Korcsmaros, Tamás; Bender, Andreas

    2015-01-01

    The phenomenon of super-additivity of biological response to compounds applied jointly, termed synergy, has the potential to provide many therapeutic benefits. Therefore, high throughput screening of compound combinations has recently received a great deal of attention. Large compound libraries and the feasibility of all-pairs screening can easily generate large, information-rich datasets. Previously, these datasets have been visualized using either a heat-map or a network approach; however, these visualizations only partially represent the information encoded in the dataset. A new visualization technique for pairwise combination screening data, termed "Synergy Maps", is presented. In a Synergy Map, information about the synergistic interactions of compounds is integrated with information about their properties (chemical structure, physicochemical properties, bioactivity profiles) to produce a single visualization. As a result, the relationships between compound and combination properties may be investigated simultaneously, and thus may afford insight into the synergy observed in the screen. An interactive web app implementation, available at http://richlewis42.github.io/synergy-maps, has been developed for public use, which may find use in navigating and filtering larger scale combination datasets. This tool is applied to a recent all-pairs dataset of anti-malarials, tested against Plasmodium falciparum, and a preliminary analysis is given as an example, illustrating the disproportionate synergism of histone deacetylase inhibitors previously described in the literature, as well as suggesting new hypotheses for future investigation. Synergy Maps improve the state of the art in compound combination visualization by simultaneously representing individual compound properties and their interactions. The web-based tool allows straightforward exploration of combination data and easier identification of correlations between compound properties and interactions.
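
    Super-additivity as described above is often quantified with a score such as the Bliss-independence excess; the sketch below computes that score for a few invented compound pairs and keeps only synergistic edges in a NetworkX graph, roughly the data structure a Synergy Map is drawn from. The score choice, threshold, and data are assumptions, not taken from the paper.

      import networkx as nx

      def bliss_excess(effect_a, effect_b, effect_ab):
          """Bliss-independence excess: positive values indicate super-additivity.
          (One common synergy score; the paper's own metric may differ.)"""
          expected = effect_a + effect_b - effect_a * effect_b
          return effect_ab - expected

      # Hypothetical pairwise screen results (fractional inhibition in [0, 1]).
      screen = [
          ("cpd_A", "cpd_B", 0.30, 0.40, 0.75),
          ("cpd_A", "cpd_C", 0.30, 0.20, 0.42),
          ("cpd_B", "cpd_C", 0.40, 0.20, 0.55),
      ]

      g = nx.Graph()
      for a, b, ea, eb, eab in screen:
          s = bliss_excess(ea, eb, eab)
          if s > 0.05:                        # keep only clearly synergistic pairs
              g.add_edge(a, b, synergy=round(s, 3))

      print(list(g.edges(data=True)))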

  5. An approach for the development of visual configuration systems

    DEFF Research Database (Denmark)

    Hvam, Lars; Ladeby, Klaes Rohde

    2007-01-01

    How can a visual configuration system be developed to support the specification process in companies that manufacture customer-tailored products? This article focuses on how visual configuration systems can be developed. The approach for developing visual configuration systems has been developed...... by Centre for Product Modelling (CPM) at The Technical University of Denmark. The approach is based on experiences from a visualization project in co-operation between CPM and the global provider of power protection American Power Conversion (APC). The visual configuration system was developed in 2001...... of the product in the visual configuration system.

  6. MEVA--An Interactive Visualization Application for Validation of Multifaceted Meteorological Data with Multiple 3D Devices.

    Directory of Open Access Journals (Sweden)

    Carolin Helbig

    Full Text Available To achieve more realistic simulations, meteorologists develop and use models with increasing spatial and temporal resolution. Analyzing, comparing, and visualizing the resulting simulations becomes more and more challenging due to the growing amounts and multifaceted character of the data. Various data sources, numerous variables, and multiple simulations lead to a complex database. Although a variety of software exists suited for the visualization of meteorological data, none of them fulfills all of the typical domain-specific requirements: support for quasi-standard data formats and different grid types, standard visualization techniques for scalar and vector data, visualization of the context (e.g., topography and other static data), support for multiple presentation devices used in modern sciences (e.g., virtual reality), a user-friendly interface, and suitability for cooperative work. Instead of attempting to develop yet another new visualization system to fulfill all possible needs in this application domain, our approach is to provide a flexible workflow that combines different existing state-of-the-art visualization software components in order to hide the complexity of 3D data visualization tools from the end user. To complete the workflow and to enable the domain scientists to interactively visualize their data without advanced skills in 3D visualization systems, we developed a lightweight custom visualization application (MEVA - multifaceted environmental data visualization application) that supports the most relevant visualization and interaction techniques and can be easily deployed. Specifically, our workflow combines a variety of different data abstraction methods provided by a state-of-the-art 3D visualization application with the interaction and presentation features of a computer-games engine. Our customized application includes solutions for the analysis of multirun data, specifically with respect to data uncertainty and

  7. Three dimensional visualization of medical images

    International Nuclear Information System (INIS)

    Suto, Yasuzo

    1992-01-01

    Three dimensional visualization is a stereoscopic technique that supports the diagnosis and treatment of anatomically complicated sites in bones and organs. In this article, the current status and technical applications of three dimensional visualization are introduced, with special reference to X-ray CT and MRI. The surface display technique is the most common approach to three dimensional visualization, consisting of geometric model, voxel element, and stereographic composition techniques. Recent attention has been paid to a method for displaying the interior of the subject, called volume rendering, whereby information on the living body is provided accurately. The application of three dimensional visualization is described in terms of diagnostic imaging and surgical simulation. (N.K.)

  8. BoreholeAR: A mobile tablet application for effective borehole database visualization using an augmented reality technology

    Science.gov (United States)

    Lee, Sangho; Suh, Jangwon; Park, Hyeong-Dong

    2015-03-01

    Boring logs are widely used in geological field studies since the data describe various attributes of underground and surface environments. However, it is difficult to manage multiple boring logs in the field, as conventional management and visualization methods are not suitable for integrating and combining large data sets. We developed an iPad application that enables users to search borehole logs rapidly and visualize them using an augmented reality (AR) technique. For the development of the application, a standard borehole database appropriate for a mobile-based borehole database management system was designed. The application consists of three modules: an AR module, a map module, and a database module. The AR module superimposes borehole data on camera imagery as viewed by the user and provides intuitive visualization of borehole locations. The map module shows the locations of the corresponding borehole data on a 2D map with additional map layers. The database module provides data management functions for large borehole databases for the other modules. A field survey was also carried out using more than 100,000 borehole records.
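
    The geometric core of an AR module like the one described above is to turn the device's GPS fix and compass heading, together with a borehole's coordinates, into a screen position for the overlay. The sketch below does this under simple assumptions (spherical Earth, pinhole camera with a fixed horizontal field of view); all coordinates and the field-of-view value are illustrative, not taken from BoreholeAR.

      import math

      def bearing_and_distance(lat1, lon1, lat2, lon2):
          """Initial great-circle bearing (deg) and distance (m) from device to borehole."""
          R = 6371000.0
          phi1, phi2 = math.radians(lat1), math.radians(lat2)
          dlam = math.radians(lon2 - lon1)
          y = math.sin(dlam) * math.cos(phi2)
          x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
          bearing = (math.degrees(math.atan2(y, x)) + 360.0) % 360.0
          dphi = phi2 - phi1                     # haversine distance
          a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
          return bearing, 2 * R * math.asin(math.sqrt(a))

      def screen_x(bearing, heading, h_fov=60.0, width_px=1024):
          """Horizontal pixel position of the overlay for a camera pointing at `heading`."""
          offset = (bearing - heading + 540.0) % 360.0 - 180.0   # signed angle in degrees
          return width_px / 2 + (offset / h_fov) * width_px

      # Hypothetical device position, borehole location, and compass heading.
      b, d = bearing_and_distance(37.5665, 126.9780, 37.5700, 126.9820)
      print(round(b, 1), "deg,", round(d), "m  ->  x =", round(screen_x(b, heading=30.0)))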

  9. Visual Climate Knowledge Discovery within a Grid Environment

    Science.gov (United States)

    Heitzler, Magnus; Kiertscher, Simon; Lang, Ulrich; Nocke, Thomas; Wahnes, Jens; Winkelmann, Volker

    2013-04-01

    The C3Grid-INAD project aims to provide a common grid infrastructure for the climate science community to improve access to climate-related data and domain workflows via the Internet. To make sense of the heterogeneous, often large-sized, or even dynamically generated and modified files originating from C3Grid, highly flexible and user-friendly analysis software is needed that runs on different high-performance computing nodes within the grid environment when requested by a user. Because visual analysis tools directly address human visual perception and are therefore considered highly intuitive, two distinct visualization workflows have been integrated in C3Grid-INAD, targeting different application backgrounds. First, a GrADS-based workflow enables the ad-hoc visualization of selected datasets with respect to data source, temporal and spatial extent, and variables of interest. Being low in resource demands, this workflow allows users to gain fast insights through basic spatial visualization. For more advanced visual analysis purposes, a second workflow enables the user to start a visualization session via Virtual Network Computing (VNC) and VirtualGL to access high-performance computing nodes on which a wide variety of different visual analysis tools are provided. These are made available using the easy-to-use software system SimEnvVis. Considering metadata as well as user preferences and analysis goals, SimEnvVis evaluates the attached tools and launches the selected visual analysis tool by providing a dynamically parameterized template. This approach facilitates the selection of the most suitable tools and at the same time eases the process of familiarization with them. Because of the higher demand for computational resources, SimEnvVis sessions are restricted to a smaller set of users at a time. This architecture enables climate scientists not only to remotely access, but also to visually analyze highly heterogeneous data originating from C3

  10. Visual signal quality assessment quality of experience (QOE)

    CERN Document Server

    Ma, Lin; Lin, Weisi; Ngan, King

    2015-01-01

    This book provides comprehensive coverage of the latest trends/advances in subjective and objective quality evaluation for traditional visual signals, such as 2D images and video, as well as the most recent challenges for the field of multimedia quality assessment and processing, such as mobile video and social media. Readers will learn how to ensure the highest storage/delivery/transmission quality of visual content (including image, video, graphics, animation, etc.) from the server to the consumer, under resource constraints, such as computation, bandwidth, storage space, battery life, etc. Provides an overview of quality assessment for traditional visual signals; Covers newly emerged visual signals such as social media, 3D image/video, mobile video, high dynamic range (HDR) images, graphics/animation, etc., which demand better quality of experience (QoE); Helps readers to develop better quality metrics and processing methods for newly emerged visual signals; Enables testing, optimizing, benchmarking...

  11. Visualization in scientific computing

    National Research Council Canada - National Science Library

    Nielson, Gregory M; Shriver, Bruce D; Rosenblum, Lawrence J

    1990-01-01

    The purpose of this text is to provide a reference source to scientists, engineers, and students who are new to scientific visualization or who are interested in expanding their knowledge in this subject...

  12. Visual working memory is more tolerant than visual long-term memory.

    Science.gov (United States)

    Schurgin, Mark W; Flombaum, Jonathan I

    2018-05-07

    Human visual memory is tolerant, meaning that it supports object recognition despite variability across encounters at the image level. Tolerant object recognition remains one capacity in which artificial intelligence trails humans. Typically, tolerance is described as a property of human visual long-term memory (VLTM). In contrast, visual working memory (VWM) is not usually ascribed a role in tolerant recognition, with tests of that system usually demanding discriminatory power-identifying changes, not sameness. There are good reasons to expect that VLTM is more tolerant; functionally, recognition over the long-term must accommodate the fact that objects will not be viewed under identical conditions; and practically, the passive and massive nature of VLTM may impose relatively permissive criteria for thinking that two inputs are the same. But empirically, tolerance has never been compared across working and long-term visual memory. We therefore developed a novel paradigm for equating encoding and test across different memory types. In each experiment trial, participants saw two objects, memory for one tested immediately (VWM) and later for the other (VLTM). VWM performance was better than VLTM and remained robust despite the introduction of image and object variability. In contrast, VLTM performance suffered linearly as more variability was introduced into test stimuli. Additional experiments excluded interference effects as causes for the observed differences. These results suggest the possibility of a previously unidentified role for VWM in the acquisition of tolerant representations for object recognition. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  13. MEG/EEG source reconstruction, statistical evaluation, and visualization with NUTMEG.

    Science.gov (United States)

    Dalal, Sarang S; Zumer, Johanna M; Guggisberg, Adrian G; Trumpis, Michael; Wong, Daniel D E; Sekihara, Kensuke; Nagarajan, Srikantan S

    2011-01-01

    NUTMEG is a source analysis toolbox geared towards cognitive neuroscience researchers using MEG and EEG, including intracranial recordings. Evoked and unaveraged data can be imported to the toolbox for source analysis in either the time or time-frequency domains. NUTMEG offers several variants of adaptive beamformers, probabilistic reconstruction algorithms, as well as minimum-norm techniques to generate functional maps of spatiotemporal neural source activity. Lead fields can be calculated from single and overlapping sphere head models or imported from other software. Group averages and statistics can be calculated as well. In addition to data analysis tools, NUTMEG provides a unique and intuitive graphical interface for visualization of results. Source analyses can be superimposed onto a structural MRI or headshape to provide a convenient visual correspondence to anatomy. These results can also be navigated interactively, with the spatial maps and source time series or spectrogram linked accordingly. Animations can be generated to view the evolution of neural activity over time. NUTMEG can also display brain renderings and perform spatial normalization of functional maps using SPM's engine. As a MATLAB package, the end user may easily link with other toolboxes or add customized functions.
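
    As a schematic illustration of the simplest of the adaptive beamformers mentioned above, the sketch below computes unit-gain LCMV weights for a single source orientation from a sensor covariance matrix and a lead-field column. It is a generic textbook formulation with invented data, not NUTMEG's own code or default settings.

      import numpy as np

      def lcmv_weights(cov, leadfield, reg=0.05):
          """Unit-gain LCMV beamformer weights for one source orientation (schematic).

          cov:       (n_channels, n_channels) sensor covariance
          leadfield: (n_channels,) forward field of the candidate source
          """
          n = cov.shape[0]
          c = cov + reg * np.trace(cov) / n * np.eye(n)   # simple diagonal regularization
          c_inv_l = np.linalg.solve(c, leadfield)
          return c_inv_l / (leadfield @ c_inv_l)

      rng = np.random.default_rng(2)
      data = rng.normal(size=(64, 1000))      # hypothetical 64-channel recording
      cov = np.cov(data)
      lf = rng.normal(size=64)                # hypothetical lead-field column
      w = lcmv_weights(cov, lf)
      source_ts = w @ data                    # reconstructed source time course
      print(source_ts.shape)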

  14. Peranan Komunikasi Visual bagi Identitas Perusahaan

    Directory of Open Access Journals (Sweden)

    Laura Christina Luzar

    2013-04-01

    Full Text Available In the current era of globalization, as markets expand, many companies are competing to attract consumers' attention and persuade them to buy their products. One way to compete and survive in this growing market is by creating an image and a visual identity. A strong character can position a company's image, and a visual identity is necessary to convey the image the company wants to introduce to the public. The vigorous competition between firms makes visual identity a prominent feature of each company. Therefore, a visual communication designer is needed who can create and develop the concept of a corporate identity system. The visual communication designer is also responsible for turning the identity into a system that does not sell directly, but rather provides identity and information, persuades, and ultimately serves as an effective marketing medium.

  15. Visual Literacy and Visual Thinking.

    Science.gov (United States)

    Hortin, John A.

    It is proposed that visual literacy be defined as the ability to understand (read) and use (write) images and to think and learn in terms of images. This definition includes three basic principles: (1) visuals are a language and thus analogous to verbal language; (2) a visually literate person should be able to understand (read) images and use…

  16. Visual Literacy and Visual Culture.

    Science.gov (United States)

    Messaris, Paul

    Familiarity with specific images or sets of images plays a role in a culture's visual heritage. Two questions can be asked about this type of visual literacy: Is this a type of knowledge that is worth building into the formal educational curriculum of our schools? What are the educational implications of visual literacy? There is a three-part…

  17. Assessment of visual disability using visual evoked potentials.

    Science.gov (United States)

    Jeon, Jihoon; Oh, Seiyul; Kyung, Sungeun

    2012-08-06

    The purpose of this study is to validate the use of the visual evoked potential (VEP) to objectively quantify visual acuity in normal and amblyopic patients, and to determine whether it is possible to predict visual acuity in disability assessments for registering visual pathway lesions. A retrospective chart review was conducted of patients diagnosed with normal vision, unilateral amblyopia, optic neuritis, and visual disability who visited the university medical center for registration from March 2007 to October 2009. The study included 20 normal subjects (20 right eyes: 10 females, 10 males, ages 9-42 years), 18 unilateral amblyopic patients (18 amblyopic eyes, ages 19-36 years), 19 optic neuritis patients (19 eyes: ages 9-71 years), and 10 patients with visual disability having visual pathway lesions. Amplitudes and latencies were analyzed, and correlations with visual acuity (logMAR) were derived from the 20 normal and 18 amblyopic subjects. Correlation of VEP amplitude with visual acuity (logMAR) in the 19 optic neuritis patients confirmed the relationship between visual acuity and amplitude. We calculated the objective visual acuity (logMAR) of 16 eyes from 10 patients to diagnose the presence or absence of visual disability using the relation derived from the 20 normal and 18 amblyopic eyes. Linear regression analyses between the amplitude of pattern visual evoked potentials and visual acuity (logMAR) of 38 eyes from normal (right eyes) and amblyopic (amblyopic eyes) subjects were significant [y = -0.072x + 1.22, x: VEP amplitude, y: visual acuity (logMAR)]. There were no significant differences from the visual acuity prediction values obtained by substituting the amplitude values of the 19 optic neuritis eyes into this function. We calculated the objective visual acuity of 16 eyes of 10 patients to diagnose the presence or absence of visual disability using the relation y = -0.072x + 1.22. This resulted in a prediction reference of visual acuity associated with malingering vs. real
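
    The regression reported above maps pattern-VEP amplitude to predicted visual acuity, so the conversion is a one-line calculation; the sketch below applies the stated coefficients to a few example amplitudes (amplitude units as used in the study; the example values are arbitrary).

      def predicted_logmar(vep_amplitude, slope=-0.072, intercept=1.22):
          """Predicted visual acuity (logMAR) from pattern-VEP amplitude,
          using the regression reported in the abstract: y = -0.072x + 1.22."""
          return slope * vep_amplitude + intercept

      for amp in (2.0, 8.0, 14.0):            # arbitrary example amplitudes
          print(f"amplitude {amp:4.1f} -> logMAR {predicted_logmar(amp):+.2f}")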

  18. 3D visualization of port simulation.

    Energy Technology Data Exchange (ETDEWEB)

    Horsthemke, W. H.; Macal, C. M.; Nevins, M. R.

    1999-06-14

    Affordable and realistic three dimensional visualization technology can be applied to large scale constructive simulations such as the port simulation model, PORTSIM. These visualization tools enhance the experienced planner's ability to form mental models of how seaport operations will unfold when the simulation model is implemented and executed. They also offer unique opportunities to train new planners not only in the use of the simulation model but on the layout and design of seaports. Simulation visualization capabilities are enhanced by borrowing from work on interface design, camera control, and data presentation. Using selective fidelity, the designers of these visualization systems can reduce their time and efforts by concentrating on those features which yield the most value for their simulation. Offering the user various observational tools allows the freedom to simply watch or engage in the simulation without getting lost. Identifying the underlying infrastructure or cargo items with labels can provide useful information at the risk of some visual clutter. The PortVis visualization expands the PORTSIM user base which can benefit from the results provided by this capability, especially in strategic planning, mission rehearsal, and training. Strategic planners will immediately reap the benefits of seeing the impact of increased throughput visually without keeping track of statistical data. Mission rehearsal and training users will have an effective training tool to supplement their operational training exercises which are limited in number because of their high costs. Having another effective training modality in this visualization system allows more training to take place and more personnel to gain an understanding of seaport operations. This simulation and visualization training can be accomplished at lower cost than would be possible for the operational training exercises alone. The application of PORTSIM and PortVis will lead to more efficient

  19. A feast of visualization

    Science.gov (United States)

    2008-12-01

    Strength through structure The visualization and assessment of inner human bone structures can provide better predictions of fracture risk due to osteoporosis. Using micro-computed tomography (µCT), Christoph Räth from the Max Planck Institute for Extraterrestrial Physics and colleagues based in Munich, Vienna and Salzburg have shown how complex lattice-shaped bone structures can be visualized. The structures were quantified by calculating certain "texture measures" that yield new information about the stability of the bone. A 3D visualization showing the variation with orientation of one of the texture measures for four different bone specimens (from left to right) is shown above. Such analyses may help us to improve our understanding of disease and drug-induced changes in bone structure (C Räth et al. 2008 New J. Phys. 10 125010).

  20. Visual analytics of inherently noisy crowdsourced data on ultra high resolution displays

    Science.gov (United States)

    Huynh, Andrew; Ponto, Kevin; Lin, Albert Yu-Min; Kuester, Falko

    The increasing prevalence of distributed human microtasking, or crowdsourcing, has followed the exponential increase in data collection capabilities. The large scale and distributed nature of these microtasks produce overwhelming amounts of information that is inherently noisy due to the nature of human input. Furthermore, these inputs create a constantly changing dataset, with additional information added on a daily basis. Methods to quickly visualize, filter, and understand this information under temporal and geospatial constraints are key to the success of crowdsourcing. This paper presents novel methods to visually analyze geospatial data collected through crowdsourcing on top of remote sensing satellite imagery. An ultra high resolution tiled display system is used to explore the relationship between human and satellite remote sensing data at scale. A case study is provided that evaluates the presented technique in the context of an archaeological field expedition. A team in the field communicated in real time with, and was guided by, researchers in the remote visual analytics laboratory, swiftly sifting through incoming crowdsourced data to identify target locations as viable archaeological sites.

  1. [Dice test--a simple method for assessment of visual acuity in infants with visual deficits].

    Science.gov (United States)

    Rohrschneider, K; Brill, B; Bayer, Y; Ahrens, P

    2010-07-01

    Determination of visual acuity in low vision infants or patients with additional cerebral retardation is difficult. In our low vision department we used dice of different sizes and colors, as well as other defined objects, to determine visual acuity (VA). In this study we compared the results of the dice test with conventional tests for the measurement of visual acuity. A total of 88 children with different causes of visual impairment (e.g., albinism, retinal scars, retinopathy of prematurity (ROP), achromatopsia, and optic atrophy) were included in this longitudinal study. Median follow-up time was 8.7 years (range 2.9-18.9 years). The first reliable examination was performed between the ages of 4 and 24 months (median 11 months). We estimated VA from the edge length of the dice recognized at a distance of 30 cm, with a 4 mm edge corresponding to VA 20/200. Best corrected binocular visual acuity was compared between the dice test, measurement with the Lea symbols, and measurement with numbers or Landolt rings. Estimation of visual acuity using the dice test was possible at the end of the first year of life (median 11 months, range 4-27 months). Although observation is limited to visual acuity results in the low vision range between light reaction and 20/120, there was nearly complete agreement between all three VA measurements. Visual acuity ranged from light perception to 20/20 with a median of 20/100. In 39 patients visual acuity was 20/200 or less at the end of the observation period. The dice test overestimated visual acuity in only 5 of the 88 patients, while in all of the patients with later acuity measurements better than 20/200, our best value of 20/200 was achieved. Using simple visual objects, such as dice of different colors and sizes down to an edge length of 4 mm, it is possible to estimate visual acuity in low vision infants within the first year of life. This option is also very helpful in patients who are not able to perform other visual
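
    The abstract anchors the scale at a 4 mm edge length corresponding to 20/200 at 30 cm. Assuming, purely for illustration, that the estimated acuity scales inversely with the smallest recognized edge length (the paper does not state this rule explicitly), the conversion looks like the sketch below.

      def estimated_acuity(edge_mm, anchor_edge_mm=4.0, anchor_denominator=200):
          """Estimate Snellen acuity from the smallest recognized dice edge length at 30 cm,
          assuming acuity scales inversely with edge length and using the stated anchor
          (4 mm edge ~ 20/200). The scaling assumption is ours, not the paper's."""
          return 20, anchor_denominator * edge_mm / anchor_edge_mm

      for edge in (2.0, 4.0, 8.0, 16.0):
          num, den = estimated_acuity(edge)
          print(f"{edge:4.1f} mm  ->  ~{num}/{den:.0f}")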

  2. Making Libraries Accessible for Visually Impaired Users: Practical Advice For Librarians

    Directory of Open Access Journals (Sweden)

    Devney Hamilton

    2011-12-01

    This article provides an introduction to making university libraries accessible to visually impaired users. It includes a summary of how visually impaired students access information and how libraries can provide access to materials, devices and software, and staff support to ensure visually impaired students' equal opportunity to use the library. The practical advice for librarians is based on interviews with 18 visually impaired university students and professionals who specialize in media, library services and information retrieval.

  3. Scalable data management, analysis and visualization (SDAV) Institute. Final Scientific/Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Geveci, Berk [Kitware, Inc., Clifton Park, NY (United States)

    2017-03-28

    The purpose of the SDAV institute is to provide tools and expertise in scientific data management, analysis, and visualization to DOE’s application scientists. Our goal is to actively work with application teams to assist them in achieving breakthrough science, and to provide technical solutions in the data management, analysis, and visualization regimes that are broadly used by the computational science community. Over the last 5 years, members of our institute worked directly with application scientists and DOE leadership-class facilities to assist them by applying the best tools and technologies at our disposal. We also enhanced our tools based on input from scientists on their needs. Many of the applications we have been working with are based on connections with scientists established in previous years. However, we contacted additional scientists through our outreach activities, as well as engaging application teams running on leading DOE computing systems. Our approach is to employ an evolutionary development and deployment process: first considering the application of existing tools, followed by the customization necessary for each particular application, and then the deployment in real frameworks and infrastructures. The institute is organized into three areas, each with area leaders, who keep track of progress, engagement of application scientists, and results. The areas are: (1) Data Management, (2) Data Analysis, and (3) Visualization. Kitware has been involved in the Visualization area. This report covers Kitware’s contributions over the last 5 years (February 2012 – February 2017). For details on the work performed by the SDAV institute as a whole, please see the SDAV final report.

  4. Wavefront holoscopy: application of digital in-line holography for the inspection of engraved marks in progressive addition lenses.

    Science.gov (United States)

    Perucho, Beatriz; Micó, Vicente

    2014-01-01

    Progressive addition lenses (PALs) are engraved with permanent marks at standardized locations in order to guarantee correct centering and alignment throughout the manufacturing and mounting processes. Off the production line, the engraved marks provide useful information about the PAL and act as locator marks for re-inking the removable marks. Even though those marks should be visible by simple inspection with the naked eye, engraved marks are often faint and weak, obscured by scratches, partially occluded, and difficult to recognize on tinted or antireflection-coated lenses. Here, we present an extremely simple optical device (named the wavefront holoscope) for visualization and characterization of permanent marks in PALs based on digital in-line holography. Essentially, a point source of coherent light illuminates the engraved mark placed just before a CCD camera that records a classical Gabor in-line hologram. The recorded hologram is then digitally processed to provide a set of high-contrast images of the engraved marks. Experimental results are presented showing the applicability of the proposed method as a new ophthalmic instrument for the visualization and characterization of engraved marks in PALs.

  5. Appraisals of Salient Visual Elements in Web Page Design

    Directory of Open Access Journals (Sweden)

    Johanna M. Silvennoinen

    2016-01-01

    Visual elements in user interfaces elicit emotions in users and are, therefore, essential to users interacting with different software. Although there is research on the relationship between emotional experience and visual user interface design, the focus has been on the overall visual impression and not on individual visual elements. Additionally, in a software development process, programming and general usability guidelines are often considered the most important parts of the process. Therefore, knowledge of programmers' appraisals of visual elements can be utilized to understand the web page designs we interact with. In this study, appraisal theory of emotion is utilized to elaborate the relationship between emotional experience and visual elements from the programmers' perspective. Participants (N = 50) used 3E-templates to express their visual and emotional experiences of web page designs. Content analysis of the textual data illustrates how emotional experiences are elicited by salient visual elements. Eight hierarchical visual element categories were found and connected to various emotions, such as frustration, boredom, and calmness, via relational emotion themes. The emotional emphasis was on centered, symmetrical, and balanced composition, which was experienced as pleasant and calming. The results benefit user-centered visual interface design and researchers of visual aesthetics in human-computer interaction.

  6. Repetitive Transcranial Direct Current Stimulation Induced Excitability Changes of Primary Visual Cortex and Visual Learning Effects-A Pilot Study.

    Science.gov (United States)

    Sczesny-Kaiser, Matthias; Beckhaus, Katharina; Dinse, Hubert R; Schwenkreis, Peter; Tegenthoff, Martin; Höffken, Oliver

    2016-01-01

    Studies on noninvasive motor cortex stimulation and motor learning have demonstrated cortical excitability as a marker for a learning effect. Transcranial direct current stimulation (tDCS) is a non-invasive tool to modulate cortical excitability. It is as yet unknown how tDCS-induced excitability changes and perceptual learning in visual cortex correlate. Our study aimed to examine the influence of tDCS on visual perceptual learning in healthy humans. Additionally, we measured excitability in primary visual cortex (V1). We hypothesized that anodal tDCS would improve visual learning and that cathodal tDCS would have minor or no effects. Anodal, cathodal or sham tDCS were applied over V1 in a randomized, double-blinded design over four consecutive days (n = 30). During 20 min of tDCS, subjects had to learn a visual orientation-discrimination task (ODT). Excitability parameters were measured by analyzing the paired-stimulation behavior of visual-evoked potentials (ps-VEP) and by measuring phosphene thresholds (PTs) before and after the 4-day stimulation period. Compared with sham tDCS, anodal tDCS led to a significant improvement of visual discrimination learning. For cathodal tDCS, no significant effects on learning or on excitability could be seen. Our results showed that anodal tDCS over V1 resulted in improved visual perceptual learning and increased cortical excitability. tDCS is a promising tool to alter V1 excitability and, hence, perceptual visual learning.

  7. PANDA-view: An easy-to-use tool for statistical analysis and visualization of quantitative proteomics data.

    Science.gov (United States)

    Chang, Cheng; Xu, Kaikun; Guo, Chaoping; Wang, Jinxia; Yan, Qi; Zhang, Jian; He, Fuchu; Zhu, Yunping

    2018-05-22

    Compared with the numerous software tools developed for identification and quantification of -omics data, there remains a lack of suitable tools for both downstream analysis and data visualization. To help researchers better understand the biological meanings in their -omics data, we present an easy-to-use tool, named PANDA-view, for both statistical analysis and visualization of quantitative proteomics data and other -omics data. PANDA-view contains various kinds of analysis methods such as normalization, missing value imputation, statistical tests, clustering and principal component analysis, as well as the most commonly used data visualization methods including an interactive volcano plot. Additionally, it provides user-friendly interfaces for protein-peptide-spectrum representation of the quantitative proteomics data. Availability: PANDA-view is freely available at https://sourceforge.net/projects/panda-view/. Contact: 1987ccpacer@163.com and zhuyunping@gmail.com. Supplementary data are available at Bioinformatics online.
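
    As a rough illustration of the downstream steps the abstract lists (normalization, missing-value imputation, a statistical test, and a volcano plot), the Python sketch below runs them on an invented protein intensity matrix. It is not PANDA-view code; all column names, parameters, and data are assumptions for demonstration only.

```python
# Sketch of typical downstream steps on an invented quantitative proteomics
# matrix: log transform, median normalization, simple imputation, a t-test,
# and a volcano plot. This is not PANDA-view code.
import numpy as np
import pandas as pd
from scipy import stats
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
# Hypothetical protein intensity matrix: 500 proteins x 6 samples (3 vs 3).
data = pd.DataFrame(rng.lognormal(mean=10, sigma=1, size=(500, 6)),
                    columns=["ctrl_1", "ctrl_2", "ctrl_3",
                             "case_1", "case_2", "case_3"])
data = data.mask(rng.random(data.shape) < 0.05)   # inject ~5% missing values

log_data = np.log2(data)
log_data = log_data - log_data.median()           # per-sample median normalization
log_data = log_data.fillna(log_data.min().min())  # crude minimum-value imputation

ctrl, case = log_data.iloc[:, :3], log_data.iloc[:, 3:]
t_stat, p_val = stats.ttest_ind(case, ctrl, axis=1)
log2_fc = case.mean(axis=1) - ctrl.mean(axis=1)   # log2 fold change per protein

plt.scatter(log2_fc, -np.log10(p_val), s=5)
plt.xlabel("log2 fold change (case vs control)")
plt.ylabel("-log10 p-value")
plt.title("Volcano plot (illustrative)")
plt.show()
```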

  8. Visual memory, the long and the short of it: A review of visual working memory and long-term memory.

    Science.gov (United States)

    Schurgin, Mark W

    2018-04-23

    The majority of research on visual memory has taken a compartmentalized approach, focusing exclusively on memory over shorter or longer durations, that is, visual working memory (VWM) or visual episodic long-term memory (VLTM), respectively. This tutorial provides a review spanning the two areas, with readers in mind who may be familiar with only one or the other. The review is divided into six sections. It starts by distinguishing VWM and VLTM from one another, in terms of how they are generally defined and their relative functions. This is followed by a review of the major theories and methods guiding VLTM and VWM research. The final section is devoted to identifying points of overlap and distinction across the two literatures, to provide a synthesis that will inform future research in both fields. By more intimately relating methods and theories from VWM and VLTM to one another, new advances can be made that may shed light on the kinds of representational content and structure supporting human visual memory.

  9. EDITORIAL: Focus on Visualization in Physics

    Science.gov (United States)

    Sanders, Barry C.; Senden, Tim; Springel, Volker

    2008-12-01

    cosmology wherein this (in principle invisible) dark matter dominates the cosmic matter content. The advantages of visualization found for simulated data also hold for real world data. With the application of computerized acquisition, many scientific disciplines are witnessing exponential growth rates in the volume of accumulated raw data, which often makes it daunting to condense the information into a manageable form, a challenge that can be addressed by modern visualization techniques. Such visualizations are also often an enticing way to communicate scientific results to the general public. This need for visualization is especially true in basic science, with its reliance on a benevolent and interested general public, which drives the need for high-quality visualizations. Despite the widespread use of visualization, this technology has suffered from a lack of the unifying influence of shared common experiences. As with any emerging technology, practitioners have often independently found solutions to similar problems. It is the aim of this focus issue to celebrate the importance of visualization, report on its growing use by the broad community of physicists, including those working in biophysics, chemical physics, geophysics, astrophysics, and medical physics, and provide an opportunity for the diverse community of scientists using visualization to share work in one issue of a journal that is itself in the vanguard of supporting visualization and multimedia. A remarkable breadth and diversity of visualization in physics is to be found in this issue, spanning fundamental aspects of relativity theory to computational fluid dynamics. The topics span length scales from quantum phenomena to the entire observable Universe. We have been impressed by the quality of the submissions and hope that this snapshot will introduce, inform, motivate and maybe even help to unify visualization in physics. Readers are also directed to the December issue of Physics World which includes

  10. Eye movements in depth to visual illusions

    NARCIS (Netherlands)

    Wismeijer, D.A.

    2009-01-01

    We perceive the three-dimensional (3D) environment that surrounds us with deceptive effortlessness. In fact, we are far from comprehending how the visual system provides us with this stable perception of the (3D) world around us. This thesis will focus on the interplay between visual perception of

  11. Photovoltaic restoration of sight with high visual acuity

    Science.gov (United States)

    Lorach, Henri; Goetz, Georges; Smith, Richard; Lei, Xin; Mandel, Yossi; Kamins, Theodore; Mathieson, Keith; Huie, Philip; Harris, James; Sher, Alexander; Palanker, Daniel

    2015-01-01

    Patients with retinal degeneration lose sight due to gradual demise of photoreceptors. Electrical stimulation of the surviving retinal neurons provides an alternative route for the delivery of visual information. We demonstrate that subretinal arrays with 70 μm photovoltaic pixels provide highly localized stimulation, with electrical and visual receptive fields of comparable sizes in rat retinal ganglion cells. Similarly to normal vision, the retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies, adaptation to static images and non-linear spatial summation. In rats with retinal degeneration, these photovoltaic arrays provide a spatial resolution of 64 ± 11 μm, corresponding to half of the normal visual acuity in pigmented rats. The ease of implantation of these wireless and modular arrays, combined with their high resolution, opens the door to functional restoration of sight. PMID:25915832

  12. The impact of online visual on users' motivation and behavioural intention - A comparison between persuasive and non-persuasive visuals

    Science.gov (United States)

    Ibrahim, Nurulhuda; Shiratuddin, Mohd Fairuz; Wong, Kok Wai

    2016-08-01

    Research on first impressions has highlighted the importance of visual appeal in influencing a favourable attitude towards a website. From the perspective of impression formation, it is proposed that users are attracted to certain characteristics or aspects of the visual properties of a website, while ignoring the rest. Therefore, this study aims to investigate which visuals strongly appeal to users by comparing the impact of common visuals with that of persuasive visuals. The principles of social influence are proposed as the added value behind the persuasiveness of the web visuals. An experimental study was conducted and the PLS-SEM method was employed to analyse the obtained data. The results of the exploratory analyses demonstrated that the structural model has better quality when tested with the persuasive data sample than with the non-persuasive data sample, as evidenced by a stronger coefficient of determination and stronger path coefficients. Thus, it is concluded that persuasive visuals have a greater impact on users' attitude and behavioural intention towards a website.

  13. Visual Perceptual Learning and Models.

    Science.gov (United States)

    Dosher, Barbara; Lu, Zhong-Lin

    2017-09-15

    Visual perceptual learning through practice or training can significantly improve performance on visual tasks. Originally seen as a manifestation of plasticity in the primary visual cortex, perceptual learning is more readily understood as improvements in the function of brain networks that integrate processes, including sensory representations, decision, attention, and reward, and balance plasticity with system stability. This review considers the primary phenomena of perceptual learning, theories of perceptual learning, and perceptual learning's effect on signal and noise in visual processing and decision. Models, especially computational models, play a key role in behavioral and physiological investigations of the mechanisms of perceptual learning and for understanding, predicting, and optimizing human perceptual processes, learning, and performance. Performance improvements resulting from reweighting or readout of sensory inputs to decision provide a strong theoretical framework for interpreting perceptual learning and transfer that may prove useful in optimizing learning in real-world applications.

  14. Visual cues for data mining

    Science.gov (United States)

    Rogowitz, Bernice E.; Rabenhorst, David A.; Gerth, John A.; Kalin, Edward B.

    1996-04-01

    This paper describes a set of visual techniques, based on principles of human perception and cognition, which can help users analyze and develop intuitions about tabular data. Collections of tabular data are widely available, including, for example, multivariate time series data, customer satisfaction data, stock market performance data, multivariate profiles of companies and individuals, and scientific measurements. In our approach, we show how visual cues can help users perform a number of data mining tasks, including identifying correlations and interaction effects, finding clusters and understanding the semantics of cluster membership, identifying anomalies and outliers, and discovering multivariate relationships among variables. These cues are derived from psychological studies on perceptual organization, visual search, perceptual scaling, and color perception. These visual techniques are presented as a complement to the statistical and algorithmic methods more commonly associated with these tasks, and provide an interactive interface for the human analyst.
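
    A minimal sketch of the kind of perceptual cue described above: a scatter-plot matrix of synthetic tabular data in which color acts as a pre-attentive cue for cluster membership and outliers stand apart by position. The data and color mapping are invented for illustration and are not the authors' examples or tool.

```python
# Illustrative sketch of visual cues for tabular data mining: a scatter-plot
# matrix in which color marks cluster membership and outliers stand apart by
# position. The data are synthetic placeholders.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from pandas.plotting import scatter_matrix

rng = np.random.default_rng(4)
# Two synthetic clusters plus a few scattered outliers in a 3-variable table.
cluster_a = rng.normal([0, 0, 0], 1.0, size=(100, 3))
cluster_b = rng.normal([5, 5, 5], 1.0, size=(100, 3))
outliers = rng.uniform(-5, 12, size=(5, 3))
table = pd.DataFrame(np.vstack([cluster_a, cluster_b, outliers]),
                     columns=["var1", "var2", "var3"])
labels = ["a"] * 100 + ["b"] * 100 + ["outlier"] * 5
colors = [{"a": "tab:blue", "b": "tab:green", "outlier": "tab:red"}[l] for l in labels]

scatter_matrix(table, c=colors, diagonal="hist", alpha=0.8, figsize=(7, 7))
plt.suptitle("Clusters and outliers revealed by position and color")
plt.show()
```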

  15. Enhancing Visual Basic GUI Applications using VRML Scenes

    OpenAIRE

    Bala Dhandayuthapani Veerasamy

    2010-01-01

    Rapid Application Development (RAD) addresses the ever-expanding need for speedy development of computer application programs that are sophisticated, reliable, and full-featured. Visual Basic was the first RAD tool for the Windows operating system, and many people still say it is the best. To make Visual Basic 6 applications more visually attractive, this paper describes how to use VRML scenes within the Visual Basic environment.

  16. Qualitative Research Methods in Visual Communication. Case Study: Visual Networks in the Promotional Videos of the European Year of Volunteering

    Directory of Open Access Journals (Sweden)

    Camelia Cmeciu

    2013-05-01

    European Years are a means of promoting European issues at a macro and micro level. The objective of this paper is to examine the differences in the visual framing of the issue of volunteering at the European and national levels. The approach blends two qualitative research methods in visual communication: ATLAS.ti (computer-assisted qualitative data analysis software) and social semiotics. The results of our analysis highlight two network views on volunteering promoted through videos, a salience of transactional processes in the implementation of volunteering at the European and national levels, and a classification of various types of social practices specific to Romania. This study provides an insight into the way in which two different qualitative methods may be combined in order to provide a visual representation and interpretation of a European issue.

  17. Auditory and visual memory in musicians and nonmusicians.

    Science.gov (United States)

    Cohen, Michael A; Evans, Karla K; Horowitz, Todd S; Wolfe, Jeremy M

    2011-06-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory in musicians and nonmusicians using familiar music, spoken English, and visual objects. For both groups, memory for the auditory stimuli was inferior to memory for the visual objects. Thus, although considerable musical training is associated with better musical and nonmusical auditory memory, it does not increase the ability to remember sounds to the levels found with visual stimuli. This suggests a fundamental capacity difference between auditory and visual recognition memory, with a persistent advantage for the visual domain.

  18. Advances and limitations of visual conditioning protocols in harnessed bees.

    Science.gov (United States)

    Avarguès-Weber, Aurore; Mota, Theo

    2016-10-01

    Bees are excellent invertebrate models for studying visual learning and memory mechanisms, because of their sophisticated visual system and impressive cognitive capacities associated with a relatively simple brain. Visual learning in free-flying bees has traditionally been studied using an operant conditioning paradigm. This well-established protocol, however, can hardly be combined with invasive procedures for studying the neurobiological basis of visual learning. Different efforts have been made to develop protocols in which harnessed honey bees could associate visual cues with reinforcement, though learning performances remain poorer than those obtained with free-flying animals. Especially in the last decade, the intention of improving the visual learning performances of harnessed bees led many authors to adopt distinct visual conditioning protocols, altering parameters such as the harnessing method, the nature and duration of visual stimulation, the number of trials, and the inter-trial intervals, among others. As a result, the literature provides data that are hardly comparable and sometimes contradictory. In the present review, we provide an extensive analysis of the literature available on visual conditioning of harnessed bees, with special emphasis on the comparison of the diverse conditioning parameters adopted by different authors. Together with this comparative overview, we discuss how these diverse conditioning parameters could modulate the visual learning performances of harnessed bees. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. The phonological and visual basis of developmental dyslexia in Brazilian Portuguese reading children.

    Directory of Open Access Journals (Sweden)

    Giseli Donadon Germano

    2014-10-01

    Evidence from opaque languages suggests that visual attention processing abilities, in addition to phonological skills, may act as cognitive underpinnings of developmental dyslexia. We explored the role of these two cognitive abilities in reading fluency in Brazilian Portuguese, a more transparent orthography than French or English. Sixty-six dyslexic and normally reading Brazilian Portuguese children participated. They were administered three tasks of phonological skills (phoneme identification, phoneme blending and syllable blending) and three visual tasks (a letter global report task and two non-verbal tasks of visual closure and visual constancy). The results show that Brazilian Portuguese dyslexic children are impaired not only in phonological processing but also in visual processing. Phonological and visual processing abilities significantly and independently contribute to reading fluency in the whole population. Lastly, different cognitively homogeneous subtypes can be identified in the Brazilian Portuguese dyslexic population. Two subsets of dyslexic children were identified as having a single cognitive disorder, phonological or visual; another group exhibited a double deficit, and a few children showed no visual or phonological disorder. Thus the current findings extend previous data from more opaque orthographies such as French and English, showing the importance of investigating visual processing skills in addition to phonological skills in dyslexic children, whatever the transparency of their language's orthography.

  20. Current food chain information provides insufficient information for modern meat inspection of pigs.

    Science.gov (United States)

    Felin, Elina; Jukola, Elias; Raulo, Saara; Heinonen, Jaakko; Fredriksson-Ahomaa, Maria

    2016-05-01

    Meat inspection now incorporates a more risk-based approach for protecting human health against meat-borne biological hazards. Official post-mortem meat inspection of pigs has shifted to visual meat inspection. The official veterinarian decides on additional post-mortem inspection procedures, such as incisions and palpations. The decision is based on declarations in the food chain information (FCI), the ante-mortem inspection and the post-mortem inspection. However, a smooth slaughter and inspection process is essential. Therefore, one should be able to assess prior to slaughter which pigs are suitable for visual meat inspection only, and which need more profound inspection procedures. This study evaluated the usability of the FCI provided by pig producers and considered the possibility of risk ranking incoming slaughter batches according to previous meat inspection data and the current FCI. Eighty-five slaughter batches comprising 8954 fattening pigs were randomly selected at a slaughterhouse that receives animals from across Finland. The mortality rate, the FCI and the meat inspection results for each batch were obtained. The current FCI alone provided insufficient and inaccurate information for risk ranking purposes for meat inspection. The partial condemnation rate for a batch was best predicted by the partial condemnation rate calculated for all the pigs sent for slaughter from the same holding in the previous year (p<0.001) and by prior information on cough declared in the current FCI statement (p=0.02). Training and information for producers are needed to make the FCI reporting procedures more accurate. Historical meat inspection data on pigs slaughtered from the same holdings, together with well-chosen symptoms/signs for reporting, should be included in the FCI to facilitate the allocation of pigs to visual inspection. The introduced simple scoring system can easily be used to provide additional information for directing batches to appropriate meat inspection procedures.

  1. Visual ergonomics in the workplace.

    Science.gov (United States)

    Anshel, Jeffrey R

    2007-10-01

    This article provides information about visual function and its role in workplace productivity. By understanding the connection among comfort, health, and productivity, and knowing the many options for effective ergonomic workplace lighting, the occupational health nurse can be sensitive to potential visual stress that can affect all areas of performance. Computer vision syndrome, the eye and vision problems associated with near work experienced during or related to computer use, is defined and solutions to it are discussed.

  2. [Spectral sensitivity and visual pigments of the coastal crab Hemigrapsus sanguineus].

    Science.gov (United States)

    Shukoliukov, S A; Zak, P P; Kalamkarov, G R; Kalishevich, O O; Ostrovskiĭ, M A

    1980-01-01

    It has been shown that the compound eye of the coastal crab has one photosensitive pigment, rhodopsin, and two screening pigments, a black one and an orange one. The orange pigment has lambda max = 480 nm; rhodopsin in digitonin is stable towards the action of hydroxylamine, has lambda max = 490-495 nm and after bleaching is transformed into free retinene and opsin. Pigments with lambda max = 430 and 475 nm from the receptor part of the eye are also solubilized. These pigments are not photosensitive but they dissociate under the effect of hydroxylamine. The spectral sensitivity curve of the coastal crab has its basic maximum at approximately 525 nm and an additional one at 450 nm, which seems to be produced by the combination of the visual pigment rhodopsin (lambda max 500 nm) with a carotenoid filter (lambda max 480-490 nm). Specific features of the visual system of the coastal crab are discussed.

  3. Architecture of a spatial data service system for statistical analysis and visualization of regional climate changes

    Science.gov (United States)

    Titov, A. G.; Okladnikov, I. G.; Gordov, E. P.

    2017-11-01

    The use of large geospatial datasets in climate change studies requires the development of a set of Spatial Data Infrastructure (SDI) elements, including geoprocessing and cartographical visualization web services. This paper presents the architecture of a geospatial OGC web service system as an integral part of the general architecture of a virtual research environment (VRE) for statistical processing and visualization of meteorological and climatic data. The architecture is a set of interconnected standalone SDI nodes with corresponding data storage systems. Each node runs specialized software, such as a geoportal, cartographical web services (WMS/WFS), a metadata catalog, and a MySQL database of technical metadata describing the geospatial datasets available on the node. It also contains geospatial data processing services (WPS) based on a modular computing backend that implements the statistical processing functionality and thus provides analysis of large datasets, with visualization of the results and export to files in standard formats (XML, binary, etc.). Several cartographical web services have been developed in a prototype of the system to provide capabilities to work with raster and vector geospatial data based on OGC web services. The distributed architecture presented allows easy addition of new nodes, computing and data storage systems, and provides a solid computational infrastructure for regional climate change studies based on modern Web and GIS technologies.
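
    To make the service-oriented design concrete, the sketch below shows the kind of OGC WMS GetMap request a client might send to one of the cartographical web services described above. The endpoint URL, layer name, and bounding box are hypothetical placeholders, not the project's actual services.

```python
# Illustrative WMS GetMap request a client might issue against an SDI node.
# The endpoint URL, layer name, and bounding box are hypothetical placeholders.
import requests

params = {
    "SERVICE": "WMS",
    "VERSION": "1.3.0",
    "REQUEST": "GetMap",
    "LAYERS": "mean_air_temperature",   # hypothetical layer name
    "CRS": "EPSG:4326",
    "BBOX": "50,60,80,120",             # lat/lon bounding box (EPSG:4326 axis order)
    "WIDTH": "800",
    "HEIGHT": "600",
    "FORMAT": "image/png",
}
response = requests.get("https://sdi.example.org/wms", params=params, timeout=30)
response.raise_for_status()

# Save the rendered map tile returned by the service.
with open("temperature_map.png", "wb") as f:
    f.write(response.content)
```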

  4. Loss of vision: imaging the visual pathways

    International Nuclear Information System (INIS)

    Jaeger, H.R.

    2005-01-01

    This is an overview of diseases presenting with visual impairment, which aims to provide an understanding of the anatomy and pathology of the visual pathways. It discusses the relevant clinical background and neuroimaging findings on CT and standard and advanced MRI of diseases affecting the globe; optic nerve/sheath complex; optic chiasm, tract and radiation; and visual cortex. The overview covers common tumours, trauma, inflammatory and vascular pathology, and conditions such as benign intracranial hypertension and posterior reversible leukoencephalopathy syndrome. (orig.)

  5. Tools for Visualizing HIV in Cure Research.

    Science.gov (United States)

    Niessl, Julia; Baxter, Amy E; Kaufmann, Daniel E

    2018-02-01

    The long-lived HIV reservoir remains a major obstacle for an HIV cure. Current techniques to analyze this reservoir are generally population-based. We highlight recent developments in methods visualizing HIV, which offer a different, complementary view, and provide indispensable information for cure strategy development. Recent advances in fluorescence in situ hybridization techniques enabled key developments in reservoir visualization. Flow cytometric detection of HIV mRNAs, concurrently with proteins, provides a high-throughput approach to study the reservoir on a single-cell level. On a tissue level, key spatial information can be obtained detecting viral RNA and DNA in situ by fluorescence microscopy. At total-body level, advancements in non-invasive immuno-positron emission tomography (PET) detection of HIV proteins may allow an encompassing view of HIV reservoir sites. HIV imaging approaches provide important, complementary information regarding the size, phenotype, and localization of the HIV reservoir. Visualizing the reservoir may contribute to the design, assessment, and monitoring of HIV cure strategies in vitro and in vivo.

  6. The Case for Visual Analytics of Arsenic Concentrations in Foods

    Directory of Open Access Journals (Sweden)

    Omotayo R. Awofolu

    2010-04-01

    Arsenic is a naturally occurring toxic metal and its presence in food could be a potential risk to the health of both humans and animals. Prolonged ingestion of arsenic-contaminated water may result in manifestations of toxicity in all systems of the body. Visual Analytics is a multidisciplinary field that is defined as the science of analytical reasoning facilitated by interactive visual interfaces. The concentrations of arsenic vary in foods, making it impractical and impossible to provide a regulatory limit for each food. This review article presents a case for the use of visual analytics approaches to provide a comparative assessment of arsenic in various foods. The topics covered include (i) metabolism of arsenic in the human body; (ii) arsenic concentrations in various foods; (iii) factors affecting arsenic uptake in plants; (iv) an introduction to visual analytics; and (v) the benefits of visual analytics for comparative assessment of arsenic concentrations in foods. Visual analytics can provide an information superstructure of arsenic in various foods to permit insightful comparative risk assessment of the diverse and continually expanding data on arsenic in food groups in the context of country of study or origin, year of study, method of analysis and arsenic species.

  7. A framework for interactive visualization of digital medical images.

    Science.gov (United States)

    Koehring, Andrew; Foo, Jung Leng; Miyano, Go; Lobe, Thom; Winer, Eliot

    2008-10-01

    The visualization of medical images obtained from scanning techniques such as computed tomography and magnetic resonance imaging is a well-researched field. However, advanced tools and methods to manipulate these data for surgical planning and other tasks have not seen widespread use among medical professionals. Radiologists have begun using more advanced visualization packages on desktop computer systems, but most physicians continue to work with basic two-dimensional grayscale images or do not work directly with the data at all. In addition, new display technologies that are in use in other fields have yet to be fully applied in medicine. In our estimation, usability is the key aspect keeping this technology from being more widely used by the medical community at large. Therefore, we have developed a software and hardware framework that not only makes use of advanced visualization techniques, but also features powerful, yet simple-to-use, interfaces. A virtual reality system was created to display volume-rendered medical models in three dimensions. It was designed to run in many configurations, from a large cluster of machines powering a multiwalled display down to a single desktop computer. An augmented reality system was also created for, literally, hands-on interaction when viewing models of medical data. Lastly, a desktop application was designed to provide a simple visualization tool, which can be run on nearly any computer at a user's disposal. This research is directed toward improving the capabilities of medical professionals in the tasks of preoperative planning, surgical training, diagnostic assistance, and patient education.

  8. Visual form-processing deficits: a global clinical classification.

    Science.gov (United States)

    Unzueta-Arce, J; García-García, R; Ladera-Fernández, V; Perea-Bartolomé, M V; Mora-Simón, S; Cacho-Gutiérrez, J

    2014-10-01

    Patients who have difficulties recognising visual form stimuli are usually labelled as having visual agnosia. However, recent studies let us identify different clinical manifestations corresponding to discrete diagnostic entities which reflect a variety of deficits along the continuum of cortical visual processing. We reviewed different clinical cases published in medical literature as well as proposals for classifying deficits in order to provide a global perspective of the subject. Here, we present the main findings on the neuroanatomical basis of visual form processing and discuss the criteria for evaluating processing which may be abnormal. We also include an inclusive diagram of visual form processing deficits which represents the different clinical cases described in the literature. Lastly, we propose a boosted decision tree to serve as a guide in the process of diagnosing such cases. Although the medical community largely agrees on which cortical areas and neuronal circuits are involved in visual processing, future studies making use of new functional neuroimaging techniques will provide more in-depth information. A well-structured and exhaustive assessment of the different stages of visual processing, designed with a global view of the deficit in mind, will give a better idea of the prognosis and serve as a basis for planning personalised psychostimulation and rehabilitation strategies. Copyright © 2011 Sociedad Española de Neurología. Published by Elsevier España. All rights reserved.

  9. Allen Brain Atlas-Driven Visualizations: a web-based gene expression energy visualization tool.

    Science.gov (United States)

    Zaldivar, Andrew; Krichmar, Jeffrey L

    2014-01-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers its own search engine and software for researchers to view its growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of its tools limit the number of genes and brain structures researchers can view at once. To complement this work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization experiments. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analyses. In the future, we hope to enable ABADV across multiple data resources.
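
    The heat-map view described above can be illustrated with a short, hypothetical Python sketch: a matrix of expression energy values (genes by brain structures) rendered as a color-coded grid. The gene and structure names below are used only as labels, and the values are invented placeholders; in ABADV the values are retrieved from the Allen Brain Atlas.

```python
# Sketch of the kind of heat map ABADV produces: expression energy for a set
# of genes across brain structures. Values here are invented placeholders.
import numpy as np
import matplotlib.pyplot as plt

genes = ["Drd1", "Drd2", "Gad1", "Slc17a7"]
structures = ["Isocortex", "Hippocampus", "Striatum", "Thalamus", "Cerebellum"]
rng = np.random.default_rng(2)
expression_energy = rng.uniform(0, 10, size=(len(genes), len(structures)))

fig, ax = plt.subplots()
im = ax.imshow(expression_energy, cmap="magma")
ax.set_xticks(range(len(structures)))
ax.set_xticklabels(structures, rotation=45, ha="right")
ax.set_yticks(range(len(genes)))
ax.set_yticklabels(genes)
fig.colorbar(im, ax=ax, label="expression energy (placeholder units)")
ax.set_title("Gene expression energy by structure (illustrative)")
fig.tight_layout()
plt.show()
```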

  10. Allen Brain Atlas-Driven Visualizations: A Web-Based Gene Expression Energy Visualization Tool

    Directory of Open Access Journals (Sweden)

    Andrew eZaldivar

    2014-05-01

    The Allen Brain Atlas-Driven Visualizations (ABADV) is a publicly accessible web-based tool created to retrieve and visualize expression energy data from the Allen Brain Atlas (ABA) across multiple genes and brain structures. Though the ABA offers its own search engine and software for researchers to view its growing collection of online public data sets, including extensive gene expression and neuroanatomical data from human and mouse brain, many of its tools limit the number of genes and brain structures researchers can view at once. To complement this work, ABADV generates multiple pie charts, bar charts and heat maps of expression energy values for any given set of genes and brain structures. Such a suite of free and easy-to-understand visualizations allows for easy comparison of gene expression across multiple brain areas. In addition, each visualization links back to the ABA so researchers may view a summary of the experimental detail. ABADV is currently supported on modern web browsers and is compatible with expression energy data from the Allen Mouse Brain Atlas in situ hybridization experiments. By creating this web application, researchers can immediately obtain and survey large amounts of expression energy data from the ABA, which they can then use to supplement their work or perform meta-analyses. In the future, we hope to enable ABADV across multiple data resources.

  11. Adaptive optics without altering visual perception.

    Science.gov (United States)

    Koenig, D E; Hart, N W; Hofer, H J

    2014-04-01

    Adaptive optics combined with visual psychophysics creates the potential to study the relationship between visual function and the retina at the cellular scale. This potential is hampered, however, by visual interference from the wavefront-sensing beacon used during correction. For example, we have previously shown that even a dim, visible beacon can alter stimulus perception (Hofer et al., 2012). Here we describe a simple strategy employing a longer wavelength (980nm) beacon that, in conjunction with appropriate restriction on timing and placement, allowed us to perform psychophysics when dark adapted without altering visual perception. The method was verified by comparing detection and color appearance of foveally presented small spot stimuli with and without the wavefront beacon present in 5 subjects. As an important caution, we found that significant perceptual interference can occur even with a subliminal beacon when additional measures are not taken to limit exposure. Consequently, the lack of perceptual interference should be verified for a given system, and not assumed based on invisibility of the beacon. Copyright © 2014 Elsevier B.V. All rights reserved.

  12. Visual Analytics of Complex Genomics Data to Guide Effective Treatment Decisions

    Directory of Open Access Journals (Sweden)

    Quang Vinh Nguyen

    2016-09-01

    In cancer biology, genomics represents a big data problem that needs accurate visual data processing and analytics. The human genome is very complex, with thousands of genes that contain information about individual patients and the biological mechanisms of their disease. Therefore, when building a framework for personalised treatment, the complexity of the genome must be captured in meaningful and actionable ways. This paper presents a novel visual analytics framework that enables effective analysis of large and complex genomics data. By providing interactive visualisations from an overview of the entire patient cohort down to a detail view of individual genes, our work potentially guides effective treatment decisions for childhood cancer patients. The framework consists of multiple components enabling the complete analytics supporting personalised medicine, including similarity space construction, automated analysis, visualisation, gene-to-gene comparison, and user-centric interaction and exploration based on feature selection. In addition to the traditional way of visualising data, we utilise the Unity3D platform to develop a smooth and interactive visual presentation of the information. This aims to provide better rendering, image quality, ergonomics and user experience to non-specialists or young users who are familiar with 3D gaming environments and interfaces. We illustrate the effectiveness of our approach through case studies with datasets from childhood cancer patients, B-cell Acute Lymphoblastic Leukaemia (ALL) and Rhabdomyosarcoma (RMS), showing how it can guide effective treatment decisions in the cohort.

  13. Nonuniform Changes in the Distribution of Visual Attention from Visual Complexity and Action: A Driving Simulation Study.

    Science.gov (United States)

    Park, George D; Reed, Catherine L

    2015-02-01

    Researchers acknowledge the interplay between action and attention, but typically consider action as a response to successful attentional selection or the correlation of performance on separate action and attention tasks. We investigated how concurrent action with spatial monitoring affects the distribution of attention across the visual field. We embedded a functional field of view (FFOV) paradigm with concurrent central object recognition and peripheral target localization tasks in a simulated driving environment. Peripheral targets varied across 20-60 deg eccentricity at 11 radial spokes. Three conditions assessed the effects of visual complexity and concurrent action on the size and shape of the FFOV: (1) with no background, (2) with driving background, and (3) with driving background and vehicle steering. The addition of visual complexity slowed task performance and reduced the FFOV size but did not change the baseline shape. In contrast, the addition of steering produced not only shrinkage of the FFOV, but also changes in the FFOV shape. Nonuniform performance decrements occurred in proximal regions used for the central task and for steering, independent of interference from context elements. Multifocal attention models should consider the role of action and account for nonhomogeneities in the distribution of attention. © 2015 SAGE Publications.

  14. A Microsoft Windows version of the MCNP visual editor

    International Nuclear Information System (INIS)

    Schwarz, R.A.; Carter, L.L.; Pfohl, J.

    1999-01-01

    Work has started on a Microsoft Windows version of the MCNP visual editor. The MCNP visual editor provides a graphical user interface for displaying and creating MCNP geometries. The visual editor is currently available from the Radiation Safety Information Computational Center (RSICC) and the Nuclear Energy Agency (NEA) as software package PSR-358. It currently runs on the major UNIX platforms (IBM, SGI, HP, SUN) and Linux. Work has started on converting the visual editor to work in a Microsoft Windows environment. This initial work focuses on converting the display capabilities of the visual editor; the geometry creation capability of the visual editor may be included in future upgrades.

  15. Visualization of simulated urban spaces: inferring parameterized generation of streets, parcels, and aerial imagery.

    Science.gov (United States)

    Vanegas, Carlos A; Aliaga, Daniel G; Benes, Bedrich; Waddell, Paul

    2009-01-01

    Urban simulation models and their visualization are used to help regional planning agencies evaluate alternative transportation investments, land use regulations, and environmental protection policies. Typical urban simulations provide spatially distributed data about number of inhabitants, land prices, traffic, and other variables. In this article, we build on a synergy of urban simulation, urban visualization, and computer graphics to automatically infer an urban layout for any time step of the simulation sequence. In addition to standard visualization tools, our method gathers data of the original street network, parcels, and aerial imagery and uses the available simulation results to infer changes to the original urban layout and produce a new and plausible layout for the simulation results. In contrast with previous work, our approach automatically updates the layout based on changes in the simulation data and thus can scale to a large simulation over many years. The method in this article offers a substantial step forward in building integrated visualization and behavioral simulation systems for use in community visioning, planning, and policy analysis. We demonstrate our method on several real cases using a 200 GB database for a 16,300 km2 area surrounding Seattle.

  16. The development of a visual system for the detection of obstructions for visually impaired people

    International Nuclear Information System (INIS)

    Okayasu, Mitsuhiro

    2009-01-01

    In this paper, the author presents a new visual system that can aid visually impaired people in walking. The system provides object information (that is, shape and location) through the sense of touch. The visual system depends on three components: (i) an infrared camera sensor that detects the obstruction, (ii) a control system that measures the distance between the obstruction and the sensor, and (iii) a tooling apparatus with small pins (φ1 mm) used to form a three-dimensional shape of the obstruction. The pins, arranged in a 6x6 matrix, move longitudinally between retracted and extended positions based on the distance data. Each pin extends individually, so that the pin tips reproduce the object's outer surface. The length of each pin above the base surface is proportional to the distance of the sensor from the obstruction. An ultrasonic actuator, controlled at a 15 Hz frame rate, provides the driving force for the pin movement. The tactile image of the 3D shape can provide information about the obstruction.

  17. Functionality and Performance Visualization of the Distributed High Quality Volume Renderer (HVR)

    KAUST Repository

    Shaheen, Sara

    2012-07-01

    Volume rendering systems are designed to provide means for scientists and a variety of experts to interactively explore volume data through 3D views of the volume. However, volume rendering techniques are computationally intensive tasks. Parallel distributed volume rendering systems and multi-threading architectures have been suggested as natural solutions to provide acceptable volume rendering performance for very large volume data sizes, such as Electron Microscopy (EM) data. This in turn adds another level of complexity when developing and manipulating volume rendering systems. Given that distributed parallel volume rendering systems are among the most complex systems to develop, trace and debug, it is obvious that traditional debugging tools do not provide enough support. As a consequence, there is a great demand for tools that facilitate the manipulation of such systems. This can be achieved by utilizing the power of computer graphics to design visual representations that reflect how the system works and that visualize its current performance state. The work presented is categorized within the field of software visualization, where visualization is used to aid the understanding of various software. This thesis presents a number of visual representations that reflect functionality and performance aspects of the distributed HVR, a high quality volume renderer that uses various techniques to visualize large volume sizes interactively. These visualizations cover different stages of the parallel volume rendering pipeline of HVR, along with means of performance analysis through a number of flexible and dynamic visualizations that reflect the current state of the system and can be manipulated at runtime. The visualizations are aimed at facilitating debugging, understanding and analyzing the distributed HVR.

  18. Automating Geospatial Visualizations with Smart Default Renderers for Data Exploration Web Applications

    Science.gov (United States)

    Ekenes, K.

    2017-12-01

    This presentation will outline the process of creating a web application for exploring large amounts of scientific geospatial data using modern automated cartographic techniques. Traditional cartographic methods, including data classification, may inadvertently hide geospatial and statistical patterns in the underlying data. This presentation demonstrates how to use smart web APIs that quickly analyze the data when it loads and suggest the most appropriate visualizations based on the statistics of the data. Since there are only a few ways to visualize any given dataset well, and since many users never go beyond default values, it is imperative to provide smart default color schemes tailored to the dataset rather than static defaults. Multiple functions for automating visualizations are available in the smart APIs, along with UI elements allowing users to create more than one visualization for a dataset, since there is no single best way to visualize a given dataset. Because bivariate and multivariate visualizations are particularly difficult to create effectively, this automated approach removes the guesswork from the process and provides a number of ways to generate multivariate visualizations for the same variables. This allows the user to choose which visualization is most appropriate for their presentation. The methods used in these APIs and the renderers generated by them are not available elsewhere. The presentation will show how statistics can be used as the basis for automating default visualizations of data along continuous ramps, creating more refined visualizations while revealing the spread and outliers of the data. Adding interactive components to instantaneously alter visualizations allows users to unearth spatial patterns previously unknown among one or more variables. These applications may focus on a single dataset that is frequently updated, or configurable
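
    The core idea of a statistics-driven default renderer can be sketched in a few lines: derive the color-ramp stops from the data's own mean and standard deviation rather than from fixed class breaks. The function, attribute values, and hex colors below are an illustration of that idea, not the web API referenced in the abstract.

```python
# Illustration of a statistics-driven default renderer: color-ramp stops are
# derived from the data's mean and standard deviation instead of fixed class
# breaks. The attribute values and hex colors are invented placeholders.
import numpy as np

def smart_color_stops(values):
    """Return (value, color) stops spanning mean - std to mean + std."""
    values = np.asarray(values, dtype=float)
    mean, std = values.mean(), values.std()
    # A three-stop continuous ramp; values beyond one standard deviation map
    # to the end colors, which emphasizes the spread and the outliers.
    return [(mean - std, "#2166ac"), (mean, "#f7f7f7"), (mean + std, "#b2182b")]

# Hypothetical attribute values from a frequently updated feature layer.
population_density = np.random.default_rng(3).lognormal(mean=5, sigma=1, size=1000)
for value, color in smart_color_stops(population_density):
    print(f"{value:10.1f} -> {color}")
```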

  19. Declarative language design for interactive visualization.

    Science.gov (United States)

    Heer, Jeffrey; Bostock, Michael

    2010-01-01

    We investigate the design of declarative, domain-specific languages for constructing interactive visualizations. By separating specification from execution, declarative languages can simplify development, enable unobtrusive optimization, and support retargeting across platforms. We describe the design of the Protovis specification language and its implementation within an object-oriented, statically-typed programming language (Java). We demonstrate how to support rich visualizations without requiring a toolkit-specific data model and extend Protovis to enable declarative specification of animated transitions. To support cross-platform deployment, we introduce rendering and event-handling infrastructures decoupled from the runtime platform, letting designers retarget visualization specifications (e.g., from desktop to mobile phone) with reduced effort. We also explore optimizations such as runtime compilation of visualization specifications, parallelized execution, and hardware-accelerated rendering. We present benchmark studies measuring the performance gains provided by these optimizations and compare performance to existing Java-based visualization tools, demonstrating scalability improvements exceeding an order of magnitude.
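
    The separation of specification from execution described above can be illustrated with a toy example: the visualization is expressed as a plain data structure, and a separate interpreter renders it, so the same specification could be retargeted to a different back end. This mimics the general idea behind declarative toolkits such as Protovis; it is not the Protovis API, and all names in the sketch are invented.

```python
# Toy illustration of declarative specification separated from execution:
# the visualization is described as data (a dict), and a separate interpreter
# renders it. This mimics the idea, not the actual Protovis API.
import matplotlib.pyplot as plt

spec = {
    "mark": "bar",
    "data": [4, 7, 1, 9, 3],
    "encoding": {"color": "#4682b4", "width": 0.8},
}

def render(spec):
    """A minimal 'runtime' that executes a declarative spec with matplotlib.
    A different render() could retarget the same spec to another back end."""
    if spec["mark"] == "bar":
        xs = range(len(spec["data"]))
        plt.bar(xs, spec["data"],
                color=spec["encoding"]["color"],
                width=spec["encoding"]["width"])
    else:
        raise ValueError(f"unsupported mark: {spec['mark']}")
    plt.show()

render(spec)
```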

  20. Visual and Verbal Learning in a Genetic Metabolic Disorder

    Science.gov (United States)

    Spilkin, Amy M.; Ballantyne, Angela O.; Trauner, Doris A.

    2009-01-01

    Visual and verbal learning in a genetic metabolic disorder (cystinosis) were examined in the following three studies. The goal of Study I was to provide a normative database and establish the reliability and validity of a new test of visual learning and memory (Visual Learning and Memory Test; VLMT) that was modeled after a widely used test of…

  1. Design Visualization Internship Overview

    Science.gov (United States)

    Roberts, Trevor D.

    2014-01-01

    This is a report documenting the details of my work as a NASA KSC intern for the Summer Session from June 2nd to August 8th, 2014. This work was conducted within the Design Visualization Group, a contractor-staffed organization within the C1 division of the IT Directorate. The principal responsibilities of the KSC Design Visualization Group are the production of 3D simulations of NASA equipment and facilities for the purpose of planning complex operations such as hardware transportation and vehicle assembly. My role as an intern focused on aiding engineers in using 3D scanning equipment to obtain as-built measurements of NASA facilities, as well as using CATIA and DELMIA to process this data. My primary goals for this internship focused on expanding my CAD knowledge and capabilities, while also learning more about technologies I was previously unfamiliar with, such as 3D scanning. An additional goal of mine was to learn more about how NASA operates, and how the U.S. Space Program operates on a day-to-day basis. This opportunity provided me with a front-row seat to the daily maneuvers and operations of KSC and NASA as a whole. Each work day, I was able to witness, and even take part in, a small building block of the future systems that will take astronauts to other worlds. After my experiences this summer, not only can I say that my goals have been met, but also that this experience has been the highlight of my time in higher education.

  2. Using Prosopagnosia to Test and Modify Visual Recognition Theory.

    Science.gov (United States)

    O'Brien, Alexander M

    2018-02-01

    Biederman's contemporary theory of basic visual object recognition (Recognition-by-Components) is based on structural descriptions of objects and presumes 36 visual primitives (geons) people can discriminate, but there has been no empirical test of the actual use of these 36 geons to visually distinguish objects. In this study, we tested for the actual use of these geons in basic visual discrimination by comparing object discrimination performance patterns (when distinguishing varied stimuli) of an acquired prosopagnosia patient (LB) and healthy control participants. LB's prosopagnosia left her heavily reliant on structural descriptions or categorical object differences in visual discrimination tasks versus the control participants' additional ability to use face recognition or coordinate systems (Coordinate Relations Hypothesis). Thus, when LB performed comparably to control participants with a given stimulus, her restricted reliance on basic or categorical discriminations meant that the stimuli must be distinguishable on the basis of a geon feature. By varying stimuli in eight separate experiments and presenting all 36 geons, we discerned that LB coded only 12 (vs. 36) distinct visual primitives (geons), apparently reflective of human visual systems generally.

  3. Visual Navigation of Complex Information Spaces

    Directory of Open Access Journals (Sweden)

    Sarah North

    1995-11-01

    Full Text Available The authors lay the foundation for the introduction of a visual navigation aid to assist computer users in direct manipulation of complex information spaces. By exploring present research on scientific data visualisation and creating a case for improved information visualisation tools, they introduce the design of an improved information visualisation interface utilizing dynamic sliders, called Visual-X, incorporating icons with bindable attributes (glyphs). Exploring the improvement that these data visualisations make to a computing environment, the authors conduct an experiment to compare the performance of subjects who use traditional interfaces and Visual-X. Methodology is presented and conclusions reveal that the use of Visual-X appears to be a promising approach in providing users with a navigation tool that does not overload their cognitive processes.

  4. Visual indicator of absorbed radiation doses

    Energy Technology Data Exchange (ETDEWEB)

    Generalova, V V; Krasovitskii, B M; Vainshtok, B A; Gurskii, M N

    1968-10-15

    A visual indicator of the absorbed doses of ionizing radiation is proposed. The indicator has a polymer base with the addition of a dye. A distinctive feature of the indicator is the use of polystyrene as its polymer base with the addition of a halogen-containing hydrocarbon and a light-proof dye. Such a combination of the radiation-resistant polymer polystyrene and the light-proof dyestuff makes the proposed indicator highly stable.

  5. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    Energy Technology Data Exchange (ETDEWEB)

    Dong, Han; Sharma, Diksha; Badano, Aldo, E-mail: aldo.badano@fda.hhs.gov [Division of Imaging, Diagnostics, and Software Reliability, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, Maryland 20993 (United States)

    2014-12-15

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying

  6. Web-based, GPU-accelerated, Monte Carlo simulation and visualization of indirect radiation imaging detector performance

    International Nuclear Information System (INIS)

    Dong, Han; Sharma, Diksha; Badano, Aldo

    2014-01-01

    Purpose: Monte Carlo simulations play a vital role in the understanding of the fundamental limitations, design, and optimization of existing and emerging medical imaging systems. Efforts in this area have resulted in the development of a wide variety of open-source software packages. One such package, hybridMANTIS, uses a novel hybrid concept to model indirect scintillator detectors by balancing the computational load using dual CPU and graphics processing unit (GPU) processors, obtaining computational efficiency with reasonable accuracy. In this work, the authors describe two open-source visualization interfaces, webMANTIS and visualMANTIS, to facilitate the setup of computational experiments via hybridMANTIS. Methods: The visualization tools visualMANTIS and webMANTIS enable the user to control simulation properties through a user interface. In the case of webMANTIS, control via a web browser allows access through mobile devices such as smartphones or tablets. webMANTIS acts as a server back-end and communicates with an NVIDIA GPU computing cluster that can support multiuser environments where users can execute different experiments in parallel. Results: The output consists of point response and pulse-height spectrum, and optical transport statistics generated by hybridMANTIS. The users can download the output images and statistics through a zip file for future reference. In addition, webMANTIS provides a visualization window that displays a few selected optical photon paths as they are transported through the detector columns and allows the user to trace the history of the optical photons. Conclusions: The visualization tools visualMANTIS and webMANTIS provide features such as on-the-fly generation of pulse-height spectra and response functions for microcolumnar x-ray imagers while allowing users to save simulation parameters and results from prior experiments. The graphical interfaces simplify the simulation setup and allow the user to go directly from specifying
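
    The pulse-height spectra mentioned in the two records above are, in essence, histograms of the detected signal per primary interaction. The following Python fragment is an illustrative sketch only, not part of hybridMANTIS or webMANTIS; the per-event optical photon counts are synthetic.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(0)

        # Hypothetical per-primary-event counts of optical photons reaching the sensor.
        detected_photons = rng.poisson(lam=1200, size=10_000)

        # A pulse-height spectrum is the histogram of these per-event signals.
        counts, edges = np.histogram(detected_photons, bins=100)
        centers = 0.5 * (edges[:-1] + edges[1:])

        plt.step(centers, counts, where="mid")
        plt.xlabel("Detected optical photons per event (pulse height)")
        plt.ylabel("Frequency")
        plt.title("Pulse-height spectrum (synthetic data)")
        plt.savefig("pulse_height_spectrum.png", dpi=150)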

  7. Wavefront coherence area for predicting visual acuity of post-PRK and post-PARK refractive surgery patients

    Science.gov (United States)

    Garcia, Daniel D.; van de Pol, Corina; Barsky, Brian A.; Klein, Stanley A.

    1999-06-01

    Many current corneal topography instruments (called videokeratographs) provide an `acuity index' based on corneal smoothness to analyze expected visual acuity. However, post-refractive surgery patients often exhibit better acuity than is predicted by such indices. One reason for this is that visual acuity may not necessarily be determined by overall corneal smoothness but rather by having some part of the cornea able to focus light coherently onto the fovea. We present a new method of representing visual acuity by measuring the wavefront aberration, using principles from both ray and wave optics. For each point P on the cornea, we measure the size of the associated coherence area whose optical path length (OPL), from a reference plane to P's focus, is within a certain tolerance of the OPL for P. We measured the topographies and vision of 62 eyes of patients who had undergone the corneal refractive surgery procedures of photorefractive keratectomy (PRK) and photorefractive astigmatic keratectomy (PARK). In addition to high contrast visual acuity, our vision tests included low contrast and low luminance to test the contribution of the PRK transition zone. We found our metric for visual acuity to be better than all other metrics at predicting the acuity of low contrast and low luminance. However, high contrast visual acuity was poorly predicted by all of the indices we studied, including our own. The indices provided by current videokeratographs sometimes fail for corneas whose shape differs from simple ellipsoidal models. This is the case with post-PRK and post-PARK refractive surgery patients. Our alternative representation that displays the coherence area of the wavefront has considerable advantages, and promises to be a better predictor of low contrast and low luminance visual acuity than current shape measures.
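
    The coherence-area metric described above lends itself to a small numerical sketch. The Python fragment below is illustrative only: it assumes the wavefront is available as optical path lengths (OPL) sampled on a regular pupil grid, uses an invented tolerance, and for brevity counts all samples within tolerance of a point's OPL rather than only the contiguous region around that point.

        import numpy as np

        def coherence_area(opl, cell_area_mm2, tolerance_um=0.25):
            """For each sample point, count the pupil samples whose OPL lies within
            `tolerance_um` of that point's OPL, and convert the count to an area."""
            flat = opl.ravel()
            areas = np.empty_like(flat)
            for i, value in enumerate(flat):
                areas[i] = np.count_nonzero(np.abs(flat - value) <= tolerance_um)
            return areas.reshape(opl.shape) * cell_area_mm2

        # Synthetic wavefront: mild defocus plus noise, OPL in micrometres.
        y, x = np.mgrid[-1:1:64j, -1:1:64j]
        opl = 0.8 * (x**2 + y**2) + 0.05 * np.random.default_rng(1).normal(size=x.shape)

        areas = coherence_area(opl, cell_area_mm2=(6.0 / 64) ** 2)
        print("largest coherence area (mm^2):", areas.max())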

  8. Field: a new meta-authoring platform for data-intensive scientific visualization

    Science.gov (United States)

    Downie, M.; Ameres, E.; Fox, P. A.; Goebel, J.; Graves, A.; Hendler, J.

    2012-12-01

    intermediate programs and their visual results are constantly being made and remade en route; "speculative", because these programs and images result from a mode of inquiry into image-making not unlike that of hypothesis formation and testing; "integrative" because this style draws deeply upon the libraries of algorithms and materials available online today; and "exploratory" because the results of these speculations are inherently open to the data and to the unforeseen at the outset. To this end our development environment — Field — comprises a minimal core and a powerful plug-in system that can be extended from within the environment itself. By providing a hybrid text editor that can incorporate text-based programming alongside graphical user-interface elements, its flexible and extensible interface provides space as necessary for notation, visualization, interface construction, and introspection. In addition, it provides an advanced GPU-accelerated graphics system ideal for large-scale data visualization. Since Field was created in the context of widely divergent interdisciplinary projects, its aim is to give its users not only the ability to work rapidly, but to shape their Field environment extensively and flexibly for their own demands.

  9. Structural and functional changes across the visual cortex of a patient with visual form agnosia.

    Science.gov (United States)

    Bridge, Holly; Thomas, Owen M; Minini, Loredana; Cavina-Pratesi, Cristiana; Milner, A David; Parker, Andrew J

    2013-07-31

    Loss of shape recognition in visual-form agnosia occurs without equivalent losses in the use of vision to guide actions, providing support for the hypothesis of two visual systems (for "perception" and "action"). The human individual DF received a toxic exposure to carbon monoxide some years ago, which resulted in a persisting visual-form agnosia that has been extensively characterized at the behavioral level. We conducted a detailed high-resolution MRI study of DF's cortex, combining structural and functional measurements. We present the first accurate quantification of the changes in thickness across DF's occipital cortex, finding the most substantial loss in the lateral occipital cortex (LOC). There are reduced white matter connections between LOC and other areas. Functional measures show pockets of activity that survive within structurally damaged areas. The topographic mapping of visual areas showed that ordered retinotopic maps were evident for DF in the ventral portions of visual cortical areas V1, V2, V3, and hV4. Although V1 shows evidence of topographic order in its dorsal portion, such maps could not be found in the dorsal parts of V2 and V3. We conclude that it is not possible to understand fully the deficits in object perception in visual-form agnosia without the exploitation of both structural and functional measurements. Our results also highlight for DF the cortical routes through which visual information is able to pass to support her well-documented abilities to use visual information to guide actions.

  10. 7 Key Challenges for Visualization in Cyber Network Defense

    Energy Technology Data Exchange (ETDEWEB)

    Best, Daniel M.; Endert, Alexander; Kidwell, Dan

    2014-12-02

    In this paper we present seven challenges, informed by two user studies, to be considered when developing a visualization for cyber security purposes. Cyber security visualizations must go beyond isolated solutions and “pretty picture” visualizations in order to make an impact on users. We provide an example prototype that addresses the challenges, with a description of how they are met. Our aim is to assist in increasing utility and adoption rates for visualization capabilities in cyber security.

  11. A comparative psychophysical approach to visual perception in primates.

    Science.gov (United States)

    Matsuno, Toyomi; Fujita, Kazuo

    2009-04-01

    Studies on the visual processing of primates, which have well developed visual systems, provide essential information about the perceptual bases of their higher-order cognitive abilities. Although the mechanisms underlying visual processing are largely shared between human and nonhuman primates, differences have also been reported. In this article, we review psychophysical investigations comparing the basic visual processing that operates in human and nonhuman species, and discuss the future contributions potentially deriving from such comparative psychophysical approaches to primate minds.

  12. Anatomical alterations of the visual motion processing network in migraine with and without aura.

    Directory of Open Access Journals (Sweden)

    Cristina Granziera

    2006-10-01

    Full Text Available Patients suffering from migraine with aura (MWA) and migraine without aura (MWoA) show abnormalities in visual motion perception during and between attacks. Whether this represents the consequences of structural changes in motion-processing networks in migraineurs is unknown. Moreover, the diagnosis of migraine relies on the patient's history, and finding differences in the brain of migraineurs might help to contribute to basic research aimed at better understanding the pathophysiology of migraine. To investigate a common potential anatomical basis for these disturbances, we used high-resolution cortical thickness measurement and diffusion tensor imaging (DTI) to examine the motion-processing network in 24 migraine patients (12 with MWA and 12 with MWoA) and 15 age-matched healthy controls (HCs). We found increased cortical thickness of motion-processing visual areas MT+ and V3A in migraineurs compared to HCs. Cortical thickness increases were accompanied by abnormalities of the subjacent white matter. In addition, DTI revealed that migraineurs have alterations in the superior colliculus and the lateral geniculate nucleus, which are also involved in visual processing. A structural abnormality in the network of motion-processing areas could account for, or be the result of, the cortical hyperexcitability observed in migraineurs. The finding in patients with both MWA and MWoA of thickness abnormalities in area V3A, previously described as a source in spreading changes involved in visual aura, raises the question as to whether a "silent" cortical spreading depression develops as well in MWoA. In addition, these experimental data may provide clinicians and researchers with a noninvasively acquirable migraine biomarker.

  13. Insects and the Kafkaesque: Insectuous Re-Writings in Visual and Audio-Visual Media

    Directory of Open Access Journals (Sweden)

    Damianos Grammatikopoulos

    2017-09-01

    Full Text Available In this article, I examine techniques at work in visual and audio-visual media that deal with the creative imitation of central Kafkan themes, particularly those related to hybrid insects and bodily deformity. In addition, the opening section of my study offers a detailed and thorough discussion of the concept of the “Kafkaesque”, and an attempt will be made to circumscribe its signifying limits. The main objective of the study is to explore the relationship between Kafka’s texts and the works of contemporary cartoonists, illustrators (Charles Burns), and filmmakers (David Cronenberg), and identify themes and motifs that they have in common. My approach is informed by transtextual practices and source studies, and I draw systematically on Gerard Genette’s Palimpsests and Harold Bloom’s The Anxiety of Influence.

  14. Hepatitis B virus vaccination booster does not provide additional protection in adolescents: a cross-sectional school-based study.

    Science.gov (United States)

    Chang, Yung-Chieh; Wang, Jen-Hung; Chen, Yu-Sheng; Lin, Jun-Song; Cheng, Ching-Feng; Chu, Chia-Hsiang

    2014-09-23

    Current consensus does not support the use of a universal booster of hepatitis B virus (HBV) vaccine because there is an anamnestic response in almost all children 15 years after universal infant HBV vaccination. We aimed to provide a booster strategy among adolescents as a result of their changes in lifestyle and sexual activity. This study comprised a series of cross-sectional serological surveys of HBV markers in four age groups between 2004 and 2012. The seropositivity rates of hepatitis B surface antigen (HBsAg) and its reciprocal antibody (anti-HBs) for each age group were collected. There were two parts to this study; age-specific HBV seroepidemiology and subgroup analysis, including effects of different vaccine types, booster response for immunogenicity at 15 years of age, and longitudinal follow-up to identify possible additional protection by HBV booster. Within the study period, data on serum anti-HBs and HBsAg in a total of 6950 students from four age groups were collected. The overall anti-HBs and HBsAg seropositivity rates were 44.3% and 1.2%, respectively. The anti-HBs seropositivity rate in the plasma-derived subgroup was significantly higher in both 15- and 18-year age groups. Overall response rate in the double-seronegative recipients at 15 years of age was 92.5% at 6 weeks following one recombinant HBV booster dose. Among the 24 recipients showing anti-HBs seroconversion at 6 weeks after booster, seven subjects (29.2%) had lost their anti-HBs seropositivity again within 3 years. Increased seropositivity rates and titers of anti-HBs did not provide additional protective effects among subjects comprehensively vaccinated against HBV in infancy. HBV booster strategy at 15 years of age was the main contributor to the unique age-related phenomenon of anti-HBs seropositivity rate and titer. No increase in HBsAg seropositivity rates within different age groups was observed. Vaccination with plasma-derived HBV vaccines in infancy provided higher

  15. Matisse: A Visual Analytics System for Exploring Emotion Trends in Social Media Text Streams

    Energy Technology Data Exchange (ETDEWEB)

    Steed, Chad A [ORNL; Drouhard, Margaret MEG G [ORNL; Beaver, Justin M [ORNL; Pyle, Joshua M [ORNL; BogenII, Paul L. [Google Inc.

    2015-01-01

    Dynamically mining textual information streams to gain real-time situational awareness is especially challenging with social media systems where throughput and velocity properties push the limits of a static analytical approach. In this paper, we describe an interactive visual analytics system, called Matisse, that aids with the discovery and investigation of trends in streaming text. Matisse addresses the challenges inherent to text stream mining through the following technical contributions: (1) robust stream data management, (2) automated sentiment/emotion analytics, (3) interactive coordinated visualizations, and (4) a flexible drill-down interaction scheme that accesses multiple levels of detail. In addition to positive/negative sentiment prediction, Matisse provides fine-grained emotion classification based on Valence, Arousal, and Dominance dimensions and a novel machine learning process. Information from the sentiment/emotion analytics are fused with raw data and summary information to feed temporal, geospatial, term frequency, and scatterplot visualizations using a multi-scale, coordinated interaction model. After describing these techniques, we conclude with a practical case study focused on analyzing the Twitter sample stream during the week of the 2013 Boston Marathon bombings. The case study demonstrates the effectiveness of Matisse at providing guided situational awareness of significant trends in social media streams by orchestrating computational power and human cognition.

  16. Immersive visualization of dynamic CFD model results

    International Nuclear Information System (INIS)

    Comparato, J.R.; Ringel, K.L.; Heath, D.J.

    2004-01-01

    With immersive visualization the engineer has the means for vividly understanding problem causes and discovering opportunities to improve design. Software can generate an interactive world in which collaborators experience the results of complex mathematical simulations such as computational fluid dynamic (CFD) modeling. Such software, while providing unique benefits over traditional visualization techniques, presents special development challenges. The visualization of large quantities of data interactively requires both significant computational power and shrewd data management. On the computational front, commodity hardware is outperforming large workstations in graphical quality and frame rates. Also, 64-bit commodity computing shows promise in enabling interactive visualization of large datasets. Initial interactive transient visualization methods and examples are presented, as well as development trends in commodity hardware and clustering. Interactive, immersive visualization relies on relevant data being stored in active memory for fast response to user requests. For large or transient datasets, data management becomes a key issue. Techniques for dynamic data loading and data reduction are presented as means to increase visualization performance. (author)
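
    The dynamic data loading and data reduction techniques mentioned above can be illustrated with a short sketch. The Python fragment below is hypothetical: the file names, array layout, stride and cache policy are invented, and it simply shows one way to keep a decimated sliding window of transient timesteps in active memory for interactive display.

        import numpy as np
        from collections import OrderedDict

        class TimestepCache:
            """Keep at most `capacity` decimated timesteps in active memory."""
            def __init__(self, capacity=8, stride=4):
                self.capacity, self.stride = capacity, stride
                self._cache = OrderedDict()

            def load(self, step):
                if step in self._cache:
                    self._cache.move_to_end(step)
                    return self._cache[step]
                # Hypothetical on-disk layout: one .npy array per timestep.
                field = np.load(f"velocity_{step:04d}.npy", mmap_mode="r")
                # Spatial decimation: keep every `stride`-th sample in each dimension.
                reduced = np.asarray(field[::self.stride, ::self.stride, ::self.stride])
                self._cache[step] = reduced
                if len(self._cache) > self.capacity:
                    self._cache.popitem(last=False)  # evict the least recently used step
                return reduced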

  17. Visualization drivers for Geant4

    International Nuclear Information System (INIS)

    Beretvas, Andy

    2005-01-01

    This document is on Geant4 visualization tools (drivers), evaluating the pros and cons of each option, including recommendations on which tools to support at Fermilab for different applications. Four visualization drivers are evaluated: OpenGL, HepRep, DAWN and VRML. They all have good features: OpenGL provides graphical output without an intermediate file, HepRep provides menus to assist the user, DAWN provides high-quality plots and produces output quickly even for large files, and VRML uses the smallest disk space for intermediate files. Large experiments at Fermilab will want to write their own display; they should proceed to make this display graphics-independent. Medium-sized experiments will probably want to use HepRep because of its menu support. Smaller-scale experiments will want to use OpenGL in the spirit of having immediate response, good-quality output and keeping things simple.

  18. Visual intelligence Microsoft tools and techniques for visualizing data

    CERN Document Server

    Stacey, Mark; Jorgensen, Adam

    2013-01-01

    Go beyond design concepts and learn to build state-of-the-art visualizations The visualization experts at Microsoft's Pragmatic Works have created a full-color, step-by-step guide to building specific types of visualizations. The book thoroughly covers the Microsoft toolset for data analysis and visualization, including Excel, and explores best practices for choosing a data visualization design, selecting tools from the Microsoft stack, and building a dynamic data visualization from start to finish. You'll examine different types of visualizations, their strengths and weaknesses, a

  19. ViA: a perceptual visualization assistant

    Science.gov (United States)

    Healey, Chris G.; St. Amant, Robert; Elhaddad, Mahmoud S.

    2000-05-01

    This paper describes an automated visualization assistant called ViA. ViA is designed to help users construct perceptually optimal visualizations to represent, explore, and analyze large, complex, multidimensional datasets. We have approached this problem by studying what is known about the control of human visual attention. By harnessing the low-level human visual system, we can support our dual goals of rapid and accurate visualization. Perceptual guidelines that we have built using psychophysical experiments form the basis for ViA. ViA uses modified mixed-initiative planning algorithms from artificial intelligence to search for perceptually optimal data-attribute-to-visual-feature mappings. Our perceptual guidelines are integrated into evaluation engines that provide evaluation weights for a given data-feature mapping, and hints on how that mapping might be improved. ViA begins by asking users a set of simple questions about their dataset and the analysis tasks they want to perform. Answers to these questions are used in combination with the evaluation engines to identify and intelligently pursue promising data-feature mappings. The result is an automatically generated set of mappings that are perceptually salient, but that also respect the context of the dataset and users' preferences about how they want to visualize their data.
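
    As a rough illustration of the kind of search the record above describes, the sketch below scores assignments of data attributes to visual features using a table of evaluation weights and keeps the best-scoring mapping. The attributes, features and weights are invented, and the exhaustive search merely stands in for ViA's mixed-initiative planning, which is not reproduced here.

        from itertools import permutations

        attributes = ["temperature", "pressure", "salinity"]
        features = ["hue", "size", "orientation"]

        # evaluation_weight[(attribute, feature)] in [0, 1]; higher is perceptually better.
        evaluation_weight = {
            ("temperature", "hue"): 0.9, ("temperature", "size"): 0.5, ("temperature", "orientation"): 0.3,
            ("pressure", "hue"): 0.4, ("pressure", "size"): 0.8, ("pressure", "orientation"): 0.5,
            ("salinity", "hue"): 0.6, ("salinity", "size"): 0.4, ("salinity", "orientation"): 0.7,
        }

        # Keep the assignment of features to attributes with the highest total weight.
        best = max(permutations(features),
                   key=lambda perm: sum(evaluation_weight[(a, f)] for a, f in zip(attributes, perm)))
        print(dict(zip(attributes, best)))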

  20. Accessing Earth Science Data Visualizations through NASA GIBS & Worldview

    Science.gov (United States)

    Cechini, M. F.; Boller, R. A.; Baynes, K.; Wong, M. M.; King, B. A.; Schmaltz, J. E.; De Luca, A. P.; King, J.; Roberts, J. T.; Rodriguez, J.; Thompson, C. K.; Pressley, N. N.

    2017-12-01

    For more than 20 years, the NASA Earth Observing System (EOS) has operated dozens of remote sensing satellites collecting nearly 15 Petabytes of data that span thousands of science parameters. Within these observations are keys that Earth scientists have used to unlock much of what we understand about our planet. Also contained within these observations are a myriad of opportunities for learning and education. The trick is making them accessible to educators and students in convenient and simple ways so that effort can be spent on lesson enrichment and not on overcoming technical hurdles. The NASA Global Imagery Browse Services (GIBS) system and NASA Worldview website provide a unique view into EOS data through daily full-resolution visualizations of hundreds of earth science parameters. For many of these parameters, visualizations are available within hours of acquisition from the satellite. For others, visualizations are available for the entire mission of the satellite. Accompanying the visualizations are visual aids such as color legends, place names, and orbit tracks. By using these visualizations, educators and students can observe natural phenomena that enrich a scientific education. This poster will provide an overview of the visualizations available in NASA GIBS and Worldview and how they are accessed. We invite discussion on how the visualizations can be used or improved for educational purposes.
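
    Programmatic access to GIBS imagery is commonly done over WMTS. The request below is illustrative only: the URL follows the tile-request pattern documented for GIBS, but the layer name, date, tile matrix set and tile indices are examples and should be verified against the current GIBS documentation before use.

        import requests

        # {layer}/default/{date}/{tile matrix set}/{zoom}/{row}/{col}.{ext}
        url = ("https://gibs.earthdata.nasa.gov/wmts/epsg4326/best/"
               "MODIS_Terra_CorrectedReflectance_TrueColor/default/"
               "2017-12-01/250m/0/0/0.jpg")

        response = requests.get(url, timeout=30)
        response.raise_for_status()
        with open("gibs_tile.jpg", "wb") as f:
            f.write(response.content)
        print("saved", len(response.content), "bytes")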

  1. How important is lateral masking in visual search?

    NARCIS (Netherlands)

    Wertheim, AH; Hooge, ITC; Krikke, K; Johnson, A

    Five experiments are presented, providing empirical support of the hypothesis that the sensory phenomenon of lateral masking may explain many well-known visual search phenomena that are commonly assumed to be governed by cognitive attentional mechanisms. Experiment I showed that when the same visual

  2. Visualization of ocean forecast in BYTHOS

    Science.gov (United States)

    Zhuk, E.; Zodiatis, G.; Nikolaidis, A.; Stylianou, S.; Karaolia, A.

    2016-08-01

    The Cyprus Oceanography Center has been constantly searching for new ideas for developing and implementing innovative methods concerning the use of information systems in oceanography, to suit both the Center's monitoring and forecasting products. Within this scope, two major online data management and visualization systems have been developed and utilized: CYCOFOS and BYTHOS. The Cyprus Coastal Ocean Forecasting and Observing System - CYCOFOS - provides a variety of operational predictions, such as ultra-high, high and medium resolution ocean forecasts in the Levantine Basin, offshore and coastal sea state forecasts in the Mediterranean and Black Sea, tide forecasting in the Mediterranean, ocean remote sensing in the Eastern Mediterranean, and coastal and offshore monitoring. As a rich internet application, BYTHOS enables scientists to search, visualize and download oceanographic data online and in real time. A recent improvement of the BYTHOS system is its extension to access and visualize CYCOFOS data and to overlay forecast fields with observational data. The CYCOFOS data are stored on an OPeNDAP server in netCDF format; PHP and Python scripts were developed to search, process and visualize them. Data visualization is achieved through MapServer. The BYTHOS forecast access interface allows users to search for the required forecast field by type, parameter, region, level and time. It also provides the ability to overlay different forecast and observational data, which can be used for a comprehensive analysis of the sea basin.
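
    As a sketch of the kind of access the record above describes, the fragment below reads one forecast field from a netCDF dataset served over OPeNDAP. The server URL and variable names are hypothetical placeholders, not the actual CYCOFOS endpoints.

        from netCDF4 import Dataset
        import numpy as np

        url = "http://example-opendap-server/cycofos/forecast.nc"   # hypothetical URL
        ds = Dataset(url)

        sst = ds.variables["sea_surface_temperature"]               # hypothetical variable name
        lat = ds.variables["lat"][:]
        lon = ds.variables["lon"][:]

        field = np.asarray(sst[0, :, :])                            # first forecast time step
        print("grid:", lat.size, "x", lon.size)
        print("SST range:", field.min(), "to", field.max())
        ds.close()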

  3. Auditory and visual memory in musicians and nonmusicians

    OpenAIRE

    Cohen, Michael A.; Evans, Karla K.; Horowitz, Todd S.; Wolfe, Jeremy M.

    2011-01-01

    Numerous studies have shown that musicians outperform nonmusicians on a variety of tasks. Here we provide the first evidence that musicians have superior auditory recognition memory for both musical and nonmusical stimuli, compared to nonmusicians. However, this advantage did not generalize to the visual domain. Previously, we showed that auditory recognition memory is inferior to visual recognition memory. Would this be true even for trained musicians? We compared auditory and visual memory ...

  4. Anorexia nervosa and body dysmorphic disorder are associated with abnormalities in processing visual information.

    Science.gov (United States)

    Li, W; Lai, T M; Bohon, C; Loo, S K; McCurdy, D; Strober, M; Bookheimer, S; Feusner, J

    2015-07-01

    Anorexia nervosa (AN) and body dysmorphic disorder (BDD) are characterized by distorted body image and are frequently co-morbid with each other, although their relationship remains little studied. While there is evidence of abnormalities in visual and visuospatial processing in both disorders, no study has directly compared the two. We used two complementary modalities--event-related potentials (ERPs) and functional magnetic resonance imaging (fMRI)--to test for abnormal activity associated with early visual signaling. We acquired fMRI and ERP data in separate sessions from 15 unmedicated individuals in each of three groups (weight-restored AN, BDD, and healthy controls) while they viewed images of faces and houses of different spatial frequencies. We used joint independent component analyses to compare activity in visual systems. AN and BDD groups demonstrated similar hypoactivity in early secondary visual processing regions and the dorsal visual stream when viewing low spatial frequency faces, linked to the N170 component, as well as in early secondary visual processing regions when viewing low spatial frequency houses, linked to the P100 component. Additionally, the BDD group exhibited hyperactivity in fusiform cortex when viewing high spatial frequency houses, linked to the N170 component. Greater activity in this component was associated with lower attractiveness ratings of faces. Results provide preliminary evidence of similar abnormal spatiotemporal activation in AN and BDD for configural/holistic information for appearance- and non-appearance-related stimuli. This suggests a common phenotype of abnormal early visual system functioning, which may contribute to perceptual distortions.

  5. Visual feedback alters force control and functional activity in the visuomotor network after stroke

    Directory of Open Access Journals (Sweden)

    Derek B. Archer

    2018-01-01

    Full Text Available Modulating visual feedback may be a viable option to improve motor function after stroke, but the neurophysiological basis for this improvement is not clear. Visual gain can be manipulated by increasing or decreasing the spatial amplitude of an error signal. Here, we combined a unilateral visually guided grip force task with functional MRI to understand how changes in the gain of visual feedback alter brain activity in the chronic phase after stroke. Analyses focused on brain activation when force was produced by the most impaired hand of the stroke group as compared to the non-dominant hand of the control group. Our experiment produced three novel results. First, gain-related improvements in force control were associated with an increase in activity in many regions within the visuomotor network in both the stroke and control groups. These regions include the extrastriate visual cortex, inferior parietal lobule, ventral premotor cortex, cerebellum, and supplementary motor area. Second, the stroke group showed gain-related increases in activity in additional regions of lobules VI and VIIb of the ipsilateral cerebellum. Third, relative to the control group, the stroke group showed increased activity in the ipsilateral primary motor cortex, and activity in this region did not vary as a function of visual feedback gain. The visuomotor network, cerebellum, and ipsilateral primary motor cortex have each been targeted in rehabilitation interventions after stroke. Our observations provide new insight into the role these regions play in processing visual gain during a precisely controlled visuomotor task in the chronic phase after stroke.

  6. Implied Spatial Meaning and Visuospatial Bias: Conceptual Processing Influences Processing of Visual Targets and Distractors.

    Directory of Open Access Journals (Sweden)

    Davood G Gozli

    Full Text Available Concepts with implicit spatial meaning (e.g., "hat", "boots") can bias visual attention in space. This result is typically found in experiments with a single visual target per trial, which can appear at one of two locations (e.g., above vs. below). Furthermore, the interaction is typically found in the form of speeded responses to targets appearing at the compatible location (e.g., faster responses to a target above fixation, after reading "hat"). It has been argued that these concept-space interactions could also result from experimentally-induced associations between the binary set of locations and the conceptual categories with upward and downward meaning. Thus, rather than reflecting a conceptually driven spatial bias, the effect could reflect a benefit for compatible cue-target sequences that occurs only after target onset. We addressed these concerns by going beyond a binary set of locations and employing a search display consisting of four items (above, below, left, and right). Within each search trial, before performing a visual search task, participants performed a conceptual task involving concepts with implicit upward or downward meaning. The search display, in addition to including a target, could also include a salient distractor. Assuming a conceptually driven visual bias, we expected to observe, first, a benefit for target processing at the compatible location and, second, an increase in the cost of the salient distractor. The findings confirmed both predictions, suggesting that concepts do indeed generate a spatial bias. Finally, results from a control experiment, without the conceptual task, suggest the presence of an axis-specific effect, in addition to the location-specific effect, suggesting that concepts might cause both location-specific and axis-specific spatial bias. Taken together, our findings provide additional support for the involvement of spatial processing in conceptual understanding.

  7. Auditory, visual, and auditory-visual perceptions of emotions by young children with hearing loss versus children with normal hearing.

    Science.gov (United States)

    Most, Tova; Michaelis, Hilit

    2012-08-01

    This study aimed to investigate the effect of hearing loss (HL) on emotion-perception ability among young children with and without HL. A total of 26 children 4.0-6.6 years of age with prelingual sensory-neural HL ranging from moderate to profound and 14 children with normal hearing (NH) participated. They were asked to identify happiness, anger, sadness, and fear expressed by an actress when uttering the same neutral nonsense sentence. Their auditory, visual, and auditory-visual perceptions of the emotional content were assessed. The accuracy of emotion perception among children with HL was lower than that of the NH children in all 3 conditions: auditory, visual, and auditory-visual. Perception through the combined auditory-visual mode significantly surpassed the auditory or visual modes alone in both groups, indicating that children with HL utilized the auditory information for emotion perception. No significant differences in perception emerged according to degree of HL. In addition, children with profound HL and cochlear implants did not perform differently from children with less severe HL who used hearing aids. The relatively high accuracy of emotion perception by children with HL may be explained by their intensive rehabilitation, which emphasizes suprasegmental and paralinguistic aspects of verbal communication.

  8. Visual Equivalence and Amodal Completion in Cuttlefish.

    Science.gov (United States)

    Lin, I-Rong; Chiao, Chuan-Chin

    2017-01-01

    Modern cephalopods are notably the most intelligent invertebrates and this is accompanied by keen vision. Despite extensive studies investigating the visual systems of cephalopods, little is known about their visual perception and object recognition. In the present study, we investigated the visual processing of the cuttlefish Sepia pharaonis, including visual equivalence and amodal completion. Cuttlefish were trained to discriminate images of shrimp and fish using the operant conditioning paradigm. After cuttlefish reached the learning criteria, a series of discrimination tasks were conducted. In the visual equivalence experiment, several transformed versions of the training images, such as images reduced in size, images reduced in contrast, sketches of the images, the contours of the images, and silhouettes of the images, were used. In the amodal completion experiment, partially occluded views of the original images were used. The results showed that cuttlefish were able to treat reduced-size versions and sketches of the training images as visually equivalent. Cuttlefish were also capable of recognizing partially occluded versions of the training image. Furthermore, individual differences in performance suggest that some cuttlefish may be able to recognize objects when visual information is partly removed. These findings support the hypothesis that the visual perception of cuttlefish involves both visual equivalence and amodal completion. The results from this research also provide insights into the visual processing mechanisms used by cephalopods.

  9. TransVisuality : The Cultural Dimension of Visuality

    DEFF Research Database (Denmark)

    The Transvisuality Project In little more than a decade, visual culture has proven its status and commitment as an independent field of research, drawing on and continuing areas such as art history, cultural studies, semiotics and media research, as well as parts of visual sociology, visual...... for visual culture, transcending a number of disciplinary and geographical borders. The first volume, ‘Boundaries and Creative Openings’, explores the implications of a cultural dimension of ‘visuality’ when seen as a concept reflecting and challenging fundamental aspects of culture, from the arts to social...... anthropology and visual communication. Visual culture is now a well-established academic area of research and teaching, covering subjects in the humanities and social sciences. Readers and introductions have outlined the field, and research is mirrored in networks, journals and conferences on the national...

  10. Visual soil evaluation

    DEFF Research Database (Denmark)

    Visual Soil Evaluation (VSE) provides land users and environmental authorities with the tools to assess soil quality for crop performance. This book describes the assessment of the various structural conditions of soil, especially after quality degradation such as compaction, erosion or organic...... and nutrient leaching, and for diagnosing and rectifying erosion and compaction in soils....

  11. The phonological and visual basis of developmental dyslexia in Brazilian Portuguese reading children

    Science.gov (United States)

    Germano, Giseli D.; Reilhac, Caroline; Capellini, Simone A.; Valdois, Sylviane

    2014-01-01

    Evidence from opaque languages suggests that visual attention processing abilities in addition to phonological skills may act as cognitive underpinnings of developmental dyslexia. We explored the role of these two cognitive abilities on reading fluency in Brazilian Portuguese, a more transparent orthography than French or English. Sixty-six children with developmental dyslexia and normal Brazilian Portuguese children participated. They were administered three tasks of phonological skills (phoneme identification, phoneme, and syllable blending) and three visual tasks (a letter global report task and two non-verbal tasks of visual closure and visual constancy). Results show that Brazilian Portuguese children with developmental dyslexia are impaired not only in phonological processing but further in visual processing. The phonological and visual processing abilities significantly and independently contribute to reading fluency in the whole population. Last, different cognitively homogeneous subtypes can be identified in the Brazilian Portuguese population of children with developmental dyslexia. Two subsets of children with developmental dyslexia were identified as having a single cognitive disorder, phonological or visual; another group exhibited a double deficit and a few children showed no visual or phonological disorder. Thus the current findings extend previous data from more opaque orthographies as French and English, in showing the importance of investigating visual processing skills in addition to phonological skills in children with developmental dyslexia whatever their language orthography transparency. PMID:25352822

  12. UpSet: Visualization of Intersecting Sets

    Science.gov (United States)

    Lex, Alexander; Gehlenborg, Nils; Strobelt, Hendrik; Vuillemot, Romain; Pfister, Hanspeter

    2016-01-01

    Understanding relationships between sets is an important analysis task that has received widespread attention in the visualization community. The major challenge in this context is the combinatorial explosion of the number of set intersections if the number of sets exceeds a trivial threshold. In this paper we introduce UpSet, a novel visualization technique for the quantitative analysis of sets, their intersections, and aggregates of intersections. UpSet is focused on creating task-driven aggregates, communicating the size and properties of aggregates and intersections, and a duality between the visualization of the elements in a dataset and their set membership. UpSet visualizes set intersections in a matrix layout and introduces aggregates based on groupings and queries. The matrix layout enables the effective representation of associated data, such as the number of elements in the aggregates and intersections, as well as additional summary statistics derived from subset or element attributes. Sorting according to various measures enables a task-driven analysis of relevant intersections and aggregates. The elements represented in the sets and their associated attributes are visualized in a separate view. Queries based on containment in specific intersections, aggregates or driven by attribute filters are propagated between both views. We also introduce several advanced visual encodings and interaction methods to overcome the problems of varying scales and to address scalability. UpSet is web-based and open source. We demonstrate its general utility in multiple use cases from various domains. PMID:26356912
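
    The quantities UpSet visualizes can be illustrated in a few lines of Python. The sketch below, with invented sets, computes exclusive intersections, i.e., the elements that belong to exactly a given combination of sets, which is what the intersection bars in an UpSet plot encode.

        from itertools import combinations

        sets = {
            "A": {1, 2, 3, 4, 5},
            "B": {3, 4, 5, 6},
            "C": {5, 6, 7},
        }

        names = list(sets)
        for r in range(1, len(names) + 1):
            for combo in combinations(names, r):
                # Elements in every set of the combination...
                inside = set.intersection(*(sets[n] for n in combo))
                # ...minus elements that also appear in any other set.
                others = [sets[n] for n in names if n not in combo]
                exclusive = inside.difference(*others)
                if exclusive:
                    print(" & ".join(combo), "->", len(exclusive), sorted(exclusive))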

  13. Should visual speech cues (speechreading) be considered when fitting hearing aids?

    Science.gov (United States)

    Grant, Ken

    2002-05-01

    When talker and listener are face-to-face, visual speech cues become an important part of the communication environment, and yet, these cues are seldom considered when designing hearing aids. Models of auditory-visual speech recognition highlight the importance of complementary versus redundant speech information for predicting auditory-visual recognition performance. Thus, for hearing aids to work optimally when visual speech cues are present, it is important to know whether the cues provided by amplification and the cues provided by speechreading complement each other. In this talk, data will be reviewed that show nonmonotonicity between auditory-alone speech recognition and auditory-visual speech recognition, suggesting that efforts designed solely to improve auditory-alone recognition may not always result in improved auditory-visual recognition. Data will also be presented showing that one of the most important speech cues for enhancing auditory-visual speech recognition performance, voicing, is often the cue that benefits least from amplification.

  14. Adaptive behavior of children with visual impairment

    Directory of Open Access Journals (Sweden)

    Anđelković Marija

    2014-01-01

    Full Text Available Adaptive behavior includes a wide range of skills necessary for independent, safe and adequate performance of everyday activities. Practical, social and conceptual skills make the concept of adaptive behavior. The aim of this paper is to provide an insight into the existing studies of adaptive behavior in persons with visual impairment. The paper mainly focuses on the research on adaptive behavior in children with visual impairment. The results show that the acquisition of adaptive skills is mainly low or moderately low in children and youth with visual impairment. Children with visual impairment achieve the worst results in social skills and everyday life skills, while the most acquired are communication skills. Apart from the degree of visual impairment, difficulties in motor development also significantly influence the acquisition of practical and social skills of blind persons and persons with low vision.

  15. Visual Descriptor Learning for Predicting Grasping Affordances

    DEFF Research Database (Denmark)

    Thomsen, Mikkel Tang

    2016-01-01

    by the task of grasping unknown objects given visual sensor information. The contributions from this thesis stem from three works that all relate to the task of grasping unknown objects but with particular focus on the visual representation part of the problem. First an investigation of a visual feature space...... consisting of surface features was performed. Dimensions in the visual space were varied and the effects were evaluated with the task of grasping unknown object. The evaluation was performed using a novel probabilistic grasp prediction approach based on neighbourhood analysis. The resulting success......-rates for predicting grasps were between 75% and 90% depending on the object class. The investigations also provided insights into the importance of selecting a proper visual feature space when utilising it for predicting affordances. As a consequence of the gained insights, a semi-local surface feature, the Sliced...

  16. Reorganization of Visual Callosal Connections Following Alterations of Retinal Input and Brain Damage

    Science.gov (United States)

    Restani, Laura; Caleo, Matteo

    2016-01-01

    Vision is a very important sensory modality in humans. Visual disorders are numerous and arising from diverse and complex causes. Deficits in visual function are highly disabling from a social point of view and in addition cause a considerable economic burden. For all these reasons there is an intense effort by the scientific community to gather knowledge on visual deficit mechanisms and to find possible new strategies for recovery and treatment. In this review, we focus on an important and sometimes neglected player of the visual function, the corpus callosum (CC). The CC is the major white matter structure in the brain and is involved in information processing between the two hemispheres. In particular, visual callosal connections interconnect homologous areas of visual cortices, binding together the two halves of the visual field. This interhemispheric communication plays a significant role in visual cortical output. Here, we will first review the essential literature on the physiology of the callosal connections in normal vision. The available data support the view that the callosum contributes to both excitation and inhibition to the target hemisphere, with a dynamic adaptation to the strength of the incoming visual input. Next, we will focus on data showing how callosal connections may sense visual alterations and respond to the classical paradigm for the study of visual plasticity, i.e., monocular deprivation (MD). This is a prototypical example of a model for the study of callosal plasticity in pathological conditions (e.g., strabismus and amblyopia) characterized by unbalanced input from the two eyes. We will also discuss the findings of callosal alterations in blind subjects. Noteworthy, we will discuss data showing that inter-hemispheric transfer mediates recovery of visual responsiveness following cortical damage. Finally, we will provide an overview of how callosal projections dysfunction could contribute to pathologies such as neglect and occipital

  17. iRaster: a novel information visualization tool to explore spatiotemporal patterns in multiple spike trains.

    Science.gov (United States)

    Somerville, J; Stuart, L; Sernagor, E; Borisyuk, R

    2010-12-15

    Over the last few years, simultaneous recordings of multiple spike trains have become widely used by neuroscientists. Therefore, it is important to develop new tools for analysing multiple spike trains in order to gain new insight into the function of neural systems. This paper describes how techniques from the field of visual analytics can be used to reveal specific patterns of neural activity. An interactive raster plot called iRaster has been developed. This software incorporates a selection of statistical procedures for visualization and flexible manipulations with multiple spike trains. For example, there are several procedures for the re-ordering of spike trains which can be used to unmask activity propagation, spiking synchronization, and many other important features of multiple spike train activity. Additionally, iRaster includes a rate representation of neural activity, a combined representation of rate and spikes, spike train removal and time interval removal. Furthermore, it provides multiple coordinated views, time and spike train zooming windows, a fisheye lens distortion, and dissemination facilities. iRaster is a user friendly, interactive, flexible tool which supports a broad range of visual representations. This tool has been successfully used to analyse both synthetic and experimentally recorded datasets. In this paper, the main features of iRaster are described and its performance and effectiveness are demonstrated using various types of data including experimental multi-electrode array recordings from the ganglion cell layer in mouse retina. iRaster is part of an ongoing research project called VISA (Visualization of Inter-Spike Associations) at the Visualization Lab in the University of Plymouth. The overall aim of the VISA project is to provide neuroscientists with the ability to freely explore and analyse their data. The software is freely available from the Visualization Lab website (see www.plymouth.ac.uk/infovis). Copyright © 2010
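
    A raster view of multiple spike trains, together with a simple re-ordering of the trains, can be sketched briefly. The Python fragment below uses synthetic spike times and an arbitrary rate-based ordering; it is not the iRaster implementation, which offers far richer statistics and interaction.

        import numpy as np
        import matplotlib.pyplot as plt

        rng = np.random.default_rng(42)
        # 30 synthetic spike trains with 20-120 spikes each over a 10 s recording.
        trains = [np.sort(rng.uniform(0, 10, size=rng.integers(20, 120))) for _ in range(30)]

        # Re-order trains by spike count to unmask structure across the population.
        order = np.argsort([len(t) for t in trains])

        plt.eventplot([trains[i] for i in order], linelengths=0.8, colors="k")
        plt.xlabel("Time (s)")
        plt.ylabel("Spike train (sorted by rate)")
        plt.savefig("raster.png", dpi=150)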

  18. Visual face-movement sensitive cortex is relevant for auditory-only speech recognition.

    Science.gov (United States)

    Riedel, Philipp; Ragert, Patrick; Schelinski, Stefanie; Kiebel, Stefan J; von Kriegstein, Katharina

    2015-07-01

    It is commonly assumed that the recruitment of visual areas during audition is not relevant for performing auditory tasks ('auditory-only view'). According to an alternative view, however, the recruitment of visual cortices is thought to optimize auditory-only task performance ('auditory-visual view'). This alternative view is based on functional magnetic resonance imaging (fMRI) studies. These studies have shown, for example, that even if there is only auditory input available, face-movement sensitive areas within the posterior superior temporal sulcus (pSTS) are involved in understanding what is said (auditory-only speech recognition). This is particularly the case when speakers are known audio-visually, that is, after brief voice-face learning. Here we tested whether the left pSTS involvement is causally related to performance in auditory-only speech recognition when speakers are known by face. To test this hypothesis, we applied cathodal transcranial direct current stimulation (tDCS) to the pSTS during (i) visual-only speech recognition of a speaker known only visually to participants and (ii) auditory-only speech recognition of speakers they learned by voice and face. We defined the cathode as active electrode to down-regulate cortical excitability by hyperpolarization of neurons. tDCS to the pSTS interfered with visual-only speech recognition performance compared to a control group without pSTS stimulation (tDCS to BA6/44 or sham). Critically, compared to controls, pSTS stimulation additionally decreased auditory-only speech recognition performance selectively for voice-face learned speakers. These results are important in two ways. First, they provide direct evidence that the pSTS is causally involved in visual-only speech recognition; this confirms a long-standing prediction of current face-processing models. Secondly, they show that visual face-sensitive pSTS is causally involved in optimizing auditory-only speech recognition. These results are in line

  19. Sketchy Rendering for Information Visualization.

    Science.gov (United States)

    Wood, J; Isenberg, P; Isenberg, T; Dykes, J; Boukhelifa, N; Slingsby, A

    2012-12-01

    We present and evaluate a framework for constructing sketchy style information visualizations that mimic data graphics drawn by hand. We provide an alternative renderer for the Processing graphics environment that redefines core drawing primitives including line, polygon and ellipse rendering. These primitives allow higher-level graphical features such as bar charts, line charts, treemaps and node-link diagrams to be drawn in a sketchy style with a specified degree of sketchiness. The framework is designed to be easily integrated into existing visualization implementations with minimal programming modification or design effort. We show examples of use for statistical graphics, conveying spatial imprecision and for enhancing aesthetic and narrative qualities of visualization. We evaluate user perception of sketchiness of areal features through a series of stimulus-response tests in order to assess users' ability to place sketchiness on a ratio scale, and to estimate area. Results suggest relative area judgment is compromised by sketchy rendering and that its influence is dependent on the shape being rendered. They show that degree of sketchiness may be judged on an ordinal scale but that its judgement varies strongly between individuals. We evaluate higher-level impacts of sketchiness through user testing of scenarios that encourage user engagement with data visualization and willingness to critique visualization design. Results suggest that where a visualization is clearly sketchy, engagement may be increased and that attitudes to participating in visualization annotation are more positive. The results of our work have implications for effective information visualization design that go beyond the traditional role of sketching as a tool for prototyping or its use for an indication of general uncertainty.
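
    The core idea of sketchy rendering, perturbing otherwise exact drawing primitives by a controllable amount, can be illustrated with a toy example. The Python fragment below uses an invented jitter model; the actual framework redefines the Processing drawing primitives and is not reproduced here.

        import numpy as np
        import matplotlib.pyplot as plt

        def sketchy_line(p0, p1, sketchiness=1.0, segments=20, rng=None):
            """Return a jittered polyline approximating the segment p0 -> p1."""
            rng = rng or np.random.default_rng()
            t = np.linspace(0.0, 1.0, segments)
            x = p0[0] + t * (p1[0] - p0[0])
            y = p0[1] + t * (p1[1] - p0[1])
            jitter = rng.normal(scale=sketchiness, size=(2, segments))
            jitter[:, [0, -1]] = 0.0           # keep the endpoints fixed
            return x + jitter[0], y + jitter[1]

        # Draw the same horizontal line at three degrees of sketchiness.
        for level in (0.0, 0.5, 1.5):
            x, y = sketchy_line((0, level * 10), (100, level * 10), sketchiness=level)
            plt.plot(x, y, "k")
        plt.savefig("sketchy_lines.png", dpi=150)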

  20. Visual memory and visual perception: when memory improves visual search.

    Science.gov (United States)

    Riou, Benoit; Lesourd, Mathieu; Brunel, Lionel; Versace, Rémy

    2011-08-01

    This study examined the relationship between memory and perception in order to identify the influence of a memory dimension in perceptual processing. Our aim was to determine whether the variation of typical size between items (i.e., the size in real life) affects visual search. In two experiments, the congruency between typical size difference and perceptual size difference was manipulated in a visual search task. We observed that congruency between the typical and perceptual size differences decreased reaction times in the visual search (Exp. 1), and noncongruency between these two differences increased reaction times in the visual search (Exp. 2). We argue that these results highlight that memory and perception share some resources and reveal the intervention of typical size difference on the computation of the perceptual size difference.

  1. Driver Distraction Using Visual-Based Sensors and Algorithms.

    Science.gov (United States)

    Fernández, Alberto; Usamentiaga, Rubén; Carús, Juan Luis; Casado, Rubén

    2016-10-28

    Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be complemented with more visual cues (e.g., hands or body information) or even distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.
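
    As a purely illustrative example of the cue-fusion idea discussed above, the following Python sketch combines several hypothetical, already-extracted visual cues into a single distraction score; the cue names, weights and threshold are assumptions and do not correspond to any of the reviewed systems:

      from dataclasses import dataclass

      @dataclass
      class FrameCues:
          # per-frame cues, each normalized to [0, 1]; all names are illustrative
          gaze_off_road: float       # fraction of recent frames with gaze away from the road
          head_yaw_deviation: float  # normalized head-pose deviation from forward
          hands_off_wheel: float     # confidence that one or both hands left the wheel
          phone_detected: float      # confidence of a phone-use detection

      def distraction_score(cues: FrameCues, w=(0.35, 0.25, 0.2, 0.2), threshold=0.5):
          """Fuse independent visual cues into a single score and a binary alarm."""
          values = (cues.gaze_off_road, cues.head_yaw_deviation,
                    cues.hands_off_wheel, cues.phone_detected)
          score = sum(wi * vi for wi, vi in zip(w, values))
          return score, score >= threshold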

  2. Driver Distraction Using Visual-Based Sensors and Algorithms

    Directory of Open Access Journals (Sweden)

    Alberto Fernández

    2016-10-01

    Full Text Available Driver distraction, defined as the diversion of attention away from activities critical for safe driving toward a competing activity, is increasingly recognized as a significant source of injuries and fatalities on the roadway. Additionally, the trend towards increasing the use of in-vehicle information systems is critical because they induce visual, biomechanical and cognitive distraction and may affect driving performance in qualitatively different ways. Non-intrusive methods are strongly preferred for monitoring distraction, and vision-based systems have appeared to be attractive for both drivers and researchers. Biomechanical, visual and cognitive distractions are the most commonly detected types in video-based algorithms. Many distraction detection systems only use a single visual cue and therefore, they may be easily disturbed when occlusion or illumination changes appear. Moreover, the combination of these visual cues is a key and challenging aspect in the development of robust distraction detection systems. These visual cues can be extracted mainly by using face monitoring systems but they should be complemented with more visual cues (e.g., hands or body information) or even distraction detection from specific actions (e.g., phone usage). Additionally, these algorithms should be included in an embedded device or system inside a car. This is not a trivial task and several requirements must be taken into account: reliability, real-time performance, low cost, small size, low power consumption, flexibility and short time-to-market. The key points for the development and implementation of sensors to carry out the detection of distraction will also be reviewed. This paper presents a review of the role of computer vision technology applied to the development of monitoring systems to detect distraction. Some key points considered as both future work and challenges ahead yet to be solved will also be addressed.

  3. Visualizing NASA's Planetary Data with Google Earth

    Science.gov (United States)

    Beyer, R. A.; Hancher, M. D.; Broxton, M.; Weiss-Malik, M.; Gorelick, N.; Kolb, E.

    2008-12-01

    There is a vast store of planetary geospatial data that has been collected by NASA but is difficult to access and visualize. As a 3D geospatial browser, the Google Earth client is one way to visualize planetary data. KML imagery super-overlays enable us to create a non-Earth planetary globe within Google Earth, and conversion of planetary meta-data allows display of the footprint locations of various higher-resolution data sets. Once our group, or any group, performs these data conversions the KML can be made available on the Web, where anyone can download it and begin using it in Google Earth (or any other geospatial browser), just like a Web page. Lucian Plesea at JPL offers several KML basemaps (MDIM, colorized MDIM, MOC composite, THEMIS day time infrared, and both grayscale and colorized MOLA). We have created TES Thermal Inertia maps, and a THEMIS night time infrared overlay, as well. Many data sets for Mars have already been converted to KML. We provide coverage polygons overlaid on the globe, whose icons can be clicked on and lead to the full PDS data URL. We have built coverage maps for the following data sets: MOC narrow angle, HRSC imagery and DTMs, SHARAD tracks, CTX, and HiRISE. The CRISM team is working on providing their coverage data via publicly-accessible KML. The MSL landing site process is also providing data for potential landing sites via KML. The Google Earth client and KML allow anyone to contribute data for everyone to see via the Web. The Earth sciences community is already utilizing KML and Google Earth in a variety of ways as a geospatial browser, and we hope that the planetary sciences community will do the same. Using this paradigm for sharing geospatial data will not only enable planetary scientists to more easily build and share data within the scientific community, but will also provide an easy platform for public outreach and education efforts, and will easily allow anyone to layer geospatial information on top of planetary data
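
    As a rough illustration of how such footprint coverage data can be expressed in KML, the Python sketch below emits a single Placemark whose polygon outlines one image footprint and whose description links back to the full data product. The element structure follows standard KML; the function and its arguments are hypothetical:

      def footprint_placemark(name, data_url, lonlat_ring):
          """Return a KML Placemark whose polygon outlines one image footprint.

          lonlat_ring: list of (lon, lat) vertices; the first vertex is repeated
          to close the ring, as KML requires. Coordinates are illustrative.
          """
          ring = list(lonlat_ring) + [lonlat_ring[0]]
          coords = " ".join(f"{lon},{lat},0" for lon, lat in ring)
          return f"""  <Placemark>
          <name>{name}</name>
          <description><![CDATA[<a href="{data_url}">Full PDS product</a>]]></description>
          <Polygon><outerBoundaryIs><LinearRing>
            <coordinates>{coords}</coordinates>
          </LinearRing></outerBoundaryIs></Polygon>
        </Placemark>"""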

  4. A visualization environment for supercomputing-based applications in computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Pavlakos, C.J.; Schoof, L.A.; Mareda, J.F.

    1993-06-01

    In this paper, we characterize a visualization environment that has been designed and prototyped for a large community of scientists and engineers, with an emphasis on supercomputing-based computational mechanics. The proposed environment makes use of a visualization server concept to provide effective, interactive visualization to the user's desktop. Benefits of using the visualization server approach are discussed. Some thoughts regarding desirable features for visualization server hardware architectures are also addressed. A brief discussion of the software environment is included. The paper concludes by summarizing certain observations which we have made regarding the implementation of such visualization environments.

  5. Artificial Vision, New Visual Modalities and Neuroadaptation

    Directory of Open Access Journals (Sweden)

    Hilmi Or

    2012-01-01

    Full Text Available The aim is to study the descriptions from which artificial vision derives, to explore the new visual modalities resulting from eye surgeries and diseases, and to gain awareness of the use of machine vision systems both for the enhancement of visual perception and for a better understanding of neuroadaptation. Science has not yet been able to define what vision is. However, some optical-based systems and definitions have been established considering some of the factors involved in the formation of seeing. The best-known system includes the Gabor filter and Gabor patch, which work on edge perception and describe visual perception in the best-known way. These systems are used today in industry and in the technology of machines, robots and computers to provide their "seeing". Beyond machinery, these definitions are used in humans for neuroadaptation to new visual modalities after some eye surgeries, or to improve the quality of some already known visual modalities. Besides this, "blindsight", which was not known to exist until 35 years ago, can be stimulated with visual exercises. The Gabor system is a description of visual perception definable in machine vision as well as in human visual perception. This system is used today in robotic vision. There are new visual modalities which arise after some eye surgeries or with the use of some visual optical devices. Also, blindsight is a different visual modality that is starting to be defined, even though its exact etiology is not known. In all of the new visual modalities, new vision-stimulating therapies using Gabor systems can be applied. (Turk J Ophthalmol 2012; 42: 61-5)
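
    The Gabor filter mentioned above has a standard formulation: a sinusoidal carrier modulated by a Gaussian envelope. A minimal Python sketch of its real part follows; parameter names and default values are illustrative, not taken from the abstract:

      import numpy as np

      def gabor_kernel(size=31, wavelength=8.0, theta=0.0, sigma=4.0, gamma=0.5, psi=0.0):
          """Real part of a Gabor filter: a cosine carrier under a Gaussian envelope."""
          half = size // 2
          y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
          # rotate coordinates so the carrier oscillates along orientation theta
          x_t = x * np.cos(theta) + y * np.sin(theta)
          y_t = -x * np.sin(theta) + y * np.cos(theta)
          envelope = np.exp(-(x_t**2 + (gamma * y_t)**2) / (2 * sigma**2))
          carrier = np.cos(2 * np.pi * x_t / wavelength + psi)
          return envelope * carrier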

  6. VisComposer: A Visual Programmable Composition Environment for Information Visualization

    Directory of Open Access Journals (Sweden)

    Honghui Mei

    2018-03-01

    Full Text Available As the amount of data being collected has increased, the need for tools that can enable the visual exploration of data has also grown. This has led to the development of a variety of widely used programming frameworks for information visualization. Unfortunately, such frameworks demand comprehensive visualization and coding skills and require users to develop visualizations from scratch. An alternative is to create interactive visualization design environments that require little to no programming. However, these tools only support a small portion of visual forms. We present a programmable integrated development environment (IDE), VisComposer, that supports the development of expressive visualizations using a drag-and-drop visual interface. VisComposer exposes programmability by allowing users to customize desired components within a modularized visualization composition pipeline, effectively bridging the capability gap between expert coders and visualization artists. The implemented system empowers users to compose comprehensive visualizations with real-time preview and optimization features, and supports prototyping, sharing and reuse of the effects by means of an intuitive visual composer. Visual programming and textual programming integrated in our system allow users to compose more complex visual effects while retaining ease of use. We demonstrate the performance of VisComposer with a variety of examples and an informal user evaluation. Keywords: Information Visualization, Visualization authoring, Interactive development environment

  7. Visualizing uncertainties in a storm surge ensemble data assimilation and forecasting system

    KAUST Repository

    Hollt, Thomas

    2015-01-15

    We present a novel integrated visualization system that enables the interactive visual analysis of ensemble simulations and estimates of the sea surface height and other model variables that are used for storm surge prediction. Coastal inundation, caused by hurricanes and tropical storms, poses large risks for today's societies. High-fidelity numerical models of water levels driven by hurricane-force winds are required to predict these events, posing a challenging computational problem, and even though computational models continue to improve, uncertainties in storm surge forecasts are inevitable. Today, this uncertainty is often exposed to the user by running the simulation many times with different parameters or inputs following a Monte-Carlo framework in which uncertainties are represented as stochastic quantities. This results in multidimensional, multivariate and multivalued data, so-called ensemble data. While the resulting datasets are very comprehensive, they are also huge in size and thus hard to visualize and interpret. In this paper, we tackle this problem by means of an interactive and integrated visual analysis system. By harnessing the power of modern graphics processing units for visualization as well as computation, our system allows the user to browse through the simulation ensembles in real time, view specific parameter settings or simulation models and move between different spatial and temporal regions without delay. In addition, our system provides advanced visualizations to highlight the uncertainty or show the complete distribution of the simulations at user-defined positions over the complete time series of the prediction. We highlight the benefits of our system by presenting its application in a real-world scenario using a simulation of Hurricane Ike.
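
    Although the described system runs on the GPU, the per-position uncertainty summaries it displays can be illustrated with a few lines of NumPy. The sketch below is only an illustration of the underlying reduction; the array layout and variable names are assumptions. It collapses an ensemble along its member axis into the mean, standard deviation and selected quantiles that a probe at a user-defined position would plot:

      import numpy as np

      def ensemble_summary(sea_surface_height, quantiles=(0.05, 0.5, 0.95)):
          """Summarize an ensemble at every grid cell and time step.

          sea_surface_height: array of shape (members, time, ny, nx).
          Returns mean, standard deviation and selected quantiles taken over
          the member axis.
          """
          mean = sea_surface_height.mean(axis=0)
          std = sea_surface_height.std(axis=0)
          qs = np.quantile(sea_surface_height, quantiles, axis=0)
          return mean, std, qs

      # probing one position (iy, ix) over the forecast horizon:
      # mean[:, iy, ix], std[:, iy, ix], qs[:, :, iy, ix]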

  8. An integrated domain specific language for post-processing and visualizing electrophysiological signals in Java.

    Science.gov (United States)

    Strasser, T; Peters, T; Jagle, H; Zrenner, E; Wilke, R

    2010-01-01

    Electrophysiology of vision - especially the electroretinogram (ERG) - is used as a non-invasive way for functional testing of the visual system. The ERG is a combined electrical response generated by neural and non-neuronal cells in the retina in response to light stimulation. This response can be recorded and used for diagnosis of numerous disorders. For both clinical practice and clinical trials it is important to process those signals in an accurate and fast way and to provide the results as structured, consistent reports. Therefore, we developed a freely available and open-source framework in Java (http://www.eye.uni-tuebingen.de/project/idsI4sigproc). The framework is focused on an easy integration with existing applications. By leveraging well-established software patterns like pipes-and-filters and fluent interfaces as well as by designing the application programming interfaces (API) as an integrated domain specific language (DSL), the overall framework provides a smooth learning curve. Additionally, it already contains several processing methods and visualization features and can be extended easily by implementing the provided interfaces. In this way, not only can new processing methods be added but the framework can also be adopted for other areas of signal processing. This article describes in detail the structure and implementation of the framework and demonstrates its application through the software package used in clinical practice and clinical trials at the University Eye Hospital Tuebingen, one of the largest departments in the field of visual electrophysiology in Europe.
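
    The framework itself is written in Java; the Python sketch below only illustrates the combination of the pipes-and-filters pattern with a fluent (method-chaining) interface that the abstract describes. The class and filter names are hypothetical and do not correspond to the framework's API:

      class SignalPipeline:
          """Minimal pipes-and-filters pipeline with a fluent (chaining) interface."""

          def __init__(self, samples):
              self._samples = list(samples)

          def _apply(self, fn):
              self._samples = [fn(s) for s in self._samples]
              return self  # returning self is what enables method chaining

          def scale(self, factor):
              return self._apply(lambda s: s * factor)

          def clip(self, lo, hi):
              return self._apply(lambda s: min(max(s, lo), hi))

          def moving_average(self, window=3):
              out, buf = [], []
              for s in self._samples:
                  buf.append(s)
                  if len(buf) > window:
                      buf.pop(0)
                  out.append(sum(buf) / len(buf))
              self._samples = out
              return self

          def result(self):
              return self._samples

      # usage: filters read left to right, in the spirit of the DSL described above
      smoothed = SignalPipeline([0.1, 0.4, 0.35, 0.9]).scale(1000).clip(0, 500).moving_average(3).result()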

  9. Flow visualization

    CERN Document Server

    Merzkirch, Wolfgang

    1974-01-01

    Flow Visualization describes the most widely used methods for visualizing flows. Flow visualization evaluates certain properties of a flow field directly accessible to visual perception. Organized into five chapters, this book first presents the methods that create a visible flow pattern that could be investigated by visual inspection, such as simple dye and density-sensitive visualization methods. It then deals with the application of electron beams and streaming birefringence. Optical methods for compressible flows, hydraulic analogy, and high-speed photography are discussed in other cha

  10. Python tools for Visual Studio

    CERN Document Server

    Wang, Cathy

    2014-01-01

    This is a hands-on guide that provides exemplary coverage of all the features and concepts related to PTVS. The book is intended for developers who are aiming to enhance their productivity in Python projects with automation tools that Visual Studio provides for the .Net community. Some basic knowledge of Python programming is essential.

  11. OnSight: Multi-platform Visualization of the Surface of Mars

    Science.gov (United States)

    Abercrombie, S. P.; Menzies, A.; Winter, A.; Clausen, M.; Duran, B.; Jorritsma, M.; Goddard, C.; Lidawer, A.

    2017-12-01

    A key challenge of planetary geology is to develop an understanding of an environment that humans cannot (yet) visit. Instead, scientists rely on visualizations created from images sent back by robotic explorers, such as the Curiosity Mars rover. OnSight is a multi-platform visualization tool that helps scientists and engineers to visualize the surface of Mars. Terrain visualization allows scientists to understand the scale and geometric relationships of the environment around the Curiosity rover, both for scientific understanding and for tactical consideration in safely operating the rover. OnSight includes a web-based 2D/3D visualization tool, as well as an immersive mixed reality visualization. In addition, OnSight offers a novel feature for communication among the science team. Using the multiuser feature of OnSight, scientists can meet virtually on Mars, to discuss geology in a shared spatial context. Combining web-based visualization with immersive visualization allows OnSight to leverage strengths of both platforms. This project demonstrates how 3D visualization can be adapted to either an immersive environment or a computer screen, and will discuss advantages and disadvantages of both platforms.

  12. SequenceCEROSENE: a computational method and web server to visualize spatial residue neighborhoods at the sequence level.

    Science.gov (United States)

    Heinke, Florian; Bittrich, Sebastian; Kaiser, Florian; Labudde, Dirk

    2016-01-01

    To understand the molecular function of biopolymers, studying their structural characteristics is of central importance. Graphics programs are often utilized to examine these properties, but with the increasing number of available structures in databases or structure models produced by automated modeling frameworks, this process requires assistance from tools that allow automated structure visualization. In this paper a web server and its underlying method for generating graphical sequence representations of molecular structures are presented. The method, called SequenceCEROSENE (color encoding of residues obtained by spatial neighborhood embedding), retrieves the sequence of each amino acid or nucleotide chain in a given structure and produces a color coding for each residue based on three-dimensional structure information. From this, color-highlighted sequences are obtained, where residue coloring represents three-dimensional residue locations in the structure. This color encoding thus provides a one-dimensional representation, from which spatial interactions, proximity and relations between residues or entire chains can be deduced quickly and solely from color similarity. Furthermore, additional heteroatoms and chemical compounds bound to the structure, like ligands or coenzymes, are processed and reported as well. To provide free access to SequenceCEROSENE, a web server has been implemented that allows generating color codings for structures deposited in the Protein Data Bank or structure models uploaded by the user. Besides retrieving visualizations in popular graphic formats, underlying raw data can be downloaded as well. In addition, the server provides user interactivity with generated visualizations and the three-dimensional structure in question. Color encoded sequences generated by SequenceCEROSENE can help the user to quickly perceive the general characteristics of a structure of interest (or entire sets of complexes), thus supporting the researcher in the initial
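
    The exact spatial-neighborhood embedding used by SequenceCEROSENE is not described in this abstract; a naive stand-in that produces the same kind of effect is to map each residue's 3-D coordinates directly to an RGB triple, so that spatially close residues receive similar colors. A minimal Python sketch of that idea (not the server's algorithm):

      import numpy as np

      def residue_colors(ca_coords):
          """Map 3-D C-alpha coordinates to hex colors so nearby residues look alike.

          ca_coords: (n_residues, 3) array. This min-max mapping is only a crude
          stand-in for the spatial-neighborhood embedding used by the server.
          """
          coords = np.asarray(ca_coords, dtype=float)
          lo, hi = coords.min(axis=0), coords.max(axis=0)
          rgb = (coords - lo) / np.where(hi - lo > 0, hi - lo, 1.0)
          return [f"#{int(r*255):02x}{int(g*255):02x}{int(b*255):02x}" for r, g, b in rgb]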

  13. Direct Visual Editing of Node Attributes in Graphs

    Directory of Open Access Journals (Sweden)

    Christian Eichner

    2016-10-01

    Full Text Available There are many expressive visualization techniques for analyzing graphs. Yet there is little research on how existing visual representations can be employed to support data editing. An increasingly relevant task when working with graphs is the editing of node attributes. We propose an integrated visualize-and-edit approach to editing attribute values via direct interaction with the visual representation. The visualize part is based on node-link diagrams paired with attribute-dependent layouts. The edit part is as easy as moving nodes via drag-and-drop gestures. We present dedicated interaction techniques for editing quantitative as well as qualitative attribute data values. The benefit of our novel integrated approach is that one can directly edit the data while the visualization constantly provides feedback on the implications of the data modifications. Preliminary user feedback indicates that our integrated approach can be a useful complement to standard non-visual editing via external tools.
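
    One way to realize the described visualize-and-edit loop for a quantitative attribute is to invert the attribute-dependent layout: when a node is dropped at a new vertical position, that position is mapped back to an attribute value. The Python sketch below illustrates this inverse mapping under the assumption of a simple linear axis layout; all names and numbers are illustrative, not the paper's implementation:

      def y_to_value(y_px, y_top, y_bottom, v_min, v_max):
          """Invert an attribute-dependent layout: a node dragged to y_px gets a new value.

          The layout is assumed to place v_min at y_bottom and v_max at y_top
          (screen y grows downward); clamping keeps the value in range.
          """
          t = (y_bottom - y_px) / float(y_bottom - y_top)
          t = min(max(t, 0.0), 1.0)
          return v_min + t * (v_max - v_min)

      # a drop at y = 120 px on an axis spanning y in [40, 440] mapped to [0, 100]
      new_value = y_to_value(120, 40, 440, 0.0, 100.0)  # -> 80.0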

  14. Exposure to arousal-inducing sounds facilitates visual search.

    Science.gov (United States)

    Asutay, Erkin; Västfjäll, Daniel

    2017-09-04

    Exposure to affective stimuli could enhance perception and facilitate attention by increasing alertness and vigilance and by decreasing attentional thresholds. However, evidence on the impact of affective sounds on perception and attention is scant. Here, a novel aspect of affective facilitation of attention is studied: whether arousal induced by task-irrelevant auditory stimuli could modulate attention in a visual search. In two experiments, participants performed a visual search task with and without auditory cues that preceded the search. Participants were faster in locating high-salient targets compared to low-salient targets. Critically, search times and search slopes decreased with increasing auditory-induced arousal while searching for low-salient targets. Taken together, these findings suggest that arousal induced by sounds can facilitate attention in a subsequent visual search. This novel finding provides support for the alerting function of the auditory system by showing an auditory-phasic alerting effect in visual attention. The results also indicate that stimulus arousal modulates the alerting effect. Attention and perception are our everyday tools for navigating the surrounding world, and the current findings, showing that affective sounds can influence visual attention, provide evidence that we make use of affective information during perceptual processing.

  15. Feature and Region Selection for Visual Learning.

    Science.gov (United States)

    Zhao, Ji; Wang, Liantao; Cabral, Ricardo; De la Torre, Fernando

    2016-03-01

    Visual learning problems, such as object classification and action recognition, are typically approached using extensions of the popular bag-of-words (BoW) model. Despite its great success, it is unclear what visual features the BoW model is learning. Which regions in the image or video are used to discriminate among classes? Which are the most discriminative visual words? Answering these questions is fundamental for understanding existing BoW models and inspiring better models for visual recognition. To answer these questions, this paper presents a method for feature selection and region selection in the visual BoW model. This allows for an intermediate visualization of the features and regions that are important for visual learning. The main idea is to assign latent weights to the features or regions, and jointly optimize these latent variables with the parameters of a classifier (e.g., support vector machine). There are four main benefits of our approach: 1) our approach accommodates non-linear additive kernels, such as the popular χ² and intersection kernels; 2) our approach is able to handle both regions in images and spatio-temporal regions in videos in a unified way; 3) the feature selection problem is convex, and both problems can be solved using a scalable reduced gradient method; and 4) we point out strong connections with multiple kernel learning and multiple instance learning approaches. Experimental results in the PASCAL VOC 2007, MSR Action Dataset II and YouTube illustrate the benefits of our approach.
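
    To make the idea of per-feature latent weights concrete, the Python sketch below computes a weighted histogram-intersection kernel between bag-of-words histograms; in the paper the weights are learned jointly with the classifier, whereas here they are simply supplied, so this is an illustration rather than the authors' method. The resulting matrix could be fed to any classifier that accepts precomputed kernels:

      import numpy as np

      def weighted_intersection_kernel(H1, H2, w):
          """Histogram-intersection kernel with one latent weight per visual word.

          H1: (n1, d) and H2: (n2, d) bag-of-words histograms; w: (d,) nonnegative
          weights over visual words.
          """
          H1, H2, w = np.asarray(H1, float), np.asarray(H2, float), np.asarray(w, float)
          K = np.zeros((H1.shape[0], H2.shape[0]))
          for i, h in enumerate(H1):
              K[i] = np.minimum(h[None, :], H2).dot(w)   # sum_k w_k * min(h_ik, h_jk)
          return K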

  16. Quantitative organ visualization using SPECT

    International Nuclear Information System (INIS)

    Kircos, L.T.; Carey, J.E. Jr.; Keyes, J.W. Jr.

    1987-01-01

    Quantitative organ visualization (QOV) was performed using single photon emission computed tomography (SPECT). Organ size was calculated from serial, contiguous ECT images taken through the organ of interest with image boundaries determined using a maximum directional gradient edge finding technique. Organ activity was calculated using ECT counts bounded by the directional gradient, imaging system efficiency, and imaging time. The technique used to perform QOV was evaluated using phantom studies, in vivo canine liver, spleen, bladder, and kidney studies, and in vivo human bladder studies. It was demonstrated that, with total imaging time restricted to less than 45 min, absolute organ activity and organ size could be determined with this system to an accuracy of about +/- 10%, provided the minimum dimensions of the organ are greater than the FWHM of the imaging system and the total radioactivity within the organ of interest exceeds 15 nCi/cc for dog-sized torsos. In addition, effective half-lives of approximately 1.5 hr or greater could be determined.
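
    Once the organ boundary has been found by the edge finder, the quantification described above reduces to a simple calculation: organ volume is the number of voxels inside the boundary times the voxel volume, and absolute activity is the counts inside the boundary divided by the system efficiency and the imaging time. A minimal Python sketch with illustrative parameter names (not the paper's calibration):

      import numpy as np

      def organ_quantification(ect_counts, organ_mask, voxel_volume_cc,
                               system_efficiency_cps_per_uci, imaging_time_s):
          """Estimate organ volume and absolute activity from reconstructed ECT voxels.

          ect_counts: counts per voxel; organ_mask: boolean mask from the
          gradient-based edge finder; the efficiency factor converts counts per
          second into microcuries.
          """
          volume_cc = organ_mask.sum() * voxel_volume_cc
          total_counts = ect_counts[organ_mask].sum()
          activity_uci = total_counts / (system_efficiency_cps_per_uci * imaging_time_s)
          return volume_cc, activity_uci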

  17. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Directory of Open Access Journals (Sweden)

    Dmitry Kit

    Full Text Available Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  18. Eye movements, visual search and scene memory, in an immersive virtual environment.

    Science.gov (United States)

    Kit, Dmitry; Katz, Leor; Sullivan, Brian; Snyder, Kat; Ballard, Dana; Hayhoe, Mary

    2014-01-01

    Visual memory has been demonstrated to play a role in both visual search and attentional prioritization in natural scenes. However, it has been studied predominantly in experimental paradigms using multiple two-dimensional images. Natural experience, however, entails prolonged immersion in a limited number of three-dimensional environments. The goal of the present experiment was to recreate circumstances comparable to natural visual experience in order to evaluate the role of scene memory in guiding eye movements in a natural environment. Subjects performed a continuous visual-search task within an immersive virtual-reality environment over three days. We found that, similar to two-dimensional contexts, viewers rapidly learn the location of objects in the environment over time, and use spatial memory to guide search. Incidental fixations did not provide obvious benefit to subsequent search, suggesting that semantic contextual cues may often be just as efficient, or that many incidentally fixated items are not held in memory in the absence of a specific task. On the third day of the experience in the environment, previous search items changed in color. These items were fixated upon with increased probability relative to control objects, suggesting that memory-guided prioritization (or Surprise) may be a robust mechanism for attracting gaze to novel features of natural environments, in addition to task factors and simple spatial saliency.

  19. Innovative Visualizations Shed Light on Avian Nocturnal Migration.

    Directory of Open Access Journals (Sweden)

    Judy Shamoun-Baranes

    Full Text Available Globally, billions of flying animals undergo seasonal migrations, many of which occur at night. The temporal and spatial scales at which migrations occur and our inability to directly observe these nocturnal movements makes monitoring and characterizing this critical period in migratory animals' life cycles difficult. Remote sensing, therefore, has played an important role in our understanding of large-scale nocturnal bird migrations. Weather surveillance radar networks in Europe and North America have great potential for long-term low-cost monitoring of bird migration at scales that have previously been impossible to achieve. Such long-term monitoring, however, poses a number of challenges for the ornithological and ecological communities: how does one take advantage of this vast data resource, integrate information across multiple sensors and large spatial and temporal scales, and visually represent the data for interpretation and dissemination, considering the dynamic nature of migration? We assembled an interdisciplinary team of ecologists, meteorologists, computer scientists, and graphic designers to develop two different flow visualizations, which are interactive and open source, in order to create novel representations of broad-front nocturnal bird migration to address a primary impediment to long-term, large-scale nocturnal migration monitoring. We have applied these visualization techniques to mass bird migration events recorded by two different weather surveillance radar networks covering regions in Europe and North America. These applications show the flexibility and portability of such an approach. The visualizations provide an intuitive representation of the scale and dynamics of these complex systems, are easily accessible for a broad interest group, and are biologically insightful. Additionally, they facilitate fundamental ecological research, conservation, mitigation of human-wildlife conflicts, improvement of meteorological

  20. Do rufous hummingbirds (Selasphorus rufus) use visual beacons?

    Science.gov (United States)

    Hurly, T Andrew; Franz, Simone; Healy, Susan D

    2010-03-01

    Animals are often assumed to use highly conspicuous features of a goal to head directly to that goal ('beaconing'). In the field it is generally assumed that flowers serve as beacons to guide pollinators. Artificial hummingbird feeders are coloured red to serve a similar function. However, anecdotal reports suggest that hummingbirds return to feeder locations in the absence of the feeder (and thus the beacon). Here we test these reports for the first time in the field, using the natural territories of hummingbirds and manipulating flowers on a scale that is ecologically relevant to the birds. We compared the predictions from two distinct hypotheses as to how hummingbirds might use the visual features of rewards: the distant beacon hypothesis and the local cue hypothesis. In two field experiments, we found no evidence that rufous hummingbirds used a distant visual beacon to guide them to a rewarded location. In no case did birds abandon their approach to the goal location from a distance; rather they demonstrated remarkable accuracy of navigation by approaching to within about 70 cm of a rewarded flower's original location. Proximity varied depending on the size of the training flower: birds flew closer to a previously rewarded location if it had been previously signalled with a small beacon. Additionally, when provided with a beacon at a new location, birds did not fly directly to the new beacon. Taken together, we believe these data demonstrate that these hummingbirds depend little on visual characteristics to beacon to rewarded locations, but rather that they encode surrounding landmarks in order to reach the goal and then use the visual features of the goal as confirmation that they have arrived at the correct location.