WorldWideScience

Sample records for bottom-up saliency mediates

  1. Computational versus psychophysical bottom-up image saliency: A comparative evaluation study

    NARCIS (Netherlands)

    Toet, A.

    2011-01-01

    The predictions of 13 computational bottom-up saliency models and a newly introduced Multiscale Contrast Conspicuity (MCC) metric are compared with human visual conspicuity measurements. The agreement between human visual conspicuity estimates and model saliency predictions is quantified through the

  2. Modeling eye movements in visual agnosia with a saliency map approach: bottom-up guidance or top-down strategy?

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2011-08-01

Two recent papers (Foulsham, Barton, Kingstone, Dewhurst, & Underwood, 2009; Mannan, Kennard, & Husain, 2009) report that neuropsychological patients with a profound object recognition problem (visual agnosic subjects) show differences from healthy observers in the way their eye movements are controlled when looking at images. The interpretation of these papers is that eye movements can be modeled as the selection of points on a saliency map, and that agnosic subjects show an increased reliance on visual saliency, i.e., on low-level stimulus features such as brightness and contrast. Here we review this approach and present new data from our own experiments with an agnosic patient that quantify the relationship between saliency and fixation location. In addition, we consider whether the perceptual difficulties of individual patients might be modeled by selectively weighting the different features involved in a saliency map. Our data indicate that saliency is not always a good predictor of fixation in agnosia: even for our agnosic subject, as for normal observers, the saliency-fixation relationship varied as a function of the task. This means that top-down processes still have a significant effect on the earliest stages of scanning in the setting of visual agnosia, indicating severe limitations for the saliency map model. Top-down, active strategies, which are the hallmark of our human visual system, play a vital role in eye movement control, whether we know what we are looking at or not.

  3. Adaptive genetic variation mediates bottom-up and top-down control in an aquatic ecosystem

    Science.gov (United States)

    Rudman, Seth M.; Rodriguez-Cabal, Mariano A.; Stier, Adrian; Sato, Takuya; Heavyside, Julian; El-Sabaawi, Rana W.; Crutsinger, Gregory M.

    2015-01-01

Research in eco-evolutionary dynamics and community genetics has demonstrated that variation within a species can have strong impacts on associated communities and ecosystem processes. Yet, these studies have centred on individual focal species at single trophic levels, ignoring the role of phenotypic variation in multiple taxa within an ecosystem. Given the ubiquitous nature of local adaptation, and thus intraspecific variation, we sought to understand how combinations of intraspecific variation in multiple species within an ecosystem impact its ecology. Using two species that co-occur and demonstrate adaptation to their natal environments, black cottonwood (Populus trichocarpa) and three-spined stickleback (Gasterosteus aculeatus), we investigated the effects of intraspecific phenotypic variation on both top-down and bottom-up forces using a large-scale aquatic mesocosm experiment. Black cottonwood genotypes exhibit genetic variation in their productivity and consequently their leaf litter subsidies to the aquatic system, which mediates the strength of top-down effects from stickleback on prey abundances. Abundances of four common invertebrate prey species and available phosphorous, the most critically limiting nutrient in freshwater systems, are dictated by the interaction between genetic variation in cottonwood productivity and stickleback morphology. These interactive effects fit with ecological theory on the relationship between productivity and top-down control and are comparable in strength to the effects of predator addition. Our results illustrate that intraspecific variation, which can evolve rapidly, is an under-appreciated driver of community structure and ecosystem function, demonstrating that a multi-trophic perspective is essential to understanding the role of evolution in structuring ecological patterns. PMID:26203004

  4. Community context mediates the top-down vs. bottom-up effects of grazers on rocky shores.

    Science.gov (United States)

    Bracken, Matthew E S; Dolecal, Renee E; Long, Jeremy D

    2014-06-01

    Interactions between grazers and autotrophs are complex, including both top-down consumptive and bottom-up facilitative effects of grazers. Thus, in addition to consuming autotrophs, herbivores can also enhance autotroph biomass by recycling limiting nutrients, thereby increasing nutrient availability. Here, we evaluated these consumptive and facilitative interactions between snails (Littorina littorea) and seaweeds (Fucus vesiculosus and Ulva lactuca) on a rocky shore. We partitioned herbivores' total effects on seaweeds into their consumptive and facilitative effects and evaluated how community context (the presence of another seaweed species) modified the effects of Littorina on a focal seaweed species. Ulva, the more palatable species, enhanced the facilitative effects of Littorina on Fucus. Ulva did not modify the consumptive effect of Littorina on Fucus. Taken together, the consumptive and facilitative effects of snails on Fucus in the presence of Ulva balanced each other, resulting in no net effect of Littorina on Fucus. In contrast, the only effect of Fucus on Ulva was to enhance consumptive effects of Littorina on Ulva. Our results highlight the necessity of considering both consumptive and facilitative effects of herbivores on multiple autotroph species in order to gain a mechanistic understanding of grazers' top-down and bottom-up roles in structuring communities.

  5. Bottom-Up Earley Deduction

    CERN Document Server

    Erbach, G

    1995-01-01

    We propose a bottom-up variant of Earley deduction. Bottom-up deduction is preferable to top-down deduction because it allows incremental processing (even for head-driven grammars), it is data-driven, no subsumption check is needed, and preference values attached to lexical items can be used to guide best-first search. We discuss the scanning step for bottom-up Earley deduction and indexing schemes that help avoid useless deduction steps.
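The data-driven character of bottom-up deduction can be illustrated with a toy chart parser over a plain context-free grammar: lexical items seed the agenda (the scanning step), and completed constituents combine under binary grammar rules. This is a hypothetical minimal sketch, not the unification-grammar Earley deduction system of the paper; the grammar, lexicon, and function names are illustrative.

```python
# Toy bottom-up chart parsing: processing is seeded by the lexical
# items (data-driven), so partial constituents are available
# incrementally, and a plain duplicate check on the chart stands in
# for the subsumption check that top-down deduction would need.

def bottom_up_parse(tokens, grammar, lexicon):
    """Return the set of (category, start, end) constituents."""
    chart = set()
    # scanning step: every lexical item becomes an initial edge
    agenda = [(lexicon[w], i, i + 1) for i, w in enumerate(tokens)]
    while agenda:
        edge = agenda.pop()
        if edge in chart:
            continue  # already derived: skip (no subsumption check needed)
        chart.add(edge)
        # completion step: combine adjacent completed edges under binary rules
        for lhs, rhs in grammar:  # rule lhs -> rhs[0] rhs[1]
            for other in list(chart):
                if rhs == (edge[0], other[0]) and edge[2] == other[1]:
                    agenda.append((lhs, edge[1], other[2]))
                if rhs == (other[0], edge[0]) and other[2] == edge[1]:
                    agenda.append((lhs, other[1], edge[2]))
    return chart

grammar = [("S", ("NP", "VP")), ("VP", ("V", "NP"))]
lexicon = {"dogs": "NP", "chase": "V", "cats": "NP"}
chart = bottom_up_parse(["dogs", "chase", "cats"], grammar, lexicon)
```

Replacing the stack-style `agenda.pop()` with a priority queue keyed on preference values attached to the lexical edges would give the best-first search the abstract mentions.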

  6. Culture from the Bottom Up

    Science.gov (United States)

    Atkinson, Dwight; Sohn, Jija

    2013-01-01

    The culture concept has been severely criticized for its top-down nature in TESOL, leading arguably to its falling out of favor in the field. But what of the fact that people do "live culturally" (Ingold, 1994)? This article describes a case study of culture from the bottom up--culture as understood and enacted by its individual users.…

  7. Building from the Bottom Up

    Science.gov (United States)

    2003-05-01

by Shuguang Zhang. "Building from the bottom up": through billions of years of prebiotic and molecular selection and evolution, there are bio-organic ... Supported by ... Health, the Du Pont-MIT Alliance, and the Whitaker Foundation; the Intel Corporation Academic Program is also gratefully acknowledged for its generous donation.

  8. "Bottom-up" transparent electrodes.

    Science.gov (United States)

    Morag, Ahiud; Jelinek, Raz

    2016-11-15

Transparent electrodes (TEs) have attracted significant scientific, technological, and commercial interest in recent years due to the broad and growing use of such devices in electro-optics, consumer products (touch-screens, for example), solar cells, and others. Currently, almost all commercial TEs are fabricated through "top-down" approaches (primarily lithography-based techniques), with indium tin oxide (ITO) as the most common material employed. Several problems are encountered in this field, however, including the cost and complexity of TE production using top-down technologies, the limited structural flexibility, the high cost of indium, and the brittle nature and low transparency of ITO in the far-IR spectral region. Alternative routes based upon bottom-up processes have recently emerged as viable alternatives for the production of TEs. Bottom-up technologies are based upon self-assembly of building blocks - atoms, molecules, or nanoparticles - generating thin patterned films that exhibit both electrical conductivity and optical transparency. In this Feature Article we discuss the recent progress in this active and exciting field, including bottom-up TE systems produced from carbon materials (carbon nanotubes, graphene, graphene oxide), silver, gold, and other metals. The current hurdles encountered for broader use of bottom-up strategies, along with their significant potential, are analyzed.

  9. Bottom-up effects on attention capture and choice

    DEFF Research Database (Denmark)

    Peschel, Anne; Orquin, Jacob Lund; Mueller Loose, Simone

Attention processes and decision making are accepted to be closely linked together because only information that is attended to can be incorporated in the decision process. Little is known, however, about the extent to which bottom-up processes of attention affect stimulus selection and therefore the information available to form a decision. Does changing one visual cue in the stimulus set affect attention towards this cue, and what does that mean for the choice outcome? To address this, we conducted a combined eye tracking and choice experiment in a consumer choice setting with visual shelf simulations of different product categories. Surface size and visual saliency of a product label were manipulated to determine bottom-up effects on attention and choice. Results show a strong and significant increase in attention in terms of fixation likelihood towards product labels which are larger and more visually...

  10. Emergence of visual saliency from natural scenes via context-mediated probability distributions coding.

    Directory of Open Access Journals (Sweden)

    Jinhua Xu

    Full Text Available Visual saliency is the perceptual quality that makes some items in visual scenes stand out from their immediate contexts. Visual saliency plays important roles in natural vision in that saliency can direct eye movements, deploy attention, and facilitate tasks like object detection and scene understanding. A central unsolved issue is: What features should be encoded in the early visual cortex for detecting salient features in natural scenes? To explore this important issue, we propose a hypothesis that visual saliency is based on efficient encoding of the probability distributions (PDs of visual variables in specific contexts in natural scenes, referred to as context-mediated PDs in natural scenes. In this concept, computational units in the model of the early visual system do not act as feature detectors but rather as estimators of the context-mediated PDs of a full range of visual variables in natural scenes, which directly give rise to a measure of visual saliency of any input stimulus. To test this hypothesis, we developed a model of the context-mediated PDs in natural scenes using a modified algorithm for independent component analysis (ICA and derived a measure of visual saliency based on these PDs estimated from a set of natural scenes. We demonstrated that visual saliency based on the context-mediated PDs in natural scenes effectively predicts human gaze in free-viewing of both static and dynamic natural scenes. This study suggests that the computation based on the context-mediated PDs of visual variables in natural scenes may underlie the neural mechanism in the early visual cortex for detecting salient features in natural scenes.
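The central computation, saliency as the self-information of visual variables under estimated probability distributions, can be sketched as follows. This is an illustrative stand-in, assuming random linear filters in place of the ICA-learned features and simple smoothed histograms in place of the paper's context-mediated PD estimators; all names are hypothetical.

```python
import numpy as np

# Sketch: a patch is salient when its feature responses are improbable
# under the distributions estimated across the scene, i.e. saliency is
# the summed self-information -log p(response) over features.

rng = np.random.default_rng(0)

def patch_responses(image, filters, size=8):
    """Non-overlapping size x size patches -> filter responses per patch."""
    h, w = image.shape
    resp, coords = [], []
    for y in range(0, h - size + 1, size):
        for x in range(0, w - size + 1, size):
            resp.append(filters @ image[y:y+size, x:x+size].ravel())
            coords.append((y, x))
    return np.array(resp), coords

def saliency(responses, bins=16):
    """Saliency of each patch = sum over features of -log p(response)."""
    sal = np.zeros(len(responses))
    for f in range(responses.shape[1]):
        hist, edges = np.histogram(responses[:, f], bins=bins)
        p = (hist + 1) / (hist.sum() + bins)  # Laplace-smoothed PD
        idx = np.clip(np.digitize(responses[:, f], edges[1:-1]), 0, bins - 1)
        sal += -np.log(p[idx])
    return sal

# toy scene: uniform background with one odd bright patch
img = np.zeros((32, 32))
img[8:16, 16:24] = 1.0
filters = rng.standard_normal((4, 64))  # stand-in for ICA basis
resp, coords = patch_responses(img, filters)
sal = saliency(resp)
```

In this toy scene the odd patch produces rare filter responses and therefore the highest self-information, so `np.argmax(sal)` points at it; the paper's model plays the same game with PDs conditioned on context in natural scenes.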

  11. Bottom-up organic integrated circuits

    NARCIS (Netherlands)

    Smits, Edsger C. P.; Mathijssen, Simon G. J.; van Hal, Paul A.; Setayesh, Sepas; Geuns, Thomas C. T.; Mutsaers, Kees A. H. A.; Cantatore, Eugenio; Wondergem, Harry J.; Werzer, Oliver; Resel, Roland; Kemerink, Martijn; Kirchmeyer, Stephan; Muzafarov, Aziz M.; Ponomarenko, Sergei A.; de Boer, Bert; Blom, Paul W. M.; de Leeuw, Dago M.

    2008-01-01

Self-assembly - the autonomous organization of components into patterns and structures(1) - is a promising technology for the mass production of organic electronics. Making integrated circuits using a bottom-up approach involving self-assembling molecules was proposed(2) in the 1970s. The basic b

  12. Bottom-up holographic approach to QCD

    Energy Technology Data Exchange (ETDEWEB)

    Afonin, S. S. [V. A. Fock Department of Theoretical Physics, Saint Petersburg State University, 1 ul. Ulyanovskaya, 198504 (Russian Federation)

    2016-01-22

One of the best-known results of string theory is the idea that some strongly coupled gauge theories may have a dual description in terms of a higher-dimensional weakly coupled gravitational theory, the so-called AdS/CFT correspondence or gauge/gravity correspondence. The attempts to apply this idea to real QCD are often referred to as “holographic QCD” or the “AdS/QCD approach”. One direction in this field is to start from real QCD and guess a tentative dual higher-dimensional weakly coupled field model following the principles of gauge/gravity correspondence. The ensuing phenomenology can then be developed and compared with experimental data and with various theoretical results. Such a bottom-up holographic approach has turned out to be unexpectedly successful in many cases. In this short review, the technical aspects of the bottom-up holographic approach to QCD are explained, with the main emphasis placed on the soft wall model.

  13. Bottom-up holographic approach to QCD

    Science.gov (United States)

    Afonin, S. S.

    2016-01-01

One of the best-known results of string theory is the idea that some strongly coupled gauge theories may have a dual description in terms of a higher-dimensional weakly coupled gravitational theory, the so-called AdS/CFT correspondence or gauge/gravity correspondence. The attempts to apply this idea to real QCD are often referred to as "holographic QCD" or the "AdS/QCD approach". One direction in this field is to start from real QCD and guess a tentative dual higher-dimensional weakly coupled field model following the principles of gauge/gravity correspondence. The ensuing phenomenology can then be developed and compared with experimental data and with various theoretical results. Such a bottom-up holographic approach has turned out to be unexpectedly successful in many cases. In this short review, the technical aspects of the bottom-up holographic approach to QCD are explained, with the main emphasis placed on the soft wall model.

  14. Bottom-up assembly of metallic germanium.

    Science.gov (United States)

    Scappucci, Giordano; Klesse, Wolfgang M; Yeoh, LaReine A; Carter, Damien J; Warschkow, Oliver; Marks, Nigel A; Jaeger, David L; Capellini, Giovanni; Simmons, Michelle Y; Hamilton, Alexander R

    2015-08-10

Extending chip performance beyond current limits of miniaturisation requires new materials and functionalities that integrate well with the silicon platform. Germanium fits these requirements and has been proposed as a high-mobility channel material, a light emitting medium in silicon-integrated lasers, and a plasmonic conductor for bio-sensing. Common to these diverse applications is the need for homogeneous, high electron densities in three-dimensions (3D). Here we use a bottom-up approach to demonstrate the 3D assembly of atomically sharp doping profiles in germanium by a repeated stacking of two-dimensional (2D) high-density phosphorus layers. This produces high-density (10^19 to 10^20 cm^-3), low-resistivity (10^-4 Ω·cm) metallic germanium of precisely defined thickness, beyond the capabilities of diffusion-based doping technologies. We demonstrate that free electrons from distinct 2D dopant layers coalesce into a homogeneous 3D conductor using anisotropic quantum interference measurements, atom probe tomography, and density functional theory.

  15. Information theoretic preattentive saliency

    DEFF Research Database (Denmark)

    Loog, Marco

    2011-01-01

Employing an information-theoretic operational definition of bottom-up attention from the field of computational visual perception, a very general expression for saliency is provided. As opposed to many of the current approaches to determining a saliency map, there is no need for an explicit data... Another choice of features is, rather loosely, inspired by the success of histogram of oriented gradient descriptors and proves to provide state-of-the-art results on a collaborative benchmark for region of interest detection. © 2011 IEEE.

  16. Top Down Chemistry Versus Bottom up Chemistry

    Science.gov (United States)

    Oka, Takeshi; Witt, Adolf N.

    2016-06-01

The idea of interstellar top-down chemistry (TDC), in which molecules are produced from the decomposition of larger molecules and dust, in contrast to ordinary bottom-up chemistry (BUC), in which molecules are produced synthetically from smaller molecules and atoms in the ISM, has been proposed in the chemistry of PAHs and carbon-chain molecules, both for diffuse and for dense clouds. A simple and natural idea, it must have occurred to many people and has been in the air for some time. The validity of this hypothesis is apparent for diffuse clouds in view of the observed low abundance of small molecules and its rapid decrease with molecular size on the one hand, and the high column densities of large carbon molecules demonstrated by the many intense diffuse interstellar bands (DIBs) on the other. The recent identification of C60+ as the carrier of 5 near-infrared DIBs with a high column density of 2×10^13 cm^-2 by Maier and others confirms the TDC. This means that the large molecules and dust produced in the high-density, high-temperature environment of circumstellar envelopes are sufficiently stable to survive decomposition by stellar UV radiation, cosmic rays, C-shocks, etc. over the long time (≥ 10^7 years) of their migration to diffuse clouds, and seems to disagree with the consensus in the field of interstellar grains. The stability of molecules and aggregates in the diffuse interstellar medium will be discussed. Duley, W. W. 2006, Faraday Discuss., 133, 415; Zhen, J., Castellanos, P., Paardekooper, D. M., Linnartz, H., & Tielens, A. G. G. M. 2014, ApJL, 797, L30; Huang, J., & Oka, T. 2015, Mol. Phys., 113, 2159; Guzmán, V. V., Pety, J., Goicoechea, J. R., Gerin, M., Roueff, E., Gratier, P., & Öberg, K. I. 2015, ApJL, 800, L33; L. Ziurys has sent us many papers, beginning with Ziurys, L. M. 2006, PNAS, 103, 12274, indicating she had long been a proponent of the idea; Campbell, E. K., Holz, M., Maier, J. P., Gerlich, D., Walker, G. A. H., & Bohlender, D. 2016, ApJ, in press; Draine, B. T. 2003

  17. Bottom-up Initiatives for Photovoltaic: Incentives and Barriers

    Directory of Open Access Journals (Sweden)

    Kathrin Reinsberger

    2014-06-01

    Full Text Available When facing the challenge of restructuring the energy system, bottom-up initiatives can aid the diffusion of decentralized and clean energy technologies. We focused here on a bottom-up initiative of citizen-funded and citizen-operated photovoltaic power plants. The project follows a case study-based approach and examines two different community initiatives. The aim is to investigate the potential incentives and barriers relating to participation or non-participation in predefined community PV projects. Qualitative, as well as quantitative empirical research was used to examine the key factors in the further development of bottom-up initiatives as contributors to a general energy transition.

  18. Nanoelectronics: Thermoelectric Phenomena in «Bottom-Up» Approach

    Directory of Open Access Journals (Sweden)

    Yu.A. Kruglyak

    2014-04-01

Full Text Available Thermoelectric phenomena of Seebeck and Peltier, quality indicators and thermoelectric optimization, and ballistic and diffusive phonon heat currents are discussed within the framework of the «bottom-up» approach of modern nanoelectronics.

  19. On a Bottom-Up Approach to Scientific Discovery

    Science.gov (United States)

    Huang, Xiang

    2014-03-01

Two popular models of scientific discovery, abduction and the inference to the best explanation (IBE), presuppose that the reason for accepting a hypothetical explanation A comes from the epistemic and/or explanatory force manifested in the fact that observed fact C is an inferred consequence of A. However, not all discoveries take this top-down procedure from A to C, in which the result of discovery A implies the observed fact C. I contend that discovery can be modeled as a bottom-up procedure based on inductive and analogical rules that lead us to infer from C to A. I take the theory of Dignaga, an Indian medieval logician, as a model of this bottom-up approach. My argument has three parts: 1) this bottom-up approach applies to both commonsense and scientific discovery without the assumption that C has to be an inferred consequence of A; 2) this bottom-up approach helps us get around problems that crop up in applying abduction and/or IBE, which means that scientific discovery need not be modeled exclusively by top-down approaches; and 3) the existence of the bottom-up approach requires a pluralist attitude towards the modeling of scientific discovery.

  20. A plea for Global Health Action bottom-up

    Directory of Open Access Journals (Sweden)

    Ulrich Laaser

    2016-10-01

Full Text Available This opinion piece focuses on global health action through hands-on, bottom-up practice. Initiating an organizational framework and securing financial efficiency are, however, essential, and both are clearly a domain of well-trained public health professionals. Examples of action are cited in the four main areas of global threats: planetary climate change, global divides and inequity, global insecurity and violent conflicts, and global instability and financial crises. In conclusion, a stable health systems policy framework would greatly enhance success. However, such an organisational framework dries out if not linked to public debates channelling fresh thoughts and controversial proposals: structural stabilisation is essential but has to serve, not to dominate, bottom-up activities. In other words, horizontal management is required, a balanced equilibrium between bottom-up initiative and top-down support. Last but not least, rewarding voluntary and charity work with public acknowledgement is essential.

  1. Bottom up approaches to defining future climate mitigation commitments

    NARCIS (Netherlands)

    Elzen MGJ den; Berk MM; KMD

    2005-01-01

This report describes the results of a number of alternative, bottom-up approaches proposed in the literature for shaping commitments, i.e. technology and performance standards, technology research and development agreements, sectoral commitments, S-CDM (sectoral CDM) and SD-P

  2. Bottom up approaches to defining future climate mitigation commitments

    NARCIS (Netherlands)

    Elzen MGJ den; Berk MM; KMD

    2005-01-01

    This report analyses a number of alternative, bottom-up approaches, i.e. technology and performance standards; technology Research and Development agreements, sectoral targets (national /transnational), sector based CDM, and sustainable development policies and measures (SD-PAMs). Included are tech

  3. A bottom-up approach to MEDLINE indexing recommendations

    DEFF Research Database (Denmark)

    Jimeno-Yepes, Antonio; Wilkowski, Bartlomiej; Mork, James/G

    2011-01-01

    MEDLINE indexing performed by the US National Library of Medicine staff describes the essence of a biomedical publication in about 14 Medical Subject Headings (MeSH). Since 2002, this task is assisted by the Medical Text Indexer (MTI) program. We present a bottom-up approach to MEDLINE indexing...

  4. Bottom-Up Analysis of Single-Case Research Designs

    Science.gov (United States)

    Parker, Richard I.; Vannest, Kimberly J.

    2012-01-01

    This paper defines and promotes the qualities of a "bottom-up" approach to single-case research (SCR) data analysis. Although "top-down" models, for example, multi-level or hierarchical linear models, are gaining momentum and have much to offer, interventionists should be cautious about analyses that are not easily understood, are not governed by…

  5. Reading Nature from a "Bottom-Up" Perspective

    Science.gov (United States)

    Magntorn, Ola; Hellden, Gustav

    2007-01-01

    This paper reports on a study of ecology teaching and learning in a Swedish primary school class (age 10-11 yrs). A teaching sequence was designed to help students read nature in a river ecosystem. The teaching sequence had a "bottom up" approach, taking as its starting point a common key organism--the freshwater shrimp. From this species and its…

  6. Shape saliency for remote sensing image

    Science.gov (United States)

    Xu, Sheng; Hong, Huo; Fang, Tao; Li, Deren

    2007-11-01

In this paper, a saliency measure based solely on the shape feature of each object in the image is described. Instead of the biologically inspired bottom-up Itti model, dissimilarity is measured using shape features, with Fourier descriptors used to quantify the dissimilarity. In this model, an object is determined to be a salient region when it differs greatly from the others. The saliency values are then ranked to generate a saliency map, so that the attention-shift process can be recorded. Results on psychological images and remote sensing images are shown and discussed in the paper.
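A minimal sketch of shape comparison with Fourier descriptors, assuming contours have already been extracted and sampled as a fixed number of ordered boundary points. The normalization choices here (dropping the DC term, taking magnitudes, dividing by the first harmonic) are one standard way to obtain translation, rotation, start-point, and scale invariance; function names and coefficient counts are illustrative.

```python
import numpy as np

# A closed contour is encoded as complex numbers x + iy; the magnitudes
# of its low-frequency Fourier coefficients (both positive and negative
# frequencies) form the descriptor, and shapes are compared by
# Euclidean distance between descriptors.

def fourier_descriptor(contour, n_coeffs=8):
    """contour: (N, 2) array of ordered boundary points."""
    z = contour[:, 0] + 1j * contour[:, 1]
    F = np.fft.fft(z)
    F[0] = 0                          # drop DC term -> translation invariance
    mags = np.abs(F)                  # magnitudes -> rotation/start-point invariance
    mags = mags / (mags[1] + 1e-12)   # divide by first harmonic -> scale invariance
    return np.concatenate([mags[1:n_coeffs + 1], mags[-n_coeffs:]])

def shape_dissimilarity(c1, c2):
    return np.linalg.norm(fourier_descriptor(c1) - fourier_descriptor(c2))

# toy contours: a circle, a scaled/shifted copy, and an elongated ellipse
t = np.linspace(0, 2 * np.pi, 64, endpoint=False)
circle = np.stack([np.cos(t), np.sin(t)], axis=1)
big_circle = 3 * circle + 5                          # same shape, moved and scaled
ellipse = np.stack([4 * np.cos(t), np.sin(t)], axis=1)

d_same = shape_dissimilarity(circle, big_circle)     # near zero
d_diff = shape_dissimilarity(circle, ellipse)        # clearly positive
```

An object whose descriptor distance to every other object in the scene is large would then be ranked as salient.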

  7. Una implementación computacional de un modelo de atención visual Bottom-up aplicado a escenas naturales/A Computational Implementation of a Bottom-up Visual Attention Model Applied to Natural Scenes

    Directory of Open Access Journals (Sweden)

    Juan F. Ramírez Villegas

    2011-12-01

Full Text Available The bottom-up visual attention model proposed by Itti et al., 2000 [1], has been a popular model in that it exhibits certain neurobiological evidence of primate vision. This work complements the computational model of this phenomenon with the realistic dynamics of a neural network. The approach is based on the existence of topographical maps representing the saliency of the objects in the visual field, which are combined into a general representation (the saliency map). This representation is the input to a dynamic neural network with local and global collaborative and competitive interactions that converge on the main particularities (objects) of the visual scene.
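The feature-map stage at the heart of an Itti-style bottom-up model can be sketched with a single intensity channel: center-surround differences taken across scales are summed into a saliency map. This is a hypothetical minimal sketch: box filters stand in for the Gaussian pyramids, the scale pairs are arbitrary choices, and the color, orientation, normalization, and winner-take-all machinery of the full model is omitted.

```python
import numpy as np

def box_blur(img, k):
    """Same-size k x k box filter via an integral image (k odd)."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    I = np.pad(np.cumsum(np.cumsum(p, axis=0), axis=1), ((1, 0), (1, 0)))
    h, w = img.shape
    return (I[k:k+h, k:k+w] - I[:h, k:k+w] - I[k:k+h, :w] + I[:h, :w]) / (k * k)

def center_surround_saliency(intensity):
    """Sum |center - surround| over a few (center, surround) scale pairs."""
    sal = np.zeros_like(intensity)
    for c, s in [(3, 7), (3, 11), (7, 15)]:   # illustrative scale pairs
        sal += np.abs(box_blur(intensity, c) - box_blur(intensity, s))
    return sal / sal.max()

# toy scene: a small bright blob on a dark background
img = np.zeros((40, 40))
img[18:22, 18:22] = 1.0
sal = center_surround_saliency(img)
peak = np.unravel_index(np.argmax(sal), sal.shape)  # lands on the blob
```

In the full architecture such a map would feed the dynamic neural network described above, whose competitive interactions select the winning locations.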

  8. A Bottom-Up Approach to SUSY Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Horn, Claus; /SLAC

    2009-08-03

This paper proposes a new way to perform event generation and analysis in searches for new physics at the LHC. An abstract notation is used to describe the new particles at a level that corresponds better to the detector resolution of the LHC experiments. In this way the SUSY discovery space can be decomposed into a small number of eigenmodes, each with only a few parameters, which makes it possible to investigate the SUSY parameter space in a model-independent way. By focusing on the experimental observables for each process investigated, the Bottom-Up Approach allows one to systematically study the borders of the experimental efficiencies and thus to extend the sensitivity to new physics.

  9. A Bottom-Up Approach to SUSY Analyses

    Energy Technology Data Exchange (ETDEWEB)

    Horn, Claus; /SLAC

    2011-11-11

This paper proposes a new way to do event generation and analysis in searches for new physics at the LHC. An abstract notation is used to describe the new particles at a level that corresponds better to the detector resolution of the LHC experiments. In this way the SUSY discovery space can be decomposed into a small number of eigenmodes, each with only a few parameters, which makes it possible to investigate the SUSY parameter space in a model-independent way. By focusing on the experimental observables for each process investigated, the Bottom-Up Approach allows one to systematically study the borders of the experimental efficiencies and thus to extend the sensitivity to new physics.

  10. Recent progress in backreacted bottom-up holographic QCD

    Energy Technology Data Exchange (ETDEWEB)

    Järvinen, Matti [Laboratoire de Physique Théorique, École Normale Supérieure, 24 rue Lhomond, 75231 Paris Cedex 05 (France)

    2016-01-22

Recent progress in constructing holographic models for QCD is discussed, concentrating on bottom-up models which holographically implement the renormalization group flow of QCD. The dynamics of gluons can be modeled by using a string-inspired model termed improved holographic QCD, and flavor can be added by introducing space-filling branes in this model. The flavor sector fully backreacts on the glue in the Veneziano limit, giving rise to a class of models called V-QCD. The phase diagrams and spectra of V-QCD are in good agreement with results for QCD obtained by other methods.

  11. Inverse Magnetic Catalysis in Bottom-Up Holographic QCD

    CERN Document Server

    Evans, Nick; Scott, Marc

    2016-01-01

We explore the effect of a magnetic field on chiral condensation in QCD via a simple bottom-up holographic model which inputs QCD dynamics through the running of the anomalous dimension of the quark bilinear. Bottom-up holography is a form of effective field theory, and we use it to explore the dependence on the coefficients of the two lowest-order terms linking the magnetic field and the quark condensate. In the massless theory, we identify a region of parameter space where magnetic catalysis occurs at zero temperature but inverse magnetic catalysis at temperatures of order the thermal phase transition. The model shows non-monotonic behaviour of the condensate with B at intermediate T similar to the lattice data. This behaviour is due to the separation of the meson melting and chiral transitions in the holographic framework. The introduction of quark mass raises the scale of B at which inverse catalysis takes over from catalysis, until the inverse catalysis lies outside the regime of validity of the effective descr...

  12. Emotional face expression modulates occipital-frontal effective connectivity during memory formation in a bottom-up fashion.

    Science.gov (United States)

    Xiu, Daiming; Geiger, Maximilian J; Klaver, Peter

    2015-01-01

    This study investigated the role of bottom-up and top-down neural mechanisms in the processing of emotional face expression during memory formation. Functional brain imaging data was acquired during incidental learning of positive ("happy"), neutral and negative ("angry" or "fearful") faces. Dynamic Causal Modeling (DCM) was applied on the functional magnetic resonance imaging (fMRI) data to characterize effective connectivity within a brain network involving face perception (inferior occipital gyrus and fusiform gyrus) and successful memory formation related areas (hippocampus, superior parietal lobule, amygdala, and orbitofrontal cortex). The bottom-up models assumed processing of emotional face expression along feed forward pathways to the orbitofrontal cortex. The top-down models assumed that the orbitofrontal cortex processed emotional valence and mediated connections to the hippocampus. A subsequent recognition memory test showed an effect of negative emotion on the response bias, but not on memory performance. Our DCM findings showed that the bottom-up model family of effective connectivity best explained the data across all subjects and specified that emotion affected most bottom-up connections to the orbitofrontal cortex, especially from the occipital visual cortex and superior parietal lobule. Of those pathways to the orbitofrontal cortex the connection from the inferior occipital gyrus correlated with memory performance independently of valence. We suggest that bottom-up neural mechanisms support effects of emotional face expression and memory formation in a parallel and partially overlapping fashion.

  13. Emotional face expression modulates occipital-frontal effective connectivity during memory formation in a bottom-up fashion

    Directory of Open Access Journals (Sweden)

    Daiming eXiu

    2015-04-01

    Full Text Available This study investigated the role of bottom-up and top-down neural mechanisms in the processing of emotional face expression during memory formation. Functional brain imaging data was acquired during incidental learning of positive (‘happy’), neutral and negative (‘angry’ or ‘fearful’) faces. Dynamic Causal Modeling (DCM) was applied on the fMRI data to characterize effective connectivity within a brain network involving face perception (inferior occipital gyrus and fusiform gyrus) and successful memory formation related areas (hippocampus, superior parietal lobule, amygdala and orbitofrontal cortex). The bottom-up models assumed processing of emotional face expression along feed forward pathways to the orbitofrontal cortex. The top-down models assumed that the orbitofrontal cortex processed emotional valence and mediated connections to the hippocampus. A subsequent recognition memory test showed an effect of negative emotion on the response bias, but not on memory performance. Our DCM findings showed that the bottom-up model family of effective connectivity best explained the data across all subjects and specified that emotion affected most bottom-up connections to the orbitofrontal cortex, especially from the occipital visual cortex and superior parietal lobule. Of those pathways to the orbitofrontal cortex the connection from the inferior occipital gyrus correlated with memory performance independently of valence. We suggest that bottom-up neural mechanisms support effects of emotional face expression and memory formation in a parallel and partially overlapping fashion.

  14. Fast full resolution saliency detection based on incoherent imaging system

    Science.gov (United States)

    Lin, Guang; Zhao, Jufeng; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting

    2016-08-01

    Image saliency detection is widely applied to many tasks in the field of computer vision. In this paper, we combine saliency detection with Fourier optics to accelerate the saliency detection algorithm. An actual optical saliency detection system is constructed within the framework of an incoherent imaging system. Additionally, the application of our system to implement the bottom-up rapid pre-saliency process of primate visual saliency is discussed using a dual-resolution camera. A set of experiments on our system is conducted and discussed. We also compare our method with pure computer methods. The results show our system can produce full resolution saliency maps faster and more effectively.

  15. Saliency detection for videos using 3D FFT local spectra

    Science.gov (United States)

    Long, Zhiling; AlRegib, Ghassan

    2015-03-01

    Bottom-up spatio-temporal saliency detection identifies perceptually important regions of interest in video sequences. The center-surround model proves to be useful for visual saliency detection. In this work, we explore using 3D FFT local spectra as features for saliency detection within the center-surround framework. We develop a spectral location based decomposition scheme to divide a 3D FFT cube into two components, one related to temporal changes and the other related to spatial changes. Temporal saliency and spatial saliency are detected separately using features derived from each spectral component through a simple center-surround comparison method. The two detection results are then combined to yield a saliency map. We apply the same detection algorithm to different color channels (YIQ) and incorporate the results into the final saliency determination. The proposed technique is tested with the public CRCNS database. Both visual and numerical evaluations verify the promising performance of our technique.
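The centre-surround comparison over local 3D FFT magnitude features described above can be sketched in a few lines. The toy sketch below (function name, patch size, and the use of the full magnitude spectrum as the feature are our illustrative choices, not the authors' code) scores each spatial patch of a grayscale video by how far its local 3D FFT spectrum deviates from the mean spectrum of its neighbours; the paper additionally decomposes the spectrum into temporal and spatial components and runs per YIQ channel, which this sketch omits:

```python
import numpy as np

def fft_saliency(video, patch=8):
    """Toy spatio-temporal saliency via local 3D FFT magnitude features
    and a centre-surround comparison. `video` is a (T, H, W) float array.
    All names and choices here are illustrative, not the paper's code."""
    T, H, W = video.shape
    gh, gw = H // patch, W // patch
    # Feature per spatial patch: magnitude spectrum of its full 3D FFT cube.
    feats = {(i, j): np.abs(np.fft.fftn(
                 video[:, i*patch:(i+1)*patch, j*patch:(j+1)*patch])).ravel()
             for i in range(gh) for j in range(gw)}
    sal = np.zeros((gh, gw))
    for i in range(gh):
        for j in range(gw):
            # Surround = mean feature of the 8-neighbourhood (clipped at borders).
            neigh = [feats[a, b]
                     for a in range(max(i-1, 0), min(i+2, gh))
                     for b in range(max(j-1, 0), min(j+2, gw))
                     if (a, b) != (i, j)]
            sal[i, j] = np.linalg.norm(feats[i, j] - np.mean(neigh, axis=0))
    return sal
```

A patch that flickers against a static background receives the highest score, since its spectrum differs most from its surround.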

  16. Contextualised ICT4D: a Bottom-Up Approach

    DEFF Research Database (Denmark)

    Lund, Henrik Hautop; Sutinen, Erkki

    2010-01-01

    The term ICT4D refers to the opportunities of Information and Communication Technology (ICT) as an agent of development. Much of the research in the field is based on evaluating the feasibility of existing technologies, mostly of Western or Asian origin, in the context of developing countries. In a certain way, this agenda can be understood as a top-down approach which transfers technology in a hierarchical way to actual users. Complementary to the traditional approach, a bottom-up approach starts by identifying communities that are ready to participate in a process to use technology to transform their own strengths to new levels by designing appropriate technologies with experts of technology and design. The bottom-up approach requires a new kind of ICT education at the undergraduate level. An example of the development of a contextualized IT degree program at Tumaini University in Tanzania shows...

  17. A bottom-up approach to MEDLINE indexing recommendations.

    Science.gov (United States)

    Jimeno-Yepes, Antonio; Wilkowski, Bartłomiej; Mork, James G; Van Lenten, Elizabeth; Fushman, Dina Demner; Aronson, Alan R

    2011-01-01

    MEDLINE indexing performed by US National Library of Medicine staff describes the essence of a biomedical publication in about 14 Medical Subject Headings (MeSH). Since 2002, this task has been assisted by the Medical Text Indexer (MTI) program. We present a bottom-up approach to MEDLINE indexing in which the abstract is searched for indicators of a specific MeSH recommendation in a two-step process. Supervised machine learning combined with triage rules improves the sensitivity of recommendations while keeping the number of recommended terms relatively small. The improvement in recommendations observed in this work warrants further exploration of this approach to MTI recommendations on a larger set of MeSH headings.
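The two-step idea of pairing a learned score with a triage rule can be illustrated with a toy sketch. Everything below (the headings, trigger terms, scores, and threshold) is invented for illustration and is not MTI's actual logic: a per-heading score stands in for the supervised classifier, and a heading is recommended only when its triage rule also fires, keeping the list small:

```python
# Toy two-step recommender: classifier-style scores gated by triage rules.
# All headings, trigger terms and thresholds are invented for illustration.
triggers = {"Neoplasms": {"tumor", "cancer"}, "Child": {"child", "pediatric"}}
scores = {"Neoplasms": 0.9, "Child": 0.3, "Adult": 0.8}

def recommend(abstract, threshold=0.5):
    """Recommend a heading only if its score clears the threshold AND
    at least one of its trigger terms appears in the abstract."""
    words = set(abstract.lower().split())
    return [heading for heading, score in scores.items()
            if score >= threshold and triggers.get(heading, set()) & words]
```

For example, an abstract mentioning "tumor" yields only the high-scoring heading whose trigger fired; a high score alone ("Adult") is not enough.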

  18. BitCube: A Bottom-Up Cubing Engine

    Science.gov (United States)

    Ferro, Alfredo; Giugno, Rosalba; Puglisi, Piera Laura; Pulvirenti, Alfredo

    Enhancing online analytical processing through efficient cube computation plays a key role in Data Warehouse management. Hashing, grouping and mining techniques are commonly used to improve cube pre-computation. BitCube, a fast cubing method which uses bitmaps as inverted indexes for grouping, is presented. It horizontally partitions data according to the values of one dimension and, for each resulting fragment, performs grouping following bottom-up criteria. BitCube also allows partial materialization based on iceberg conditions to treat large datasets for which a full cube pre-computation is too expensive. The space requirement of bitmaps is optimized by applying an adaptation of the WAH compression technique. Experimental analysis, on both synthetic and real datasets, shows that BitCube outperforms previous algorithms for full cube computation and is comparable for iceberg cubing.
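The core trick of using bitmaps as inverted indexes for grouping can be sketched as follows; the data, names and structure are illustrative and not taken from the BitCube implementation. Each distinct value of a dimension gets one bitmap, and a group-by cell is just a bitwise AND of the relevant bitmaps:

```python
import numpy as np

# Toy fact table: (year, region, measure). Invented for illustration.
rows = [("2011", "east", 10), ("2011", "west", 5),
        ("2012", "east", 7), ("2011", "east", 3)]

def bitmap_index(rows, col):
    """One bitmap per distinct value of column `col`:
    bit i is set iff row i holds that value."""
    index = {}
    for i, row in enumerate(rows):
        index.setdefault(row[col], np.zeros(len(rows), dtype=bool))[i] = True
    return index

year_idx = bitmap_index(rows, 0)
region_idx = bitmap_index(rows, 1)

# A (year, region) group-by cell is a bitwise AND of the value bitmaps;
# the aggregate runs only over the selected rows.
cell = year_idx["2011"] & region_idx["east"]
total = sum(row[2] for i, row in enumerate(rows) if cell[i])
```

In a real engine the boolean vectors would be WAH-compressed, but the AND-then-aggregate pattern is the same.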

  19. Bottom-up fabrication of graphene nanostructures on Ru(1010).

    Science.gov (United States)

    Song, Junjie; Zhang, Han-jie; Cai, Yiliang; Zhang, Yuxi; Bao, Shining; He, Pimo

    2016-02-01

    Investigations on the bottom-up fabrication of graphene nanostructures with 10, 10'-dibromo-9, 9'-bianthryl (DBBA) as a precursor on Ru(1010) were carried out using scanning tunnelling microscopy (STM) and density functional theory (DFT) calculations. Upon annealing the sample at submonolayer DBBA coverage, N = 7 graphene nanoribbons (GNRs) aligned along the [1210] direction form. Higher DBBA coverage and higher annealing temperature lead to the merging of GNRs into ribbon-like graphene nanoflakes with multiple orientations. These nanoflakes show different Moiré patterns, and their structures were determined by DFT simulations. The results showed that GNRs possess growth preference on the Ru(1010) substrate with a rectangular unit cell, and GNRs with armchair and zigzag boundaries are obtainable. Further DFT calculations suggest that the interaction between graphene and the substrate controls the orientations of the graphene overlayer and the growth of graphene on Ru(1010).

  20. Making the results of bottom-up energy savings comparable

    Directory of Open Access Journals (Sweden)

    Moser Simon

    2012-01-01

    Full Text Available The Energy Service Directive (ESD) has pushed forward the issue of energy savings calculations without clarifying the methodological basis. Savings achieved in the Member States are calculated with rather non-transparent and hardly comparable bottom-up (BU) methods. This paper develops the idea of parallel evaluation tracks separating the Member States’ issue of ESD verification from comparable savings calculations. Comparability is ensured by developing a standardised BU calculation kernel for different energy efficiency improvement (EEI) actions which simultaneously depicts the different calculation options in a structured way (e.g. baseline definition, system boundaries, double counting). Due to the heterogeneity of BU calculations, the approach requires a central database where Member States feed in input data on BU actions according to a predefined structure. The paper demonstrates the proposed approach, including a concrete example of application.

  1. Bottom-Up Discrete Symmetries for Cabibbo Mixing

    CERN Document Server

    Varzielas, Ivo de Medeiros; Talbert, Jim

    2016-01-01

    We perform a bottom-up search for discrete non-Abelian symmetries capable of quantizing the Cabibbo angle that parameterizes CKM mixing. Given a particular Abelian symmetry structure in the up and down sectors, we construct representations of the associated residual generators which explicitly depend on the degrees of freedom present in our effective mixing matrix. We then discretize those degrees of freedom and utilize the Groups, Algorithms, Programming (GAP) package to close the associated finite groups. This short study is performed in the context of recent results indicating that, without resorting to special model-dependent corrections, no small-order finite group can simultaneously predict all four parameters of the three-generation CKM matrix and that only groups of $\\mathcal{O}(10^{2})$ can predict the analogous parameters of the leptonic PMNS matrix, regardless of whether neutrinos are Dirac or Majorana particles. Therefore a natural model of flavour might instead incorporate small(er) finite groups...

  2. Spatiochromatic Context Modeling for Color Saliency Analysis.

    Science.gov (United States)

    Zhang, Jun; Wang, Meng; Zhang, Shengping; Li, Xuelong; Wu, Xindong

    2016-06-01

    Visual saliency is one of the most noteworthy perceptual abilities of human vision. Recent progress in cognitive psychology suggests that: 1) visual saliency analysis is mainly completed by the bottom-up mechanism consisting of feedforward low-level processing in primary visual cortex (area V1) and 2) color interacts with spatial cues and is influenced by the neighborhood context, and thus plays an important role in visual saliency analysis. From a computational perspective, most existing saliency modeling approaches exploit multiple independent visual cues, irrespective of their interactions (or do not compute them explicitly), and ignore contextual influences induced by neighboring colors. In addition, the use of color is often underestimated in visual saliency analysis. In this paper, we propose a simple yet effective color saliency model that considers color as the only visual cue and mimics the color processing in V1. Our approach uses region-/boundary-defined color features with spatiochromatic filtering by considering local color-orientation interactions, and therefore captures homogeneous color elements, subtle textures within the object and the overall salient object from the color image. To account for color contextual influences, we present a divisive normalization method for chromatic stimuli through the pooling of contrary/complementary color units. We further define a color perceptual metric over the entire scene to produce saliency maps for color regions and color boundaries individually. These maps are finally globally integrated into a single saliency map. The final saliency map is produced by Gaussian blurring for robustness. We evaluate the proposed method on both synthetic stimuli and several benchmark saliency data sets, from visual saliency analysis to salient object detection. The experimental results demonstrate that the use of color as a unique visual cue achieves competitive results on par with or better than 12 state
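Divisive normalization, mentioned above, has a simple canonical form: a unit's response is divided by the pooled activity of a normalization pool plus a semi-saturation constant. The following is a generic toy sketch; the opponent-channel pooling choice and all constants are our assumptions, not the paper's model:

```python
import numpy as np

def divisive_normalize(resp, pooled, sigma=0.1):
    """Divisive normalization: each unit's response is divided by pooled
    activity of its normalization pool plus a semi-saturation constant
    (sigma). A generic sketch, not the paper's exact formulation."""
    return resp / (sigma + pooled)

# Two opponent chromatic channels (e.g. red-green and blue-yellow responses
# at three locations). Values are invented for illustration.
rg = np.array([0.9, 0.1, 0.5])
by = np.array([0.1, 0.1, 0.5])

# Pool the contrary/complementary units, then normalize each channel by it:
# a strong response surrounded by strong pooled activity is suppressed.
pool = np.sqrt(rg**2 + by**2)
rg_norm = divisive_normalize(rg, pool)
```

The effect is contrast gain control: responses are rescaled relative to local chromatic context rather than reported in absolute terms.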

  3. BUEES: a bottom-up event extraction system

    Institute of Scientific and Technical Information of China (English)

    Xiao DING; Bing QIN; Ting LIU

    2015-01-01

    Traditional event extraction systems focus mainly on event type identification and event participant extraction based on pre-specified event type paradigms and manually annotated corpora. However, different domains have different event type paradigms. When transferring to a new domain, we have to build a new event type paradigm and annotate a new corpus from scratch. This kind of conventional event extraction system requires massive human effort, and hence prevents event extraction from being widely applicable. In this paper, we present BUEES, a bottom-up event extraction system, which extracts events from the web in a completely unsupervised way. The system automatically builds an event type paradigm in the input corpus, and then proceeds to extract a large number of instance patterns of these events. Subsequently, the system extracts event arguments according to these patterns. By conducting a series of experiments, we demonstrate the good performance of BUEES and compare it to a state-of-the-art Chinese event extraction system, i.e., a supervised event extraction system. Experimental results show that BUEES performs comparably (5% higher F-measure in event type identification and 3% higher F-measure in event argument extraction), but without any human effort.

  4. Top down and bottom up engineering of bone.

    Science.gov (United States)

    Knothe Tate, Melissa L

    2011-01-11

    The goal of this retrospective article is to place the body of my lab's multiscale mechanobiology work in the context of top-down and bottom-up engineering of bone. We have used biosystems engineering, computational modeling and novel experimental approaches to understand bone physiology, in health and disease, across time (in utero, postnatal growth, maturity, aging and death, as well as evolution) and across length scales (a single bone like a femur, m; a sample of bone tissue, mm-cm; a cell and its local environment, μm; down to the length scale of the cell's own skeleton, the cytoskeleton, nm). First we introduce the concept of flow in bone and the three calibers of porosity through which fluid flows. Then we describe, at the organ-tissue, tissue-cell and cell-molecule length scales, both multiscale computational models and experimental methods to predict flow in bone and to understand the flow of fluid as a means to deliver chemical and mechanical cues in bone. Addressing a number of studies across multiple length and time scales, the importance of appropriate boundary conditions, site-specific material parameters, permeability measures and even micro-nanoanatomically correct geometries is discussed in the context of model predictions and their value for understanding the multiscale mechanobiology of bone. Insights from these multiscale computational modeling and experimental methods are providing us with a means to predict, engineer and manufacture bone tissue in the laboratory and in the human body.

  5. Bottom-Up Synthesis and Sensor Applications of Biomimetic Nanostructures

    Directory of Open Access Journals (Sweden)

    Li Wang

    2016-01-01

    Full Text Available The combination of nanotechnology, biology, and bioengineering has greatly advanced the development of nanomaterials with unique functions and properties. Biomolecules, as nanoscale building blocks, play very important roles in the final formation of functional nanostructures. Many kinds of novel nanostructures have been created by using bioinspired self-assembly and subsequent binding with various nanoparticles. In this review, we summarize studies on the fabrication and sensor applications of biomimetic nanostructures. The strategies for creating different bottom-up nanostructures by using biomolecules like DNA, protein, peptide, and virus, as well as microorganisms like bacteria and plant leaves, are introduced. In addition, the potential applications of the synthesized biomimetic nanostructures for colorimetry, fluorescence, surface plasmon resonance, surface-enhanced Raman scattering, electrical resistance, electrochemistry, and quartz crystal microbalance sensors are presented. This review will promote the understanding of the relationships between biomolecules/microorganisms and functional nanomaterials on the one hand, and on the other will guide the design and synthesis of biomimetic nanomaterials with unique properties in the future.

  6. Building Models from the Bottom Up: The HOBBES Project

    Science.gov (United States)

    Medellin-Azuara, J.; Sandoval Solis, S.; Lund, J. R.; Chu, W.

    2013-12-01

    Water problems are often bigger than the technical and data challenges associated with representing a water system in a model. Controversy and complexity are inherent when water is to be allocated among different uses, making it difficult to maintain coherent and productive discussions on addressing water problems. Quantification of a water supply system through models has proven helpful to improve understanding and to explore and develop adaptable solutions to water problems. However, models often become too large and complex, and become hostages of endless discussions of their assumptions, algorithms and limitations. Data management, organization and documentation keep a model flexible and useful over time. The UC Davis HOBBES project is a new approach, building models from the bottom up. Reversing the traditional model development, where data are arranged around a model algorithm, in HOBBES the data structure, organization and documentation are established first, followed by application of simulation or optimization modeling algorithms for the particular problem at hand. The HOBBES project establishes standards for storing, documenting and sharing datasets on the California water system. This allows models to be developed and modified more easily and transparently, with greater comparability. Elements in the database have a spatial definition and can aggregate several infrastructural elements into detailed to coarse representations of the water system. Elements in the database represent reservoirs, groundwater basins, pumping stations, hydropower and water treatment facilities, demand areas and conveyance infrastructure statewide. These elements also host time series, economic and other information from hydrologic, economic, climate and other models. This presentation provides an overview of the HOBBES project, its applications and prospects for California and elsewhere. The HOBBES Project

  7. Humans strengthen bottom-up effects and weaken trophic cascades in a terrestrial food web.

    Directory of Open Access Journals (Sweden)

    Tyler B Muhly

    Full Text Available Ongoing debate about whether food webs are primarily regulated by predators or by primary plant productivity, cast as top-down and bottom-up effects, respectively, may be becoming superfluous. Given that most of the world's ecosystems are human dominated, we broadened this dichotomy by considering human effects in a terrestrial food web. We studied a multiple human-use landscape in southwest Alberta, Canada, as opposed to the protected areas where previous terrestrial food-web studies have been conducted. We used structural equation models (SEMs) to assess the strength and direction of relationships between the density and distribution of: (1) humans, measured using a density index; (2) wolves (Canis lupus), elk (Cervus elaphus) and domestic cattle (Bos taurus), measured using resource selection functions; and (3) forage quality, quantity and utilization (measured at vegetation sampling plots). Relationships were evaluated by taking advantage of temporal and spatial variation in human density, including day versus night, and two landscapes with the highest and lowest human density in the study area. Here we show that forage-mediated effects of humans had primacy over predator-mediated effects in the food web. In our parsimonious SEM, occurrence of humans was most correlated with occurrence of forage (β = 0.637, p<0.0001). Elk and cattle distribution were correlated with forage (elk day: β = 0.400, p<0.0001; elk night: β = 0.369, p<0.0001; cattle day: β = 0.403, p<0.0001; cattle night: β = 0.436, p<0.0001), and the distribution of elk or cattle and wolves were positively correlated during daytime (elk: β = 0.293, p<0.0001; cattle: β = 0.303, p<0.0001) and nighttime (elk: β = 0.460, p<0.0001; cattle: β = 0.482, p<0.0001). Our results contrast with research conducted in protected areas that suggested human effects in the food web are primarily predator-mediated. Instead, human influence on vegetation may strengthen
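For readers unfamiliar with the reported β values: a standardized path coefficient for a single link reduces, in the simplest case, to the correlation between the standardized variables. The following toy sketch uses synthetic data with a true coefficient of 0.4; the variable names and data are invented, and the real study fits a full SEM over all links simultaneously:

```python
import numpy as np

# Simulate one SEM link (forage -> elk use) with a true standardized
# coefficient of 0.4. Data and names are synthetic, not the study's.
rng = np.random.default_rng(0)
forage = rng.normal(size=500)
elk = 0.4 * forage + rng.normal(scale=np.sqrt(1 - 0.4**2), size=500)

def z(v):
    """Standardize to zero mean, unit variance."""
    return (v - v.mean()) / v.std()

# For a single predictor, the standardized slope equals the correlation.
beta = (z(forage) @ z(elk)) / len(forage)
```

The recovered `beta` sits near the true 0.4, illustrating how the tabulated coefficients quantify both the strength and the sign of each path.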

  8. Atomically Precise Bottom-up Fabrication of Graphene Nanoribbons

    Science.gov (United States)

    Cai, Jinming

    2011-03-01

    Graphene nanoribbons (GNRs) -- narrow stripes of graphene -- are predicted to exhibit remarkable properties making them suitable for future electronic applications. Contrary to their two-dimensional (2D) parent material graphene, which exhibits semimetallic behavior, GNRs with widths smaller than 10 nm are predicted to be semiconductors due to quantum confinement and edge effects. Despite significant advances in GNR fabrication using chemical, sonochemical and lithographic methods as well as recent reports on the successful unzipping of carbon nanotubes into GNRs, the production of sub-10 nm GNRs with chemical precision remains a major challenge. In this talk, we will present a simple GNR fabrication method that allows for the production of atomically precise GNRs of different topologies and widths. Our bottom-up approach consists in the surface-assisted coupling of suitably designed molecular precursors into linear polyphenylenes and their subsequent cyclodehydrogenation, and results in GNRs whose topology, width and edge periphery are defined by the precursor monomers. By means of STM and Raman characterization, we demonstrate that this fabrication process allows for the atomically precise fabrication of complex GNR topologies. Furthermore, we have developed a reliable procedure to transfer GNRs fabricated on metal surfaces onto other substrates. It will for example be shown that millimeter sized sheets of crosslinked GNRs can be transferred onto silicon wafers, making them available for further processing, e.g. by lithography, prototype device fabrication and characterization. Coauthors: Pascal Ruffieux, Rached Jaafar, Marco Bieri, Thomas Braun, and Stephan Blankenburg, Empa, Swiss Federal Laboratories for Materials Science and Technology, 3602 Thun and 8600 Dübendorf, Switzerland; Matthias Muoth, ETH Zurich, Department of Mechanical and Process Engineering, 8092 Zurich, Switzerland; Ari P. Seitsonen, University of Zurich, Physical Chemistry Institute, 8057

  9. Bottom-up capacity building for data providers in RITMARE

    Science.gov (United States)

    Pepe, Monica; Basoni, Anna; Bastianini, Mauro; Fugazza, Cristiano; Menegon, Stefano; Oggioni, Alessandro; Pavesi, Fabio; Sarretta, Alessandro; Carrara, Paola

    2014-05-01

    RITMARE is a Flagship Project of the Italian Ministry of Research, coordinated by the National Research Council (CNR). It aims at the interdisciplinary integration of Italian marine research. Sub-project 7 shall create an interoperable infrastructure for the project, capable of interconnecting the whole community of researchers involved. It will allow coordinating and sharing of data, processes, and information produced by the other sub-projects [1]. Spatial Data Infrastructures (SDIs) allow for interoperable sharing among heterogeneous, distributed spatial content providers. The INSPIRE Directive [2] regulates the development of a pan-European SDI despite the great variety of national approaches to managing spatial data. However, six years after its adoption, its growth is still hampered by technological, cultural, and methodological gaps. In particular, in the research sector, actors may not be prone to comply with INSPIRE (or may not feel compelled to) because they are too concentrated on domain-specific activities or hindered by technological issues. Indeed, the available technologies and tools for enabling standard-based discovery and access services are far from user-friendly and require time-consuming activities, such as metadata creation. Moreover, the INSPIRE implementation guidelines do not accommodate an essential component of environmental research, that is, in situ observations. In order to overcome most of the aforementioned issues and to enable researchers to actively contribute to the creation of the project infrastructure, a bottom-up approach has been adopted: a software suite has been developed, called the Starter Kit, which is offered to research data production units so that they can become autonomous, independent nodes of data provision. The Starter Kit enables the provision of geospatial resources, either geodata (e.g., maps and layers) or observations pulled from sensors, which are made accessible according to the OGC standards

  10. Charge transport in bottom-up inorganic-organic and quantum-coherent nanostructures

    NARCIS (Netherlands)

    Makarenko, Ksenia Sergeevna

    2015-01-01

    This thesis is based on results obtained from experiments designed for a consistent study of charge transport in bottom-up inorganic-organic and quantum-coherent nanostructures. New unconventional ways to build elements of electrical circuits (like dielectrophoresis, wedging transfer and bottom-up f

  11. Saccade generation by the frontal eye fields in rhesus monkeys is separable from visual detection and bottom-up attention shift.

    Directory of Open Access Journals (Sweden)

    Kyoung-Min Lee

    Full Text Available The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area's role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area's functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory.

  12. Saccade generation by the frontal eye fields in rhesus monkeys is separable from visual detection and bottom-up attention shift.

    Science.gov (United States)

    Lee, Kyoung-Min; Ahn, Kyung-Ha; Keller, Edward L

    2012-01-01

    The frontal eye fields (FEF), originally identified as an oculomotor cortex, have also been implicated in perceptual functions, such as constructing a visual saliency map and shifting visual attention. Further dissecting the area's role in the transformation from visual input to oculomotor command has been difficult because of spatial confounding between stimuli and responses and consequently between intermediate cognitive processes, such as attention shift and saccade preparation. Here we developed two tasks in which the visual stimulus and the saccade response were dissociated in space (the extended memory-guided saccade task), and bottom-up attention shift and saccade target selection were independent (the four-alternative delayed saccade task). Reversible inactivation of the FEF in rhesus monkeys disrupted, as expected, contralateral memory-guided saccades, but visual detection was demonstrated to be intact at the same field. Moreover, saccade behavior was impaired when a bottom-up shift of attention was not a prerequisite for saccade target selection, indicating that the inactivation effect was independent of the previously reported dysfunctions in bottom-up attention control. These findings underscore the motor aspect of the area's functions, especially in situations where saccades are generated by internal cognitive processes, including visual short-term memory and long-term associative memory.

  13. Mapping practices of project management – merging top-down and bottom-up perspectives

    DEFF Research Database (Denmark)

    Thuesen, Christian

    2015-01-01

    This paper presents a new methodology for studying different accounts of project management practices based on network mapping and analysis. Drawing upon network mapping and visualization as an analytical strategy, top-down and bottom-up accounts of project management practice are analysed and compared. The analysis initially reveals a substantial difference between the top-down and bottom-up accounts of practice. Furthermore, it identifies a soft side of project management that is central in the bottom-up account but absent from the top-down. Finally, the study shows that network mapping is a promising strategy for visualizing and analysing different accounts of project management practices.

  14. Social and ethical checkpoints for bottom-up synthetic biology, or protocells.

    Science.gov (United States)

    Bedau, Mark A; Parke, Emily C; Tangen, Uwe; Hantsche-Tangen, Brigitte

    2009-12-01

    An alternative to creating novel organisms through the traditional "top-down" approach to synthetic biology involves creating them from the "bottom up" by assembling them from non-living components; the products of this approach are called "protocells." In this paper we describe how bottom-up and top-down synthetic biology differ, review the current state of protocell research and development, and examine the unique ethical, social, and regulatory issues raised by bottom-up synthetic biology. Protocells have not yet been developed, but many expect this to happen within the next five to ten years. Accordingly, we identify six key checkpoints in protocell development at which particular attention should be given to specific ethical, social and regulatory issues concerning bottom-up synthetic biology, and make ten recommendations for responsible protocell science that are tied to the achievement of these checkpoints.

  15. Painful faces-induced attentional blink modulated by top-down and bottom-up mechanisms

    OpenAIRE

    2015-01-01

    Pain-related stimuli can capture attention in an automatic (bottom-up) or intentional (top-down) fashion. Previous studies have examined attentional capture by pain-related information using spatial attention paradigms that involve mainly a bottom-up mechanism. In the current study, we investigated the pain information–induced attentional blink (AB) using a rapid serial visual presentation (RSVP) task, and compared the effects of task-irrelevant and task-relevant pain distractors. Relationshi...

  16. Saliency-Based Fidelity Adaptation Preprocessing for Video Coding

    Institute of Scientific and Technical Information of China (English)

    Shao-Ping Lu; Song-Hai Zhang

    2011-01-01

    In this paper, we present a video coding scheme which applies the technique of visual saliency computation to adjust image fidelity before compression. To extract visually salient features, we construct a spatio-temporal saliency map by analyzing the video using a combined bottom-up and top-down visual saliency model. We then use an extended bilateral filter, in which the local intensity and spatial scales are adjusted according to visual saliency, to adaptively alter the image fidelity. Our implementation is based on the H.264 video encoder JM12.0. Besides evaluating our scheme with the H.264 reference software, we also compare it to a more traditional foreground-background segmentation-based method and a foveation-based approach which employs Gaussian blurring. Our results show that the proposed algorithm can improve the compression ratio significantly while effectively preserving perceptual visual quality.
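
    The fidelity-adaptation idea above can be sketched with a much simpler stand-in for the paper's extended bilateral filter; the blending rule and all parameter values below are illustrative, not taken from the paper:

```python
import numpy as np

def saliency_adaptive_prefilter(frame, saliency, max_radius=2):
    """Smooth low-saliency pixels before encoding (illustrative stand-in
    for the paper's saliency-adjusted extended bilateral filter).

    Each pixel is blended between the original and a box-blurred copy,
    with blur weight (1 - saliency); saliency is in [0, 1]."""
    k = 2 * max_radius + 1
    p = np.pad(frame.astype(float), max_radius, mode='edge')
    blurred = np.zeros(frame.shape, dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += p[dy:dy + frame.shape[0], dx:dx + frame.shape[1]]
    blurred /= k * k
    w = np.clip(saliency, 0.0, 1.0)
    return w * frame + (1.0 - w) * blurred

frame = np.zeros((8, 8)); frame[::2, :] = 255.0     # high-frequency stripes
saliency = np.zeros((8, 8)); saliency[:, :4] = 1.0  # left half marked salient
out = saliency_adaptive_prefilter(frame, saliency)
print(out[:, :4].std() > out[:, 4:].std())  # salient half keeps its contrast
```

    Flattened low-saliency regions carry less high-frequency energy, which is why a downstream encoder such as H.264 spends fewer bits on them.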

  17. Bottoms Up

    Institute of Scientific and Technical Information of China (English)

    2011-01-01

    China’s high-end liquor is becoming a luxury item and a favorite among collectors. Spring Festival, the most important festival for the Chinese, is a time for celebration—and what would a celebration be without bottles of holi-

  18. The effect of intergroup threat and social identity salience on the belief in conspiracy theories over terrorism in Indonesia: collective angst as a mediator

    Directory of Open Access Journals (Sweden)

    Ali Mashuri

    2015-01-01

    Full Text Available The present study tested how intergroup threat (high versus low) and social identity as a Muslim (salient versus non-salient) affected belief in conspiracy theories. Data from Indonesian Muslim students (N = 139) demonstrated that intergroup threat and social identity salience interacted to influence belief in conspiracy theories. High intergroup threat triggered greater belief in conspiracy theories than low intergroup threat, more prominently in the condition in which participants’ Muslim identity was made salient. Collective angst also proved to mediate the effect of intergroup threat on this belief. However, in line with the prediction, evidence of this mediation effect of collective angst was found only in the salient social identity condition. Discussion of these research findings addresses both theoretical and practical implications.

  19. Saliency Mapping Enhanced by Structure Tensor

    Directory of Open Access Journals (Sweden)

    Zhiyong He

    2015-01-01

    Full Text Available We propose a novel, efficient algorithm for computing visual saliency, based on the computation architecture of the Itti model. As one of the well-known bottom-up visual saliency models, the Itti model evaluates three low-level features (color, intensity, and orientation) and then generates multiscale activation maps. Finally, a saliency map is aggregated by multiscale fusion. In our method, the orientation feature is replaced by edge and corner features extracted by a linear structure tensor. These features are then used to generate a contour activation map, and all activation maps are directly combined into a saliency map. Compared to the Itti method, our method is more computationally efficient, because the structure tensor is cheaper to compute than the Gabor filters used for the orientation feature, and our aggregation is a direct combination instead of the multiscale operator. Experiments on Bruce’s dataset show that our method is a strong contender for the state of the art.
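
    A minimal sketch of the structure-tensor feature extraction described above (the window size, the edge/corner read-outs, and the test image are illustrative choices, not the authors' implementation):

```python
import numpy as np

def structure_tensor_features(img, radius=1):
    """Edge/corner activations from a linear structure tensor (illustrative).

    Per pixel, J = [[Ix*Ix, Ix*Iy], [Ix*Iy, Iy*Iy]] is averaged over a small
    window; its eigenvalues l1 >= l2 give edge strength (l1 - l2) and corner
    strength (l2)."""
    Iy, Ix = np.gradient(img.astype(float))

    def window_mean(a):
        k = 2 * radius + 1
        p = np.pad(a, radius, mode='edge')
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += p[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    Jxx = window_mean(Ix * Ix)
    Jxy = window_mean(Ix * Iy)
    Jyy = window_mean(Iy * Iy)
    # closed-form eigenvalues of the symmetric 2x2 tensor
    half_tr = (Jxx + Jyy) / 2
    disc = np.sqrt(np.maximum(half_tr ** 2 - (Jxx * Jyy - Jxy ** 2), 0))
    l1, l2 = half_tr + disc, half_tr - disc
    return l1 - l2, l2  # edge map, corner map

img = np.zeros((16, 16))
img[4:12, 4:12] = 1.0                     # bright square on dark background
edge, corner = structure_tensor_features(img)
print(edge[8, 4] > edge[8, 8])            # border pixel is more edge-like
```

    Unlike a Gabor filter bank, this needs only first derivatives and a small window sum, which is the efficiency argument the abstract makes.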

  20. Self-assembled nanostructured resistive switching memory devices fabricated by templated bottom-up growth.

    Science.gov (United States)

    Song, Ji-Min; Lee, Jang-Sik

    2016-01-07

    Metal-oxide-based resistive switching memory devices have been studied intensively due to their potential to satisfy the requirements of next-generation memory devices. Active research has been done on the materials and device structures of resistive switching memory devices that meet the requirements of high density, fast switching speed, and reliable data storage. In this study, resistive switching memory devices were fabricated with nano-template-assisted bottom-up growth. Electrochemical deposition was adopted to achieve the bottom-up growth of nickel nanodot electrodes. A nickel oxide layer was formed by oxygen plasma treatment of the nickel nanodots at low temperature. The structures of the fabricated nanoscale memory devices were analyzed with scanning electron microscopy and atomic force microscopy (AFM). The electrical characteristics of the devices were directly measured using conductive AFM. This work demonstrates the fabrication of resistive switching memory devices using self-assembled nanoscale masks and nanomaterials grown by bottom-up electrochemical deposition.

  1. Social and ethical checkpoints for bottom-up synthetic biology, or protocells

    OpenAIRE

    2009-01-01

    An alternative to creating novel organisms through the traditional “top-down” approach to synthetic biology involves creating them from the “bottom up” by assembling them from non-living components; the products of this approach are called “protocells.” In this paper we describe how bottom-up and top-down synthetic biology differ, review the current state of protocell research and development, and examine the unique ethical, social, and regulatory issues raised by bottom-up synthetic biology....

  2. Warming shifts top-down and bottom-up control of pond food web structure and function.

    Science.gov (United States)

    Shurin, Jonathan B; Clasen, Jessica L; Greig, Hamish S; Kratina, Pavel; Thompson, Patrick L

    2012-11-05

    The effects of global and local environmental changes are transmitted through networks of interacting organisms to shape the structure of communities and the dynamics of ecosystems. We tested the impact of elevated temperature on the top-down and bottom-up forces structuring experimental freshwater pond food webs in western Canada over 16 months. Experimental warming was crossed with treatments manipulating the presence of planktivorous fish and eutrophication through enhanced nutrient supply. We found that higher temperatures produced top-heavy food webs with lower biomass of benthic and pelagic producers, equivalent biomass of zooplankton, zoobenthos and pelagic bacteria, and more pelagic viruses. Eutrophication increased the biomass of all organisms studied, while fish had cascading positive effects on periphyton, phytoplankton and bacteria, and reduced biomass of invertebrates. Surprisingly, virus biomass was reduced in the presence of fish, suggesting the possibility for complex mechanisms of top-down control of the lytic cycle. Warming reduced the effects of eutrophication on periphyton, and magnified the already strong effects of fish on phytoplankton and bacteria. Warming, fish and nutrients all increased whole-system rates of net production despite their distinct impacts on the distribution of biomass between producers and consumers, plankton and benthos, and microbes and macrobes. Our results indicate that warming exerts a host of indirect effects on aquatic food webs mediated through shifts in the magnitudes of top-down and bottom-up forcing.

  3. An integrated top-down and bottom-up strategy for characterization of protein isoforms and modifications

    Energy Technology Data Exchange (ETDEWEB)

    Wu, Si; Tolic, Nikola; Tian, Zhixin; Robinson, Errol W.; Pasa-Tolic, Ljiljana

    2011-04-15

    Bottom-up and top-down strategies are two commonly used methods for mass spectrometry (MS) based protein identification; each method has its own advantages and disadvantages. In this chapter, we describe an integrated top-down and bottom-up approach facilitated by concurrent liquid chromatography-mass spectrometry (LC-MS) analysis and fraction collection for comprehensive high-throughput intact protein profiling. The approach employs a high-resolution reversed-phase (RP) LC separation coupled with LC eluent fraction collection and concurrent on-line MS with a high-field (12 Tesla) Fourier-transform ion cyclotron resonance (FTICR) mass spectrometer. Protein elution profiles and tentative modified protein identifications are made using the detected intact protein mass in conjunction with bottom-up protein identifications from the enzymatic digestion and analysis of corresponding LC fractions. Specific proteins of biological interest are incorporated into a target ion list for subsequent off-line gas-phase fragmentation using an aliquot of the originally collected LC fraction; another aliquot of the same fraction was used for the bottom-up analysis.

  4. Oriented bottom-up growth of armchair graphene nanoribbons on germanium

    Science.gov (United States)

    Arnold, Michael Scott; Jacobberger, Robert Michael

    2016-03-15

    Graphene nanoribbon arrays, methods of growing graphene nanoribbon arrays and electronic and photonic devices incorporating the graphene nanoribbon arrays are provided. The graphene nanoribbons in the arrays are formed using a scalable, bottom-up, chemical vapor deposition (CVD) technique in which the (001) facet of the germanium is used to orient the graphene nanoribbon crystals along the [110] directions of the germanium.

  5. Bottom-Up Molecular Tunneling Junctions Formed by Self-Assembly

    NARCIS (Netherlands)

    Zhang, Yanxi; Zhao, Zhiyuan; Fracasso, Davide; Chiechi, Ryan C

    2014-01-01

    This Minireview focuses on bottom-up molecular tunneling junctions - a fundamental component of molecular electronics - that are formed by self-assembly. These junctions are part of devices that, in part, fabricate themselves, and therefore, are particularly dependent on the chemistry of the molecules...

  6. Coupling 2D Finite Element Models and Circuit Equations Using a Bottom-Up Methodology

    Science.gov (United States)

    2002-11-01

    EQUATIONS USING A BOTTOM-UP METHODOLOGY. E. Gómez¹, J. Roger-Folch², A. Gabaldón¹ and A. Molina¹. ¹Dpto. de Ingeniería Eléctrica, Universidad Polit... ²Dpto. de Ingeniería Eléctrica, ETSII, Universidad Politécnica de Valencia, PO Box 22012, 46071 Valencia, Spain. E-mail: iroger@die.upv.es. ABSTRACT: The...

  7. A Bottom-Up Approach for Implementing Electronic Portfolios in a Teacher Education Program

    Science.gov (United States)

    An, Heejung; Wilder, Hilary

    2010-01-01

    In an effort to generate a bottom-up approach for the program-wide implementation of electronic portfolios, this article first reports on the ways in which teacher candidates perceived the benefits and setbacks of this experience, after an initial course. Second, this article reports on whether and how the teacher candidates continued to develop…

  8. Bottom-up GGM algorithm for constructing multiple layered hierarchical gene regulatory networks

    Science.gov (United States)

    Multilayered hierarchical gene regulatory networks (ML-hGRNs) are very important for understanding genetic regulation of biological pathways. However, there are currently no computational algorithms available for directly building ML-hGRNs that regulate biological pathways. A bottom-up graphical Gaussian...

  9. Achieving Campus Sustainability: Top-Down, Bottom-Up, or Neither?

    Science.gov (United States)

    Brinkhurst, Marena; Rose, Peter; Maurice, Gillian; Ackerman, Josef Daniel

    2011-01-01

    Purpose: The dynamics of organizational change related to environmental sustainability on university campuses are examined in this article. Whereas case studies of campus sustainability efforts tend to classify leadership as either "top-down" or "bottom-up", this classification neglects consideration of the leadership roles of…

  10. Managing Bottom up Strategizing : Collective Sensemaking of Strategic Issues in a Dutch Bank

    NARCIS (Netherlands)

    van der Steen, Martijn

    2016-01-01

    This paper discusses a bottom-up approach to strategizing in two member banks of a Dutch cooperative bank. In both banks, through a collective process of sensemaking, organisational participants evaluated their day-to-day experiences in order to identify strategic issues. The potential benefits of s...

  11. Co-financing of bottom-up approaches towards Broadband Infrastructure Development

    DEFF Research Database (Denmark)

    Williams, Idongesit

    2016-01-01

    Bottom-up broadband infrastructure development facilitated by civil societies and social enterprises is on the increase. However, the problem plaguing the development of these bottom-up approaches in developing countries is the financial capacity to expand their small networks into larger...

  12. A Bottom up Initiative: Meditation & Mindfulness 'Eastern' Practices in the "Western" Academia

    DEFF Research Database (Denmark)

    Singla, Rashmi

    The process of globalisation has also influenced the curriculum of the psychology discipline in the Nordic countries to some extent. There are new sub-disciplines and themes in the contemporary courses, brought about by both top-down as well as bottom-up initiatives. This paper covers a case of a bottom-up initiative, where the students themselves have demanded inclusion of non-conventional psychosocial interventions, illustrated by meditation and mindfulness as Eastern psychological practices, thus filling the gap related to existential and spiritual approaches. The Western psychological hegemony has made such transformations difficult and contentious in some universities in Denmark, whereas others are more open towards an integrated form of knowledge originating from different geographical contexts. The initiative taken by the psychology students in Århus University, the specific...

  13. Mindfulness meditation associated with alterations in bottom-up processing: psychophysiological evidence for reduced reactivity.

    Science.gov (United States)

    van den Hurk, Paul A M; Janssen, Barbara H; Giommi, Fabio; Barendregt, Henk P; Gielen, Stan C

    2010-11-01

    Mental training by meditation has been related to changes in high-level cognitive functions that involve top-down processing. The aim of this study was to investigate whether the practice of meditation is also related to alterations in low-level, bottom-up processing. Therefore, intersensory facilitation (IF) effects in a group of mindfulness meditators (MM) were compared to IF effects in an age- and gender-matched control group. Smaller and even absent IF effects were found in the MM group, which suggests that changes in bottom-up processing are associated with MM. Furthermore, reduced interference of a visual warning stimulus with the IF effects was found, which suggests an improved allocation of attentional resources in mindfulness meditators, even across modalities.

  14. Bottom-up mining of XML query patterns to improve XML querying

    Institute of Scientific and Technical Information of China (English)

    Yi-jun BEI; Gang CHEN; Jin-xiang DONG; Ke CHEN

    2008-01-01

    Querying XML data is a computationally expensive process due to the complex nature of both the XML data and the XML queries. In this paper we propose an approach to expedite XML query processing by caching the results of frequent queries. We discover frequent query patterns from user-issued queries using an efficient bottom-up mining approach called VBUXMiner. VBUXMiner consists of two main steps. First, all queries are merged into a summary structure named "compressed global tree guide" (CGTG). Second, a bottom-up traversal scheme based on the CGTG is employed to generate frequent query patterns. We use the frequent query patterns in a cache mechanism to improve the XML query performance. Experimental results show that our proposed mining approach outperforms the previous mining algorithms for XML queries, such as XQPMinerTID and FastXMiner, and that by caching the results of frequent query patterns, XML query performance can be dramatically improved.
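
    The caching idea above can be pictured with a toy version (the real system mines frequent tree patterns with VBUXMiner; here frequency is just counted per normalized query string, and all names are made up):

```python
from collections import Counter

class FrequentQueryCache:
    """Cache results of frequently issued query patterns (illustrative).

    Stands in for the paper's pipeline: tree-pattern mining is replaced by
    simple frequency counting, but the point carries over - answers to
    frequent patterns are served from the cache instead of re-evaluated."""
    def __init__(self, min_support=3):
        self.min_support = min_support
        self.counts = Counter()
        self.cache = {}

    def query(self, pattern, execute):
        self.counts[pattern] += 1
        if pattern in self.cache:
            return self.cache[pattern], True          # cache hit
        result = execute(pattern)                     # expensive evaluation
        if self.counts[pattern] >= self.min_support:  # pattern is now frequent
            self.cache[pattern] = result
        return result, False

calls = []
def run_xpath(p):
    calls.append(p)                                   # stand-in XML engine
    return f"result-of:{p}"

qc = FrequentQueryCache(min_support=2)
for _ in range(3):
    qc.query("//book/title", run_xpath)
print(len(calls))  # → 2 (the third call is a cache hit)
```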

  15. Scaled CMOS Reliability and Considerations for Spacecraft Systems: Bottom-Up and Top-Down Perspective

    Science.gov (United States)

    White, Mark

    2012-01-01

    New space missions will increasingly rely on more advanced technologies because of system requirements for higher performance, particularly in instruments and high-speed processing. Component-level reliability challenges with scaled CMOS in spacecraft systems from a bottom-up perspective have been presented. Fundamental front-end and back-end processing reliability issues with more aggressively scaled parts have been discussed. Effective thermal management from the system level to the component level (top-down) is a key element in the overall design of reliable systems. Thermal management in space systems must consider a wide range of issues, including thermal loading of many different components, and frequent temperature cycling of some systems. Both perspectives (top-down and bottom-up) play a large role in robust, reliable spacecraft system design.

  16. A balance of bottom-up and top-down in linking climate policies

    Science.gov (United States)

    Green, Jessica F.; Sterner, Thomas; Wagner, Gernot

    2014-12-01

    Top-down climate negotiations embodied by the Kyoto Protocol have all but stalled, chiefly because of disagreements over targets and objections to financial transfers. To avoid those problems, many have shifted their focus to linkage of bottom-up climate policies such as regional carbon markets. This approach is appealing, but we identify four obstacles to successful linkage: different levels of ambition; competing domestic policy objectives; objections to financial transfers; and the difficulty of close regulatory coordination. Even with a more decentralized approach, overcoming the 'global warming gridlock' of the intergovernmental negotiations will require close international coordination. We demonstrate how a balance of bottom-up and top-down elements can create a path toward an effective global climate architecture.

  17. The generation of myricetin-nicotinamide nanococrystals by top down and bottom up technologies

    Science.gov (United States)

    Liu, Mingyu; Hong, Chao; Li, Guowen; Ma, Ping; Xie, Yan

    2016-09-01

    Myricetin-nicotinamide (MYR-NIC) nanococrystal preparation methods were developed and optimized using both top-down and bottom-up approaches. The grinding (top-down) method successfully achieved nanococrystals, but there were some micrometer-range particles and aggregation. The key consideration of the grinding technology was to control the milling time to strike a balance between particle size and distribution. In contrast, a modified bottom-up approach based on a solution method in conjunction with sonochemistry resulted in uniform MYR-NIC nanococrystals, as confirmed by powder X-ray diffraction, scanning electron microscopy, dynamic light scattering, and differential scanning calorimetry, and the particle dissolution rate and amount were significantly greater than those of the MYR-NIC cocrystal. Notably, this was a simple method without the addition of any non-solvent. We anticipate our findings will provide guidance for future nanococrystal preparation as well as its application in both the chemical and pharmaceutical areas.

  18. Bottom-up laboratory testing of the DKIST Visible Broadband Imager (VBI)

    Science.gov (United States)

    Ferayorni, Andrew; Beard, Andrew; Cole, Wes; Gregory, Scott; Wöeger, Friedrich

    2016-08-01

    The Daniel K. Inouye Solar Telescope (DKIST) is a 4-meter solar observatory under construction at Haleakala, Hawaii [1]. The Visible Broadband Imager (VBI) is a first light instrument that will record images at the highest possible spatial and temporal resolution of the DKIST at a number of scientifically important wavelengths [2]. The VBI is a pathfinder for DKIST instrumentation and a test bed for developing processes and procedures in the areas of unit, systems integration, and user acceptance testing. These test procedures have been developed and repeatedly executed during VBI construction in the lab as part of a "test early and test often" philosophy aimed at identifying and resolving issues early thus saving cost during integration test and commissioning on summit. The VBI team recently completed a bottom up end-to-end system test of the instrument in the lab that allowed the instrument's functionality, performance, and usability to be validated against documented system requirements. The bottom up testing approach includes four levels of testing, each introducing another layer in the control hierarchy that is tested before moving to the next level. First the instrument mechanisms are tested for positioning accuracy and repeatability using a laboratory position-sensing detector (PSD). Second the real-time motion controls are used to drive the mechanisms to verify speed and timing synchronization requirements are being met. Next the high-level software is introduced and the instrument is driven through a series of end-to-end tests that exercise the mechanisms, cameras, and simulated data processing. Finally, user acceptance testing is performed on operational and engineering use cases through the use of the instrument engineering graphical user interface (GUI). In this paper we present the VBI bottom up test plan, procedures, example test cases and tools used, as well as results from test execution in the laboratory. 
We will also discuss the benefits realized...

  19. A constraint-based bottom-up counterpart to definite clause grammars

    DEFF Research Database (Denmark)

    Christiansen, Henning

    2004-01-01

    A new grammar formalism, CHR Grammars (CHRG), is proposed that provides a constraint-solving approach to language analysis, built on top of the programming language of Constraint Handling Rules in the same way as Definite Clause Grammars (DCG) are built on Prolog. CHRG works bottom-up and adds the following features: ..., integrity constraints, operators à la assumption grammars, and the ability to incorporate other constraint solvers; (iv) context-sensitive rules that apply for disambiguation, coordination in natural language, and tagger-like rules.

  20. Bottom-Up Cost Analysis of a High Concentration PV Module

    Energy Technology Data Exchange (ETDEWEB)

    Horowitz, Kelsey A. W.; Woodhouse, Michael; Lee, Hohyun; Smestad, Greg P.

    2016-03-31

    We present a bottom-up model of III-V multi-junction cells, as well as a high concentration PV (HCPV) module. We calculate $0.59/W(DC) manufacturing costs for our model HCPV module design with today's capabilities, and find that reducing cell costs and increasing module efficiency offer the most promising paths for future cost reductions. Cell costs could be significantly reduced via substrate reuse and improved manufacturing yields.
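
    The roll-up arithmetic behind such a bottom-up cost model can be sketched as follows; every input value below is hypothetical and not taken from the model described above:

```python
def module_cost_per_watt(component_costs, module_area_m2, cell_eff, optical_eff):
    """Bottom-up $/W(DC): summed component costs over rated DC output.

    Rated output assumes 1000 W/m^2 direct-normal irradiance on the module
    aperture; all inputs here are hypothetical, illustrative numbers."""
    watts_dc = 1000.0 * module_area_m2 * cell_eff * optical_eff
    return sum(component_costs.values()) / watts_dc

costs = {"III-V cells": 100.0, "optics": 60.0, "assembly": 50.0}  # $ per module
cpw = module_cost_per_watt(costs, module_area_m2=1.0,
                           cell_eff=0.40, optical_eff=0.85)
print(round(cpw, 2))  # → 0.62
```

    The structure makes the abstract's conclusion visible: the quotient falls either by shrinking the numerator (cheaper cells, e.g. substrate reuse) or growing the denominator (higher module efficiency).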

  1. Identifying prognostic features by bottom-up approach and correlating to drug repositioning.

    Directory of Open Access Journals (Sweden)

    Wei Li

    Full Text Available Traditionally, a top-down method was used to identify prognostic features in cancer research: differentially expressed genes, usually in cancer versus normal tissue, were identified and then tested for survival prediction power. The problem is that prognostic features identified from one set of patient samples can rarely be transferred to other datasets. We apply a bottom-up approach in this study: survival-correlated or clinical-stage-correlated genes were selected first and additionally prioritized by their network topology; a small set of features can then be used as a prognostic signature. Gene expression profiles of a cohort of 221 hepatocellular carcinoma (HCC) patients were used as a training set, the 'bottom-up' approach was applied to discover gene-expression signatures associated with survival in both tumor and adjacent non-tumor tissues, and the results were compared with the 'top-down' approach. The results were validated in a second cohort of 82 patients, which was used as a testing set. Two sets of gene signatures, separately identified in tumor and adjacent non-tumor tissues by the bottom-up approach, were developed in the training cohort. These two signatures were associated with overall survival times of HCC patients, the robustness of each was validated in the testing set, and each predictive performance was better than gene expression signatures reported previously. Moreover, genes in these two prognostic signatures gave some indications for drug repositioning on HCC: some approved drugs targeting these markers have alternative indications for hepatocellular carcinoma. Using the bottom-up approach, we have developed two prognostic gene signatures with a limited number of genes that are associated with overall survival times of patients with HCC. Furthermore, prognostic markers in these two signatures have the potential to be therapeutic targets.
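
    The two-step selection described above (filter by outcome correlation, then prioritize by network topology) can be sketched as follows; the synthetic data, the cutoff, and the use of plain correlation in place of censored survival statistics are all illustrative:

```python
import numpy as np

def bottom_up_signature(expr, outcome, degree, n_top=3, corr_cutoff=0.5):
    """Filter genes by outcome correlation, then prioritize by network degree.

    Illustrative two-step 'bottom-up' selection; a real analysis would use
    censored survival models (e.g. Cox regression) rather than correlation."""
    n_genes = expr.shape[1]
    corr = np.array([abs(np.corrcoef(expr[:, g], outcome)[0, 1])
                     for g in range(n_genes)])
    candidates = [g for g in range(n_genes) if corr[g] >= corr_cutoff]
    ranked = sorted(candidates, key=lambda g: degree[g], reverse=True)
    return ranked[:n_top]

rng = np.random.default_rng(0)
outcome = rng.normal(size=60)                      # stand-in for survival time
expr = rng.normal(size=(60, 5))                    # 60 patients x 5 genes
expr[:, 1] = outcome + 0.1 * rng.normal(size=60)   # gene 1 tracks outcome
expr[:, 3] = -outcome + 0.1 * rng.normal(size=60)  # gene 3 anti-correlates
degree = [1, 2, 0, 9, 1]                           # gene 3: hub in a made-up network
sig = bottom_up_signature(expr, outcome, degree, n_top=2)
print(sig)
```

    The topology step is what distinguishes this from a plain differential-expression ranking: among equally outcome-correlated genes, the better-connected one is kept.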

  2. Facilitating a research culture in an academic library: top down and bottom up approaches

    OpenAIRE

    Pickton, Miggie

    2016-01-01

    Purpose: The purpose of this paper is to consider why and how a research culture might be established in an academic library and to describe and evaluate efforts to achieve this at the University of Northampton. Design/methodology/approach: Contextualised within current literature on this topic, the paper examines the top down and bottom up approaches taken to facilitate practitioner research in one academic library. Findings: The approaches taken have led to a significant in...

  3. On the interactions between top-down anticipation and bottom-up regression

    Directory of Open Access Journals (Sweden)

    Jun Tani

    2007-11-01

    Full Text Available This paper discusses the importance of anticipation and regression in modeling cognitive behavior. The meanings of these cognitive functions are explained by describing our proposed neural network model, which has been implemented in a set of cognitive robotics experiments. The reviews of these experiments suggest that the essence of embodied cognition may reside in the phenomena of break-down between top-down anticipation and bottom-up regression, and in its recovery process.

  4. The Application of Bottom-up and Top-down Processing in L2 Listening Comprehension

    Institute of Scientific and Technical Information of China (English)

    温颖茜

    2008-01-01

    Listening comprehension is one of the four basic skills for language learning and is also one of the most difficult tasks L2 learners ever experience. L2 listening comprehension is a cognitive process in which listeners use both bottom-up and top-down processing to comprehend the aural text. The paper focuses on the application of the two approaches in L2 listening comprehension.

  5. External Costs of Road, Rail and Air Transport - a Bottom-Up Approach

    OpenAIRE

    Weinreich, Sigurd; Rennings, Klaus; Schlomann, Barbara; Geßner, Christian; Engel, Thomas

    1998-01-01

    This paper aims to describe the calculation of environmental and health externalities caused by air pollutants, accidents and noise from different transport modes (road, rail, air) on the route Frankfurt-Milan. The investigation is part of the QUITS project (QUITS = Quality Indicators for Transport Systems), commissioned by the European Commission DG VII. The evaluation of the external costs is based on a bottom-up approach. The calculation involves four stages: emissions, dispersion, impacts...

  6. Catalyst-Free Bottom-Up Synthesis of Few-Layer Hexagonal Boron Nitride Nanosheets

    Directory of Open Access Journals (Sweden)

    Shena M. Stanley

    2015-01-01

    Full Text Available A novel catalyst-free methodology has been developed to prepare few-layer hexagonal boron nitride nanosheets using a bottom-up process. Scanning electron microscopy and transmission electron microscopy (both high and low resolution) exhibit evidence of nanosheets of fewer than ten layers with uniform dimensions. The X-ray diffraction pattern and additional characterization techniques prove the crystallinity and purity of the product.

  7. Bottom-up graphene-nanoribbon fabrication reveals chiral edges and enantioselectivity.

    Science.gov (United States)

    Han, Patrick; Akagi, Kazuto; Federici Canova, Filippo; Mutoh, Hirotaka; Shiraki, Susumu; Iwaya, Katsuya; Weiss, Paul S; Asao, Naoki; Hitosugi, Taro

    2014-09-23

    We produce precise chiral-edge graphene nanoribbons on Cu{111} using self-assembly and surface-directed chemical reactions. We show that, using specific properties of the substrate, we can change the edge conformation of the nanoribbons, segregate their adsorption chiralities, and restrict their growth directions at low surface coverage. By elucidating the molecular-assembly mechanism, we demonstrate that our method constitutes an alternative bottom-up strategy toward synthesizing defect-free zigzag-edge graphene nanoribbons.

  8. Piezoresistive characterization of bottom-up, n-type silicon microwires undergoing bend deformation

    Energy Technology Data Exchange (ETDEWEB)

    McClarty, Megan M.; Oliver, Derek R., E-mail: Michael.Freund@umanitoba.ca, E-mail: Derek.Oliver@umanitoba.ca [Department of Electrical and Computer Engineering, University of Manitoba, Winnipeg R3T 5V6 (Canada); Bruce, Jared P.; Freund, Michael S., E-mail: Michael.Freund@umanitoba.ca, E-mail: Derek.Oliver@umanitoba.ca [Department of Chemistry, University of Manitoba, Winnipeg R3T 2N2 (Canada)

    2015-01-12

    The piezoresistance of silicon has been studied over the past few decades in order to characterize the material's unique electromechanical properties and investigate their wider applicability. While bulk and top-down (etched) micro- and nano-wires have been studied extensively, less work exists regarding bottom-up grown microwires. A facile method is presented for characterizing the piezoresistance of released, phosphorus-doped silicon microwires that have been grown, bottom-up, via a chemical vapour deposition, vapour-liquid-solid process. The method uses conductive tungsten probes to simultaneously make electrical measurements via direct ohmic contact and apply mechanical strain via bend deformation. These microwires display piezoresistive coefficients within an order of magnitude of those expected for bulk n-type silicon; however, they show an anomalous response at degenerate doping concentrations (∼10²⁰ cm⁻³) when compared to lower doping concentrations (∼10¹⁷ cm⁻³), with a stronger piezoresistive coefficient exhibited for the more highly doped wires. This response is postulated to be due to the different growth mechanism of bottom-up microwires as compared to top-down.

  9. A Bottom-up Trend in Research of Management of Technology

    Directory of Open Access Journals (Sweden)

    Yoko Ishino

    2014-12-01

    Full Text Available Management of Technology (MOT) is defined as an academic discipline of management that enables organizations to manage their technological fundamentals to create competitive advantage. MOT covers a wide range of contents including administrative strategy, R&D management, manufacturing management, technology transfer, production control, marketing, accounting, finance, business ethics, and others. For each topic, researchers have conducted their MOT research at various levels. However, the practical and pragmatic side of MOT surely affects its research trends. Finding changes in MOT research trends, or the chronological transitions of principal subjects, can help in understanding the key concepts of current MOT. This paper studied a bottom-up trend in research fields in MOT by applying a text-mining method to the conference proceedings of IAMOT (International Association for Management of Technology). First, focusing only on nouns, several keywords were found that emerge more frequently over time in the IAMOT proceedings. Then, expanding the scope to other parts of speech allowed the keywords to be viewed in a natural context. Finally, it was found that the use of an important keyword has qualitatively and quantitatively extended over time. In conclusion, a bottom-up trend in MOT research was detected and the effects of the social situation on the trend were discussed. Keywords: Management of Technology; Text Mining; Research Trend; Bottom-up Trend; Patent
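
    The mining step can be pictured with a toy frequency count (the corpus and keyword below are invented; the paper's actual pipeline over the IAMOT proceedings is far richer):

```python
from collections import Counter

def keyword_trend(proceedings, keyword):
    """Frequency of a keyword per conference year (simple text-mining sketch).

    `proceedings` maps year -> list of abstract strings; the corpus here is
    made up purely for illustration."""
    trend = {}
    for year, abstracts in sorted(proceedings.items()):
        counts = Counter(w for a in abstracts for w in a.lower().split())
        trend[year] = counts[keyword]
    return trend

corpus = {
    2010: ["technology transfer and r&d management"],
    2012: ["open innovation strategy", "innovation in manufacturing"],
    2014: ["innovation ecosystems", "user innovation", "open innovation"],
}
print(keyword_trend(corpus, "innovation"))  # → {2010: 0, 2012: 2, 2014: 3}
```

    A rising count across years is the kind of signal the paper reads as a bottom-up trend; real pipelines would add tokenization, lemmatization, and part-of-speech tagging.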

  10. A bottom-up approach of stochastic demand allocation in water quality modelling

    Directory of Open Access Journals (Sweden)

    E. J. M. Blokker

    2010-01-01

    Full Text Available An "all pipes" hydraulic model of a DMA-sized drinking water distribution system was constructed with two types of demand allocations. One is constructed with the conventional top-down approach, i.e. a demand multiplier pattern from the booster station is allocated to all demand nodes with a correction factor to account for the average water demand on that node. The other is constructed with a bottom-up approach of demand allocation, i.e., each individual home is represented by one demand node with its own stochastic water demand pattern.

    The stochastic water demand patterns are constructed with an end-use model on a per second basis and per individual home. The flow entering the test area was measured and a tracer test with sodium chloride was performed to measure travel times. The two models were evaluated on the predicted sum of demands and travel times, compared with what was measured in the test area.

    The new bottom-up approach performs at least as well as the conventional top-down approach with respect to total demand and travel times, without the need for any flow measurements or calibration measurements. The bottom-up approach leads to a stochastic method of hydraulic modelling and gives insight into the variability of travel times as an added feature beyond the conventional way of modelling.
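
    The contrast between the two allocation schemes can be sketched roughly as follows. This is not the paper's end-use model; the pattern values, number of homes and demand ranges are invented for illustration.

```python
# Illustrative sketch: top-down allocation scales one measured station
# pattern by a per-node correction factor; bottom-up sums independent
# stochastic per-home demand patterns. All numbers are invented.
import random

station_pattern = [1.0, 0.6, 1.8, 1.2]   # demand multipliers per time step

def top_down(avg_demand: float) -> list[float]:
    """Every node gets the same pattern shape, scaled to its average demand."""
    mean = sum(station_pattern) / len(station_pattern)
    return [avg_demand * m / mean for m in station_pattern]

def bottom_up(n_homes: int, rng: random.Random) -> list[float]:
    """Each home draws its own stochastic pattern; node demand is the sum."""
    steps = len(station_pattern)
    per_home = [[rng.uniform(0.0, 2.0) for _ in range(steps)]
                for _ in range(n_homes)]
    return [sum(home[t] for home in per_home) for t in range(steps)]

rng = random.Random(42)
print(top_down(avg_demand=0.5))   # smooth, identical shape at every node
print(bottom_up(n_homes=3, rng=rng))  # noisy, varies between nodes
```
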

  11. Top-down vs. bottom-up control on vegetation composition in a tidal marsh depends on scale

    NARCIS (Netherlands)

    Elschot, Kelly; Vermeulen, Anke; Vandenbruwaene, Wouter; Bakker, Jan P.; Bouma, Tjeerd J.; Stahl, Julia; Castelijns, Henk; Temmerman, Stijn

    2017-01-01

    The relative impact of top-down control by herbivores and bottom-up control by environmental conditions on vegetation is a subject of debate in ecology. In this study, we hypothesize that top-down control by goose foraging and bottom-up control by sediment accretion on vegetation composition with

  12. BoB: Best of Both in Compiler Construction Bottom-up Parsing with Top-down Semantic Evaluation

    Directory of Open Access Journals (Sweden)

    Wolfgang Dichler

    Full Text Available Compilers typically use either a top-down or a bottom-up strategy for parsing as well as semantic evaluation. Both strategies have advantages and disadvantages: bottom-up parsing supports LR(k) grammars but is limited to S- or LR-attribution while top-dow ...

  13. Bottom-Up or Top-Down: English as a Foreign Language Vocabulary Instruction for Chinese University Students

    Science.gov (United States)

    Moskovsky, Christo; Jiang, Guowu; Libert, Alan; Fagan, Seamus

    2015-01-01

    Whereas there has been some research on the role of bottom-up and top-down processing in the learning of a second or foreign language, very little attention has been given to bottom-up and top-down instructional approaches to language teaching. The research reported here used a quasi-experimental design to assess the relative effectiveness of two…

  14. Visual anticipation biases conscious perception but not bottom-up visual processing

    Directory of Open Access Journals (Sweden)

    Paul F.M.J. Verschure

    2015-01-01

    Full Text Available Theories of consciousness can be grouped with respect to their stance on embodiment, sensori-motor contingencies, prediction and integration. In this list, prediction plays a key role, and it is not clear which aspects of prediction are most prominent in the conscious scene. An evolving view on the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether prediction or prediction errors dominate the conscious scene. Yet, due to the lack of efficient indirect measures, the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and psychophysical paradigm that allow us to assess both the hierarchical structuring of perceptual consciousness, its content and the impact of predictions and/or errors on the conscious scene. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perception: bottom-up early saccades, top-down driven late saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of information processing to restrict the sensory field using predictions. We observe that cognitive load has a quantifiable effect on this dissociation of the bottom-up sensory and top-down predictive processes. We propose a probabilistic data association model from dynamical systems theory to model this predictive bias at different information processing levels.

  15. Visual anticipation biases conscious decision making but not bottom-up visual processing.

    Science.gov (United States)

    Mathews, Zenon; Cetnarski, Ryszard; Verschure, Paul F M J

    2014-01-01

    Prediction plays a key role in control of attention but it is not clear which aspects of prediction are most prominent in conscious experience. An evolving view on the brain is that it can be seen as a prediction machine that optimizes its ability to predict states of the world and the self through the top-down propagation of predictions and the bottom-up presentation of prediction errors. There are competing views, though, on whether prediction or prediction errors dominate the formation of conscious experience. Yet, the dynamic effects of prediction on perception, decision making and consciousness have been difficult to assess and to model. We propose a novel mathematical framework and a psychophysical paradigm that allow us to assess both the hierarchical structuring of perceptual consciousness, its content and the impact of predictions and/or errors on conscious experience, attention and decision-making. Using a displacement detection task combined with reverse correlation, we reveal signatures of the usage of prediction at three different levels of perceptual processing: bottom-up fast saccades, top-down driven slow saccades and conscious decisions. Our results suggest that the brain employs multiple parallel mechanisms at different levels of perceptual processing in order to shape effective sensory consciousness within a predicted perceptual scene. We further observe that bottom-up sensory and top-down predictive processes can be dissociated through cognitive load. We propose a probabilistic data association model from dynamical systems theory to model the predictive multi-scale bias in perceptual processing that we observe and its role in the formation of conscious experience. We propose that these results support the hypothesis that consciousness provides a time-delayed description of a task that is used to prospectively optimize real time control structures, rather than being engaged in the real-time control of behavior itself.

  16. Increased performance in a bottom-up designed robot by experimentally guided redesign

    DEFF Research Database (Denmark)

    Larsen, Jørgen Christian

    2013-01-01

    the bottom-up, model-free approach, the authors used the robotic construction kit, LocoKit. This construction kit allows researchers to construct legged robots without having a mathematical model beforehand. The authors used no specific mathematical model to design the robot, but instead used intuition...... and took inspiration from biology. The results were afterwards compared with results gained from biology, to see if the robot has some of the key elements the authors were looking for. Findings – With the use of LocoKit as the experimental platform, combined with known experimental measurement methods from...

  17. Bottom-up metamaterials with an isotropic magnetic response in the visible

    Science.gov (United States)

    Mühlig, Stefan; Dintinger, José; Cunningham, Alastair; Scharf, Toralf; Bürgi, Thomas; Rockstuhl, Carsten; Lederer, Falk

    A theoretical framework to analyze the optical properties of amorphous metamaterials made from meta-atoms which are amenable to fabrication with bottom-up technologies is introduced. The achievement of an isotropic magnetic resonance in the visible is investigated by suggesting suitable designs for the meta-atoms. Furthermore, two meta-atoms are discussed in detail that were fabricated by self-assembling plasmonic nanoparticles using techniques from the field of colloidal nanochemistry. The metamaterials are experimentally characterized by spectroscopic means and the excitation of the magnetic dipole moment is clearly revealed. Advantages and disadvantages of metamaterials made from such meta-atoms are discussed.

  18. First-principles study on bottom-up fabrication process of atomically precise graphene nanoribbons

    Science.gov (United States)

    Kaneko, Tomoaki; Tajima, Nobuo; Ohno, Takahisa

    2016-06-01

    We investigate the energetics of polyanthracene formation in the bottom-up fabrication of atomically precise graphene nanoribbons on Au(111) using first-principles calculations based on density functional theory. We show that the structure of precursor molecules plays a decisive role in the C-C coupling reaction. The reaction energy of the dimerization of anthracene dimers is more negative than that of the dimerization of anthracene monomers, suggesting that the precursor molecule used in experiments has a favorable structure for graphene nanoribbon fabrication.

  19. Toward the atomic structure of the nuclear pore complex: when top down meets bottom up.

    Science.gov (United States)

    Hoelz, André; Glavy, Joseph S; Beck, Martin

    2016-07-01

    Elucidating the structure of the nuclear pore complex (NPC) is a prerequisite for understanding the molecular mechanism of nucleocytoplasmic transport. However, owing to its sheer size and flexibility, the NPC is unapproachable by classical structure determination techniques and requires a joint effort of complementary methods. Whereas bottom-up approaches rely on biochemical interaction studies and crystal-structure determination of NPC components, top-down approaches attempt to determine the structure of the intact NPC in situ. Recently, both approaches have converged, thereby bridging the resolution gap from the higher-order scaffold structure to near-atomic resolution and opening the door for structure-guided experimental interrogations of NPC function.

  20. Scaled CMOS Reliability and Considerations for Spacecraft Systems : Bottom-Up and Top-Down Perspectives

    Science.gov (United States)

    White, Mark

    2012-01-01

    The recently launched Mars Science Laboratory (MSL) flagship mission, named Curiosity, is the most complex rover ever built by NASA and is scheduled to touch down on the red planet in August, 2012 in Gale Crater. The rover and its instruments will have to endure the harsh environments of the surface of Mars to fulfill its main science objectives. Such complex systems require reliable microelectronic components coupled with adequate component and system-level design margins. Reliability aspects of these elements of the spacecraft system are presented from bottom-up and top-down perspectives.

  1. Unsupervised tattoo segmentation combining bottom-up and top-down cues

    Science.gov (United States)

    Allen, Josef D.; Zhao, Nan; Yuan, Jiangbo; Liu, Xiuwen

    2011-06-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thus transformed into a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  2. Automated Urban Travel Interpretation: A Bottom-up Approach for Trajectory Segmentation.

    Science.gov (United States)

    Das, Rahul Deb; Winter, Stephan

    2016-11-23

    Understanding travel behavior is critical for effective urban planning as well as for enabling various context-aware service provisions to support mobility as a service (MaaS). Both applications rely on the sensor traces generated by travellers' smartphones. These traces can be used to interpret travel modes, both for generating automated travel diaries and for real-time travel mode detection. Current approaches segment a trajectory by certain criteria, e.g., a drop in speed. However, these criteria are heuristic, and, thus, existing approaches are subjective and involve significant vagueness and uncertainty in activity transitions in space and time. Also, segmentation approaches are not suited for real-time interpretation of open-ended segments, and cannot cope with the frequent gaps in location traces. In order to address all these challenges, a novel, state-based bottom-up approach is proposed. This approach assumes a fixed atomic segment of a homogeneous state, instead of an event-based segment, and iterates progressively until a new state is found. The research investigates how an atomic, state-based approach can be developed in such a way that it can work in real-time, near-real-time and offline modes and in different environmental conditions with varying quality of sensor traces. The results show the proposed bottom-up model outperforms the existing event-based segmentation models in terms of adaptivity, flexibility, accuracy and richness of the information delivered for automated travel behavior interpretation.
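
    A minimal sketch of the atomic, state-based idea: label fixed atomic windows of sensor readings with a homogeneous state, then extend the current segment until the state changes. Illustrative only; the speed threshold, window size and trace below are invented, not taken from the paper.

```python
# Bottom-up, state-based segmentation sketch: fixed atomic windows are
# classified, then consecutive windows with the same state are merged.

def classify(window: list[float], threshold: float = 2.0) -> str:
    """Label an atomic window by mean speed (m/s); purely illustrative."""
    mean = sum(window) / len(window)
    return "walk" if mean < threshold else "vehicle"

def segment(speeds: list[float], atom: int = 3) -> list[tuple[str, int]]:
    """Merge consecutive atomic windows sharing a state.
    Returns (state, number_of_atoms) pairs."""
    windows = [speeds[i:i + atom] for i in range(0, len(speeds), atom)]
    segments: list[tuple[str, int]] = []
    for w in windows:
        state = classify(w)
        if segments and segments[-1][0] == state:
            segments[-1] = (state, segments[-1][1] + 1)
        else:
            segments.append((state, 1))
    return segments

trace = [1.1, 1.3, 1.0, 1.2, 1.4, 1.1, 9.0, 11.0, 10.5, 10.0, 9.8, 10.2]
print(segment(trace))  # [('walk', 2), ('vehicle', 2)]
```
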

  3. Top-down (Prior Knowledge) and Bottom-up (Perceptual Modality) Influences on Spontaneous Interpersonal Synchronization.

    Science.gov (United States)

    Gipson, Christina L; Gorman, Jamie C; Hessler, Eric E

    2016-04-01

    Coordination with others is such a fundamental part of human activity that it can happen unintentionally. This unintentional coordination can manifest as synchronization and is observed in physical and human systems alike. We investigated the role of top-down influences (prior knowledge of the perceptual modality their partner is using) and bottom-up factors (perceptual modality combination) on spontaneous interpersonal synchronization. We examine this phenomenon with respect to two different theoretical perspectives that differently emphasize top-down and bottom-up factors in interpersonal synchronization: joint-action/shared-cognition theories and ecological-interactive theories. In an empirical study, twelve dyads performed a finger oscillation task while attending to each other's movements through visual, auditory, or combined visual and auditory perceptual modalities. Half of the participants were given prior knowledge of their partner's perceptual capabilities for coordinating across these different perceptual modality combinations. We found that the effect of top-down influence depends on the perceptual modality combination between two individuals. When people used the same perceptual modalities, top-down influence resulted in less synchronization; when people used different perceptual modalities, top-down influence resulted in more synchronization. Furthermore, persistence of the change in behavior as a result of having perceptual information about each other ('social memory') was stronger when this top-down influence was present.

  4. Sex differences in mental rotation: top-down versus bottom-up processing.

    Science.gov (United States)

    Butler, Tracy; Imperato-McGinley, Julianne; Pan, Hong; Voyer, Daniel; Cordero, Juan; Zhu, Yuan-Shan; Stern, Emily; Silbersweig, David

    2006-08-01

    Functional MRI during performance of a validated mental rotation task was used to assess a neurobiological basis for sex differences in visuospatial processing. Between-sex group analysis demonstrated greater activity in women than in men in dorsomedial prefrontal and other high-order heteromodal association cortices, suggesting that women performed mental rotation in an effortful, "top-down" fashion. In contrast, men activated primary sensory cortices as well as regions involved in implicit learning (basal ganglia) and mental imagery (precuneus), consistent with a more automatic, "bottom-up" strategy. Functional connectivity analysis in association with a measure of behavioral performance showed that, in men (but not women), accurate performance was associated with deactivation of parieto-insular vestibular cortex (PIVC) as part of a visual-vestibular network. Automatic evocation of this network during mental rotation, to a greater extent in men than in women, may represent an effective, unconscious, bottom-up neural strategy which could reasonably account for men's traditional visuospatial performance advantage.

  5. A combined bottom-up/top-down approach to prepare a sterile injectable nanosuspension.

    Science.gov (United States)

    Hu, Xi; Chen, Xi; Zhang, Ling; Lin, Xia; Zhang, Yu; Tang, Xing; Wang, Yanjiao

    2014-09-10

    To prepare a uniform nanosuspension of strongly hydrophobic riboflavin laurate (RFL) allowing sterile filtration, physical modification (bottom-up) was combined with a high-pressure homogenization (top-down) method. Unlike other bottom-up approaches, physical modification with surfactants (TPGS and PL-100) by lyophilization controlled crystallization and compensated for the poor wettability of RFL. On one hand, crystal growth and aggregation during freezing were restricted by a stabilizer layer adsorbed on the drug surface by hydrophobic interaction. On the other hand, subsequent crystallization of the drug in the sublimation process was limited to the interstitial spaces between solvent crystals. After lyophilization, modified drug with a smaller particle size and better wettability was obtained. On adding surfactant solution, water molecules passed between the hydrophilic groups of the surface-active molecules and activated the polymer chains, allowing them to stretch into water. The coarse suspension was crushed into a nanosuspension (MP = 162 nm) by high-pressure homogenization. For long-term stability, lyophilization was applied again to solidify the nanosuspension (sorbitol as cryoprotectant). A slight crystal growth to about 600 nm was obtained, allowing slow release for a sustained effect after intramuscular administration. Moreover, the absence of paw-licking responses and only very slight muscular inflammation demonstrated the excellent biocompatibility of this long-acting RFL injection.

  6. A bottom-up model to describe consumers’ preferences towards late season peaches

    Energy Technology Data Exchange (ETDEWEB)

    Groot, E.; Albisu, L.M.

    2015-07-01

    Peaches have been consumed in Mediterranean countries since ancient times. Nowadays there are few areas in Europe that produce peaches with a Protected Designation of Origin (PDO), and the Calanda area is one of them. The aim of this work is to describe consumers' preferences towards late season PDO Calanda peaches in the city of Zaragoza, Spain, using a bottom-up model. The bottom-up model provides a greater amount of information than top-down models. In this approach, one utility function is estimated per consumer. Thus, it is not necessary to make assumptions about preference distributions and correlations across respondents. It was observed that preference distributions were neither normal nor independent. If those preferences were estimated by top-down models, conclusions would be biased. This paper also explores a new way to describe preferences through individual utility functions. Results show that the largest behavioural group comprised origin-sensitive consumers. Their utility increased if the peaches were produced in the Calanda area and, especially, when peaches had the PDO Calanda brand. The second most valuable attribute for consumers was price. Peach size and packaging were not as important in the purchase decision. Nevertheless, it is advisable to avoid trading the smallest peaches (weighing around 160 g/fruit). Traders also have to be careful when using active packaging: it was found that a group of consumers disliked this kind of product, probably because they perceived it as less natural. (Author)

  7. Top-down and bottom-up forces interact at thermal range extremes on American lobster.

    Science.gov (United States)

    Boudreau, Stephanie A; Anderson, Sean C; Worm, Boris

    2015-05-01

    Exploited marine populations are thought to be regulated by the effects of fishing, species interactions and climate. Yet, it is unclear how these forces interact and vary across a species' range. We conducted a meta-analysis of American lobster (Homarus americanus) abundance data throughout the entirety of the species' range, testing competing hypotheses about bottom-up (climate, temperature) vs. top-down (predation, fishing) regulation along a strong thermal gradient. Our results suggest an interaction between predation and thermal range - predation effects dominated at the cold and warm extremes, but not at the centre of the species' range. Similarly, there was consistent support for a positive climate effect on lobster recruitment at warm range extremes. In contrast, fishing effort followed, rather than led changes in lobster abundance over time. Our analysis suggests that the relative effects of top-down and bottom-up forcing in regulating marine populations may intensify at thermal range boundaries and weaken at the core of a species' range.

  8. Painful faces-induced attentional blink modulated by top-down and bottom-up mechanisms

    Directory of Open Access Journals (Sweden)

    Chun eZheng

    2015-06-01

    Full Text Available Pain-related stimuli can capture attention in an automatic (bottom-up) or intentional (top-down) fashion. Previous studies have examined attentional capture by pain-related information using spatial attention paradigms that involve mainly a bottom-up mechanism. In the current study, we investigated the pain information–induced attentional blink (AB) using a rapid serial visual presentation (RSVP) task, and compared the effects of task-irrelevant and task-relevant pain distractors. Relationships between accuracy of target identification and individual traits (i.e., empathy and catastrophizing thinking about pain) were also examined. The results demonstrated that task-relevant painful faces had a significant pain information–induced AB effect, whereas task-irrelevant faces showed a near-significant trend of this effect, supporting the notion that pain-related stimuli can influence the temporal dynamics of attention. Furthermore, we found a significant negative correlation between response accuracy and pain catastrophizing score in task-relevant trials. These findings suggest that active scanning of environmental information related to pain produces greater deficits in cognition than does unintentional attention toward pain, which may represent the different ways in which healthy individuals and patients with chronic pain process pain-relevant information. These results may provide insight into the understanding of maladaptive attentional processing in patients with chronic pain.

  9. Painful faces-induced attentional blink modulated by top-down and bottom-up mechanisms.

    Science.gov (United States)

    Zheng, Chun; Wang, Jin-Yan; Luo, Fei

    2015-01-01

    Pain-related stimuli can capture attention in an automatic (bottom-up) or intentional (top-down) fashion. Previous studies have examined attentional capture by pain-related information using spatial attention paradigms that involve mainly a bottom-up mechanism. In the current study, we investigated the pain information-induced attentional blink (AB) using a rapid serial visual presentation (RSVP) task, and compared the effects of task-irrelevant and task-relevant pain distractors. Relationships between accuracy of target identification and individual traits (i.e., empathy and catastrophizing thinking about pain) were also examined. The results demonstrated that task-relevant painful faces had a significant pain information-induced AB effect, whereas task-irrelevant faces showed a near-significant trend of this effect, supporting the notion that pain-related stimuli can influence the temporal dynamics of attention. Furthermore, we found a significant negative correlation between response accuracy and pain catastrophizing score in task-relevant trials. These findings suggest that active scanning of environmental information related to pain produces greater deficits in cognition than does unintentional attention toward pain, which may represent the different ways in which healthy individuals and patients with chronic pain process pain-relevant information. These results may provide insight into the understanding of maladaptive attentional processing in patients with chronic pain.

  10. Top-down and bottom-up definitions of human failure events in human reliability analysis

    Energy Technology Data Exchange (ETDEWEB)

    Boring, Ronald Laurids [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2014-10-01

    In the probabilistic risk assessments (PRAs) used in the nuclear industry, human failure events (HFEs) are determined as a subset of hardware failures, namely those hardware failures that could be triggered by human action or inaction. This approach is top-down, starting with hardware faults and deducing human contributions to those faults. Elsewhere, more traditional human-factors-driven approaches tend to look first for opportunities for human error in a task analysis and then identify which of those errors are risk-significant. The intersection of top-down and bottom-up approaches to defining HFEs has not been carefully studied. Ideally, both approaches should arrive at the same set of HFEs. This question is crucial, however, as human reliability analysis (HRA) methods are generalized to new domains like oil and gas. The HFEs used in nuclear PRAs tend to be top-down, defined as a subset of the PRA, whereas the HFEs used in petroleum quantitative risk assessments (QRAs) often tend to be bottom-up, derived from a task analysis conducted by human factors experts. The marriage of these approaches is necessary in order to ensure that HRA methods developed for top-down HFEs are also sufficient for bottom-up applications.
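
    The point that both approaches should ideally arrive at the same set of HFEs can be illustrated as a simple set comparison; the event names below are invented for illustration.

```python
# Toy comparison of HFE sets derived top-down (from a PRA) and
# bottom-up (from a task analysis). All event names are invented.

top_down_hfes = {"fail-to-start-pump", "miscalibrate-sensor", "ignore-alarm"}
bottom_up_hfes = {"miscalibrate-sensor", "ignore-alarm", "skip-checklist-step"}

common = top_down_hfes & bottom_up_hfes          # events both approaches find
only_top_down = top_down_hfes - bottom_up_hfes   # hardware-anchored events the task analysis missed
only_bottom_up = bottom_up_hfes - top_down_hfes  # task-level errors with no modelled hardware fault

print(sorted(common))
print(sorted(only_top_down))
print(sorted(only_bottom_up))
```
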

  11. Bottom-up processing and low temperature transport properties of polycrystalline SnSe

    Energy Technology Data Exchange (ETDEWEB)

    Ge, Zhen-Hua; Wei, Kaya; Lewis, Hutton [Department of Physics, University of South Florida, Tampa, FL 33620 (United States); Martin, Joshua [Materials Measurement Science Division, National Institute of Standards and Technology, Gaithersburg, MD 20899 (United States); Nolas, George S., E-mail: gnolas@usf.edu [Department of Physics, University of South Florida, Tampa, FL 33620 (United States)

    2015-05-15

    A hydrothermal approach was employed to efficiently synthesize SnSe nanorods. The nanorods were consolidated into polycrystalline SnSe by spark plasma sintering for low temperature electrical and thermal properties characterization. The low temperature transport properties indicate semiconducting behavior with a typical dielectric temperature dependence of the thermal conductivity. The transport properties are discussed in light of the recent interest in this material for thermoelectric applications. The nanorod growth mechanism is also discussed in detail.

    Graphical abstract: SnSe nanorods were synthesized by a simple hydrothermal method through a bottom-up approach. Micron-sized flower-like crystals changed to nanorods with increasing hydrothermal temperature. Low temperature transport properties of polycrystalline SnSe, after SPS densification, were reported for the first time. This bottom-up synthetic approach can be used to produce phase-pure dense polycrystalline materials for thermoelectric applications.

    Highlights: • SnSe nanorods were synthesized by a simple and efficient hydrothermal approach. • The role of temperature, time and NaOH content was investigated. • SPS densification allowed for low temperature transport properties measurements. • Transport measurements indicate semiconducting behavior.

  12. A bottom-up approach for the synthesis of highly ordered fullerene-intercalated graphene hybrids

    Directory of Open Access Journals (Sweden)

    Dimitrios eGournis

    2015-02-01

    Full Text Available Much of the research effort on graphene focuses on its use as a building block for the development of new hybrid nanostructures with well-defined dimensions and properties suitable for applications such as gas storage, heterogeneous catalysis, gas/liquid separations, nanosensing and biomedicine. Towards this aim, here we describe a new bottom-up approach, which combines self-assembly with the Langmuir-Schaefer deposition technique, to synthesize graphene-based layered hybrid materials hosting fullerene molecules within the interlayer space. Our film preparation consists of a bottom-up layer-by-layer process that proceeds via the formation of a hybrid organo-graphene oxide Langmuir film. The structure and composition of these hybrid fullerene-containing thin multilayers, deposited on hydrophobic substrates, were characterized by a combination of X-ray diffraction, Raman and X-ray photoelectron spectroscopies, atomic force microscopy and conductivity measurements. The latter revealed that the presence of C60 within the interlayer spacing leads to an increase in the electrical conductivity of the hybrid material as compared to the organo-graphene matrix alone.

  13. A bottom up approach for engineering catchments through sustainable runoff management

    Science.gov (United States)

    Wilkinson, M.; Quinn, P. F.; Jonczyk, J.; Burke, S.

    2010-12-01

    There is no doubt that our catchments are under great stress. There have been many accounts around the world of severe flood events and water quality issues within channels. As a result, ecological habitats in rivers are also under pressure. Within the United Kingdom, all these issues have been identified as key target areas for policy. Traditionally this has been managed by a policy-driven, top-down approach, which is usually ineffective: a 'one size fits all' attitude often does not work. This paper presents a case study in northern England whereby a bottom-up approach is applied to the multipurpose management of catchments at the source (in the order of 1-10 km2). This includes simultaneously tackling water quality, flooding and ecological issues by creating sustainable runoff management solutions such as storage ponds, wetlands, beaver dams and willow riparian features. In order to identify the prevailing issues in a specific catchment, full and transparent stakeholder engagement is essential, with everybody who has a vested interest in the catchment being involved from the beginning. These problems can then be dealt with through the use of a novel catchment management toolkit, which is transferable to catchments of similar scale. Evidence collected on the ground also allows for upscaling of the toolkit. The process gathers scientific evidence about the effectiveness of existing or new measures, which can really change the catchment functions. Still, we need to get better at communicating the science to policy makers, and policy must therefore facilitate a bottom-up approach to land and water management. We show a test site for this approach in the Belford Burn catchment (6 km2), northern England. This catchment has problems with flooding and water quality. Increased sediment loads are affecting the nearby estuary, which is an important ecological zone, and numerous floods have affected the local village. A catchment engineering toolkit has been

  14. Bottom-up communication. Identifying opportunities and limitations through an exploratory field-based evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Wilson, C.; Irvine, K.N. [Institute of Energy and Sustainable Development, De Montfort University, Leicester, LE1 9BH (United Kingdom)

    2013-02-15

    Communication to promote behaviours like energy saving can use significant resources. What is less clear is the comparative value of different approaches available to communicators. While it is generally agreed that 'bottom-up' approaches, where individuals are actively involved rather than passive, are preferable to 'top-down' authority-led projects, there is a dearth of evidence that verifies why this should be. Additionally, while the literature has examined the mechanics of the different approaches, less attention has been paid to the associated psychological implications. This paper reports on an exploratory comparative study that examined the effects of six distinct communication activities. The activities used different communication approaches, some participative and others more top-down informational. Two theories, from behavioural studies and communication, were used to identify key variables for consideration in this field-based evaluation. The evaluation aimed to assess not just which activity might be most successful, as this has limited generalisability, but also to gain insight into what psychological impacts might contribute to success. Analysis found support for the general hypothesis that bottom-up approaches have more impact on behaviour change than top-down. The study also identified that, in this instance, the difference in reported behaviour across the activities related partly to the extent to which intentions to change behaviour were implemented. One possible explanation for the difference in reported behaviour change across the activities is that a bottom-up approach may offer a supportive environment where participants can discuss progress with like-minded individuals. A further possible explanation is that despite controlling for intention at an individual level, the pre-existence of strong intentions may have an effect on group success. These suggestive findings point toward the critical need for additional and larger-scale studies.

  15. Metatranscriptomic Evidence for Co-Occurring Top-Down and Bottom-Up Controls on Toxic Cyanobacterial Communities

    Science.gov (United States)

    Steffen, Morgan M.; Belisle, B. Shafer; Watson, Sue B.; Boyer, Gregory L.; Bourbonniere, Richard A.

    2015-01-01

    Little is known about the molecular and physiological function of co-occurring microbes within freshwater cyanobacterial harmful algal blooms (cHABs). To address this, community metatranscriptomes collected from the western basin of Lake Erie during August 2012 were examined. Using sequence data, we tested the hypothesis that the activity of the microbial community members is independent of community structure. Predicted metabolic and physiological functional profiles from spatially distinct metatranscriptomes were determined to be ≥90% similar between sites. Targeted analysis of Microcystis aeruginosa, the historical causative agent of cyanobacterial harmful algal blooms over the past ∼20 years, as well as analysis of Planktothrix agardhii and Anabaena cylindrica, revealed ongoing transcription of genes involved in microcystin toxin synthesis as well as the acquisition of both nitrogen and phosphorus, nutrients often implicated as independent bottom-up drivers of eutrophication in aquatic systems. Transcription of genes involved in carbon dioxide (CO2) concentration and metabolism also provided support for the alternate hypothesis that high-pH conditions and dense algal biomass result in CO2-limiting conditions that further favor cyanobacterial dominance. Additionally, the presence of Microcystis-specific cyanophage sequences provided preliminary evidence of possible top-down virus-mediated control of cHAB populations. Overall, these data provide insight into the complex series of constraints associated with Microcystis blooms that dominate the western basin of Lake Erie during summer months, demonstrating that multiple environmental factors work to shape the microbial community. PMID:25662977

  16. Biocompatible PEGylated MoS2 nanosheets: controllable bottom-up synthesis and highly efficient photothermal regression of tumor.

    Science.gov (United States)

    Wang, Shige; Li, Kai; Chen, Yu; Chen, Hangrong; Ma, Ming; Feng, Jingwei; Zhao, Qinghua; Shi, Jianlin

    2015-01-01

    Two-dimensional transition metal dichalcogenides, particularly MoS2 nanosheets, have been deemed a novel category of NIR photothermal transducing agents. Herein, an efficient and versatile one-pot solvothermal synthesis based on a "bottom-up" strategy is proposed, for the first time, for the controlled synthesis of PEGylated MoS2 nanosheets using a novel "integrated" precursor containing both Mo and S elements. This facile but unique PEG-mediated solvothermal procedure endowed MoS2 nanosheets with controlled size, increased crystallinity and excellent colloidal stability. The photothermal performance of the nanosheets was optimized via modulating the particulate size and surface PEGylation. PEGylated MoS2 nanosheets with the desired photothermal conversion performance and excellent colloidal and photothermal stability were further utilized for highly efficient photothermal therapy of cancer in a tumor-bearing mouse xenograft. Without showing observable in vitro and in vivo hemolysis, coagulation or toxicity, the optimized MoS2-PEG nanosheets showed promising in vitro and in vivo anti-cancer efficacy.

  17. Enhancing bottom-up and top-down proteomic measurements with ion mobility separations.

    Science.gov (United States)

    Baker, Erin Shammel; Burnum-Johnson, Kristin E; Ibrahim, Yehia M; Orton, Daniel J; Monroe, Matthew E; Kelly, Ryan T; Moore, Ronald J; Zhang, Xing; Théberge, Roger; Costello, Catherine E; Smith, Richard D

    2015-08-01

    Proteomic measurements with greater throughput, sensitivity, and structural information are essential for improving both in-depth characterization of complex mixtures and targeted studies. While LC separation coupled with MS (LC-MS) measurements has provided information on thousands of proteins in different sample types, the introduction of a separation stage that provides further component resolution and rapid structural information has many benefits in proteomic analyses. Technical advances in ion transmission and data acquisition have made ion mobility separations (IMS) an opportune technology to be easily and effectively incorporated into LC-MS proteomic measurements for enhancing their information content. Herein, we report on applications illustrating increased sensitivity, throughput, and structural information by utilizing IMS-MS and LC-IMS-MS measurements for both bottom-up and top-down proteomics measurements.

  18. A bottom-up perspective on leadership of collaborative innovation in the public sector

    DEFF Research Database (Denmark)

    Hansen, Jesper Rohr

    The thesis investigates how new forms of public leadership can contribute to solving complex problems in today’s welfare societies through innovation. A bottom-up type of leadership for collaborative innovation addressing wicked problems is theorised, displaying a social constructive process approach to leadership; a theoretical model emphasises that leadership emerges through social processes of recognition. Leadership is recognised by utilising the uncertainty of a wicked problem and innovation to influence collaborators’ sensemaking processes. The empirical setting is the City of Copenhagen... A crucial condition for success is iterative leadership adaptation. In conclusion, the thesis finds that specialized professionals are indeed able to develop politically viable, innovative and collaborative solutions to wicked problems, and that such professionals are able to transform themselves...

  19. Bottom-up synthesis of chiral covalent organic frameworks and their bound capillaries for chiral separation.

    Science.gov (United States)

    Qian, Hai-Long; Yang, Cheng-Xiong; Yan, Xiu-Ping

    2016-07-12

    Covalent organic frameworks (COFs) are a novel class of porous materials, and offer great potential for various applications. However, the applications of COFs in chiral separation and chiral catalysis are largely underexplored due to the very limited chiral COFs available and their challenging synthesis. Here we show a bottom-up strategy to construct chiral COFs and an in situ growth approach to fabricate chiral COF-bound capillary columns for chiral gas chromatography. We incorporate the chiral centres into one of the organic ligands for the synthesis of the chiral COFs. We subsequently in situ prepare the COF-bound capillary columns. The prepared chiral COFs and their bound capillary columns give high resolution for the separation of enantiomers with excellent repeatability and reproducibility. The proposed strategy provides a promising platform for the synthesis of chiral COFs and their chiral separation application.

  20. Bottom-Up Meets Top-Down: Patchy Hybrid Nonwovens as an Efficient Catalysis Platform.

    Science.gov (United States)

    Schöbel, Judith; Burgard, Matthias; Hils, Christian; Dersch, Roland; Dulle, Martin; Volk, Kirsten; Karg, Matthias; Greiner, Andreas; Schmalz, Holger

    2017-01-02

    Heterogeneous catalysis with supported nanoparticles (NPs) is a highly active field of research. However, the efficient stabilization of NPs without deteriorating their catalytic activity is challenging. By combining top-down (coaxial electrospinning) and bottom-up (crystallization-driven self-assembly) approaches, we prepared patchy nonwovens with functional, nanometer-sized patches on the surface. These patches can selectively bind and efficiently stabilize gold nanoparticles (AuNPs). The use of these AuNP-loaded patchy nonwovens in the alcoholysis of dimethylphenylsilane led to full conversion under comparably mild conditions and in short reaction times. The absence of gold leaching or a slowing down of the reaction even after ten subsequent cycles manifests the excellent reusability of this catalyst system. The flexibility of the presented approach allows for easy transfer to other nonwoven supports and catalytically active NPs, which promises broad applicability.

  1. Bottom-up fabrication of zwitterionic polymer brushes on intraocular lens for improved biocompatibility

    Science.gov (United States)

    Han, Yuemei; Xu, Xu; Tang, Junmei; Shen, Chenghui; Lin, Quankui; Chen, Hao

    2017-01-01

    Intraocular lens (IOL) is an efficient implantable device commonly used for treating cataracts. However, bioadhesion of bacteria or residual lens epithelial cells on the IOL surface after surgery causes postoperative complications, such as endophthalmitis or posterior capsular opacification, and leads to loss of sight again. In the present study, zwitterionic polymer brushes were fabricated on the IOL surface via bottom-up grafting procedure. The attenuated total reflection-Fourier transform infrared and contact angle measurements indicated successful surface modification, as well as excellent hydrophilicity. The coating of hydrophilic zwitterionic polymer effectively decreased the bioadhesion of lens epithelial cells or bacteria. In vivo intraocular implantation results showed good in vivo biocompatibility of zwitterionic IOL and its effectiveness against postoperative complications. PMID:28053528

  2. Bottom-up and top-down controls on picoplankton in the East China Sea

    Science.gov (United States)

    Guo, C.; Liu, H.; Zheng, L.; Song, S.; Chen, B.; Huang, B.

    2013-05-01

    Dynamics of picoplankton population distribution in the East China Sea (ECS), a marginal sea in the western North Pacific Ocean, were studied during two "CHOICE-C" cruises in August 2009 (summer) and January 2010 (winter). Dilution experiments were conducted during the two cruises to investigate growth and grazing among picophytoplankton populations. Picoplankton accounted for an average of ~29% (range 2% to 88%) of community carbon biomass in the ECS, with lower percentages in the plume region than in the shelf and Kuroshio regions. Averaged growth rates (μ) for Prochlorococcus (Pro), Synechococcus (Syn) and picoeukaryotes (peuk) were 0.36, 0.89 and 0.90 d−1, respectively, in summer, and 0.46, 0.58 and 0.56 d−1, respectively, in winter. Seawater salinity and nutrient availability exerted significant controls on picoplankton growth rates. Averaged grazing mortalities (m) were 0.46, 0.63 and 0.68 d−1 in summer, and 0.22, 0.32 and 0.22 d−1 in winter, for Pro, Syn and peuk respectively. The three populations showed very different distribution patterns, regionally and seasonally, shaped by both bottom-up and top-down controls. In summer, Pro, Syn and peuk were dominant in the Kuroshio, transitional and plume regions respectively. Protist grazing consumed 84%, 78% and 73% of production for Pro, Syn and peuk in summer, and 45%, 47% and 57% in winter, suggesting stronger top-down control in summer. In winter, all three populations tended to distribute in offshore regions, although the area of coverage differed (peuk > Syn > Pro). Bottom-up factors explained as much as 91.5%, 82% and 81.2% of Pro, Syn and peuk abundance variance in winter, but only 59.1% and 43.7% for Pro and peuk in summer. Regionally, Yangtze River discharge plays a significant role in modulating the intensity of top-down control, indicated by a significant negative association between salinity and grazing mortality for all three populations and a higher grazing mortality to growth rate ratio
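    The dilution-experiment rates quoted above combine as a net population rate r = μ − m. A minimal sketch (values taken directly from the abstract; per-day units) computes the net rates for each population and season:

```python
# Net population rate r = mu - m from the dilution-experiment estimates
# quoted in the abstract (units: per day). Positive r means growth
# outpaces protist grazing mortality.
growth = {  # mu, d^-1: (summer, winter)
    "Prochlorococcus": (0.36, 0.46),
    "Synechococcus":   (0.89, 0.58),
    "picoeukaryotes":  (0.90, 0.56),
}
grazing = {  # m, d^-1: (summer, winter)
    "Prochlorococcus": (0.46, 0.22),
    "Synechococcus":   (0.63, 0.32),
    "picoeukaryotes":  (0.68, 0.22),
}

def net_rates(pop):
    """Return (summer, winter) net rates r = mu - m for a population."""
    return tuple(round(mu - m, 2) for mu, m in zip(growth[pop], grazing[pop]))

for pop in growth:
    print(pop, net_rates(pop))
```

    Prochlorococcus, for example, comes out net-negative in summer (grazing 0.46 exceeded growth 0.36) but net-positive in winter, consistent with the stronger summer top-down control reported in the abstract.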

  3. Bottom-up and top-down controls on picoplankton in the East China Sea

    Directory of Open Access Journals (Sweden)

    C. Guo

    2013-05-01

    Full Text Available Dynamics of picoplankton population distribution in the East China Sea (ECS), a marginal sea in the western North Pacific Ocean, were studied during two "CHOICE-C" cruises in August 2009 (summer) and January 2010 (winter). Dilution experiments were conducted during the two cruises to investigate the growth and grazing among picophytoplankton populations. Picoplankton accounted for an average of ~29% (2% to 88%) of community carbon biomass in the ECS, with lower percentages in the plume region than in the shelf and Kuroshio regions. Averaged growth rates (μ) for Prochlorococcus (Pro), Synechococcus (Syn) and picoeukaryotes (peuk) were 0.36, 0.89, 0.90 d−1, respectively, in summer, and 0.46, 0.58, 0.56 d−1, respectively, in winter. Seawater salinity and nutrient availability exerted significant controls on picoplankton growth rate. Averaged grazing mortality (m) were 0.46, 0.63, 0.68 d−1 in summer, and 0.22, 0.32, 0.22 d−1 in winter for Pro, Syn and peuk respectively. The three populations demonstrated very different distribution patterns regionally and seasonally, affected by both bottom-up and top-down controls. In summer, Pro, Syn and peuk were dominant in the Kuroshio, transitional and plume regions respectively. Protist grazing consumed 84%, 78%, 73% and 45%, 47%, 57% of production for Pro, Syn and peuk in summer and winter respectively, suggesting more significant top-down controls in summer. In winter, all three populations tended to distribute in offshore regions, although the area of coverage was different (peuk > Syn > Pro). Bottom-up factors can explain as much as 91.5%, 82% and 81.2% of Pro, Syn and peuk abundance variance in winter, while only 59.1% and 43.7% for Pro and peuk in summer. Regionally, Yangtze River discharge plays a significant role in affecting the intensity of top-down control, indicated by significant and negative association between salinity and grazing mortality of all three populations and higher grazing mortality to

  4. Collective Inclusioning: A Grounded Theory of a Bottom-Up Approach to Innovation and Leading

    Directory of Open Access Journals (Sweden)

    Michal Lysek

    2016-06-01

    Full Text Available This paper is a grounded theory study of how leaders (e.g., entrepreneurs, managers, etc.) engage people in challenging undertakings (e.g., innovation) that require everyone’s commitment to such a degree that they would have to go beyond what could be reasonably expected in order to succeed. Company leaders sometimes wonder why their employees no longer show the same responsibility towards their work, and why they are more concerned with internal politics than solving customer problems. It is because company leaders no longer apply collective inclusioning to the same extent as they did in the past. Collective inclusioning can be applied in four ways: by convincing, afinitizing, goal congruencing, and engaging. It can lead to fostering strong units of people for taking on challenging undertakings. Collective inclusioning is a complementing theory to other strategic management and leading theories. It offers a new perspective on how to implement a bottom-up approach to innovation.

  5. Bottom-up synthesis of chiral covalent organic frameworks and their bound capillaries for chiral separation

    Science.gov (United States)

    Qian, Hai-Long; Yang, Cheng-Xiong; Yan, Xiu-Ping

    2016-07-01

    Covalent organic frameworks (COFs) are a novel class of porous materials, and offer great potential for various applications. However, the applications of COFs in chiral separation and chiral catalysis are largely underexplored due to the very limited chiral COFs available and their challenging synthesis. Here we show a bottom-up strategy to construct chiral COFs and an in situ growth approach to fabricate chiral COF-bound capillary columns for chiral gas chromatography. We incorporate the chiral centres into one of the organic ligands for the synthesis of the chiral COFs. We subsequently in situ prepare the COF-bound capillary columns. The prepared chiral COFs and their bound capillary columns give high resolution for the separation of enantiomers with excellent repeatability and reproducibility. The proposed strategy provides a promising platform for the synthesis of chiral COFs and their chiral separation application.

  6. Differential recolonization of Atlantic intertidal habitats after disturbance reveals potential bottom-up community regulation

    Science.gov (United States)

    Petzold, Willy; Scrosati, Ricardo A.

    2014-01-01

    In the spring of 2014, abundant sea ice that drifted out of the Gulf of St. Lawrence caused extensive disturbance in rocky intertidal habitats on the northern Atlantic coast of mainland Nova Scotia, Canada. To monitor recovery of intertidal communities, we surveyed two wave-exposed locations in the early summer of 2014. Barnacle recruitment and the abundance of predatory dogwhelks were low at one location (Tor Bay Provincial Park) but more than 20 times higher at the other location (Whitehead). Satellite data indicated that the abundance of coastal phytoplankton (the main food source for barnacle larvae) was consistently higher at Whitehead just before the barnacle recruitment season, when barnacle larvae were in the water column. These observations suggest bottom-up forcing of intertidal communities. The underlying mechanisms and their intensity along the NW Atlantic coast could be investigated through studies done at local and regional scales. PMID:26213609

  7. Single-molecule spectroscopy for plastic electronics: materials analysis from the bottom-up.

    Science.gov (United States)

    Lupton, John M

    2010-04-18

    π-conjugated polymers find a range of applications in electronic devices. These materials are generally highly disordered in terms of chain length and chain conformation, besides being influenced by a variety of chemical and physical defects. Although this characteristic can be of benefit in certain device applications, disorder severely complicates materials analysis. Accurate analytical techniques are, however, crucial to optimising synthetic procedures and assessing overall material purity. Fortunately, single-molecule spectroscopic techniques have emerged as an unlikely but uniquely powerful approach to unraveling intrinsic material properties from the bottom up. Building on the success of such techniques in the life sciences, single-molecule spectroscopy is finding increasing applicability in materials science, effectively enabling the dissection of the bulk down to the level of the individual molecular constituent. This article reviews recent progress in single-molecule spectroscopy of conjugated polymers as used in organic electronics.

  8. Ion mobility tandem mass spectrometry enhances performance of bottom-up proteomics.

    Science.gov (United States)

    Helm, Dominic; Vissers, Johannes P C; Hughes, Christopher J; Hahne, Hannes; Ruprecht, Benjamin; Pachl, Fiona; Grzyb, Arkadiusz; Richardson, Keith; Wildgoose, Jason; Maier, Stefan K; Marx, Harald; Wilhelm, Mathias; Becher, Isabelle; Lemeer, Simone; Bantscheff, Marcus; Langridge, James I; Kuster, Bernhard

    2014-12-01

    One of the limiting factors in determining the sensitivity of tandem mass spectrometry using hybrid quadrupole orthogonal acceleration time-of-flight instruments is the duty cycle of the orthogonal ion injection system. As a consequence, only a fraction of the generated fragment ion beam is collected by the time-of-flight analyzer. Here we describe a method utilizing postfragmentation ion mobility spectrometry of peptide fragment ions in conjunction with mobility time synchronized orthogonal ion injection leading to a substantially improved duty cycle and a concomitant improvement in sensitivity of up to 10-fold for bottom-up proteomic experiments. This enabled the identification of 7500 human proteins within 1 day and 8600 phosphorylation sites within 5 h of LC-MS/MS time. The method also proved powerful for multiplexed quantification experiments using tandem mass tags exemplified by the chemoproteomic interaction analysis of histone deacetylases with Trichostatin A.

  9. Strain Response of Hot-Mix Asphalt Overlays for Bottom-Up Reflective Cracking

    CERN Document Server

    Ghauch, Ziad G

    2011-01-01

    This paper examines the strain response of typical HMA overlays above jointed PCC slabs prone to bottom-up reflective cracking. The occurrence of reflective cracking under the combined effect of traffic and environmental loading significantly reduces the design life of an HMA overlay and can lead to its premature failure. In this context, viscoelastic material properties combined with cyclic vehicle loadings and pavement temperature distribution were implemented in a series of FE models in order to study the evolution of horizontal tensile and shear strains at the bottom of the HMA overlay. The effect of several design parameters, such as subbase and subgrade moduli, vehicle speed, overlay thickness, and temperature condition, on the horizontal and shear strain response was investigated. Results obtained show that the rates of horizontal and shear strain increase at the bottom of the HMA overlay drop with higher vehicle speed, higher subgrade modulus, and higher subbase modulus. Moreover, the rate of horizon...

  10. Unsupervised Tattoo Segmentation Combining Bottom-Up and Top-Down Cues

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Josef D [ORNL

    2011-01-01

    Tattoo segmentation is challenging due to the complexity and large variance in tattoo structures. We have developed a segmentation algorithm for finding tattoos in an image. Our basic idea is split-merge: split each tattoo image into clusters through a bottom-up process, learn to merge the clusters containing skin, and then distinguish tattoo from the other skin via a top-down prior in the image itself. Tattoo segmentation with an unknown number of clusters is thereby transferred to a figure-ground segmentation. We have applied our segmentation algorithm on a tattoo dataset and the results have shown that our tattoo segmentation system is efficient and suitable for further tattoo classification and retrieval purposes.

  11. The ideological divide and climate change opinion: "top-down" and "bottom-up" approaches.

    Science.gov (United States)

    Jacquet, Jennifer; Dietrich, Monica; Jost, John T

    2014-01-01

    The United States wields disproportionate global influence in terms of carbon dioxide emissions and international climate policy. This makes it an especially important context in which to examine the interplay among social, psychological, and political factors in shaping attitudes and behaviors related to climate change. In this article, we review the emerging literature addressing the liberal-conservative divide in the U.S. with respect to thought, communication, and action concerning climate change. Because of its theoretical and practical significance, we focus on the motivational basis for skepticism and inaction on the part of some, including "top-down" institutional forces, such as corporate strategy, and "bottom-up" psychological factors, such as ego, group, and system justification. Although more research is needed to elucidate fully the social, cognitive, and motivational bases of environmental attitudes and behavior, a great deal has been learned in just a few years by focusing on specific ideological factors in addition to general psychological principles.

  12. Differential recolonization of Atlantic intertidal habitats after disturbance reveals potential bottom-up community regulation.

    Science.gov (United States)

    Petzold, Willy; Scrosati, Ricardo A

    2014-01-01

    In the spring of 2014, abundant sea ice that drifted out of the Gulf of St. Lawrence caused extensive disturbance in rocky intertidal habitats on the northern Atlantic coast of mainland Nova Scotia, Canada. To monitor recovery of intertidal communities, we surveyed two wave-exposed locations in the early summer of 2014. Barnacle recruitment and the abundance of predatory dogwhelks were low at one location (Tor Bay Provincial Park) but more than 20 times higher at the other location (Whitehead). Satellite data indicated that the abundance of coastal phytoplankton (the main food source for barnacle larvae) was consistently higher at Whitehead just before the barnacle recruitment season, when barnacle larvae were in the water column. These observations suggest bottom-up forcing of intertidal communities. The underlying mechanisms and their intensity along the NW Atlantic coast could be investigated through studies done at local and regional scales.

  13. Bottom-up effects of a no-take zone on endangered penguin demographics.

    Science.gov (United States)

    Sherley, Richard B; Winker, Henning; Altwegg, Res; van der Lingen, Carl D; Votier, Stephen C; Crawford, Robert J M

    2015-07-01

    Marine no-take zones can have positive impacts for target species and are increasingly important management tools. However, whether they indirectly benefit higher order predators remains unclear. The endangered African penguin (Spheniscus demersus) depends on commercially exploited forage fish. We examined how chick survival responded to an experimental 3-year fishery closure around Robben Island, South Africa, controlling for variation in prey biomass and fishery catches. Chick survival increased by 18% when the closure was initiated, which alone led to a predicted 27% higher population compared with continued fishing. However, the modelled population continued to decline, probably because of high adult mortality linked to poor prey availability over larger spatial scales. Our results illustrate that small no-take zones can have bottom-up benefits for highly mobile marine predators, but are only one component of holistic, ecosystem-based management regimes.

  14. Comparing top-down and bottom-up costing approaches for economic evaluation within social welfare.

    Science.gov (United States)

    Olsson, Tina M

    2011-10-01

    This study compares two approaches to the estimation of social welfare intervention costs, one "top-down" and the other "bottom-up", for a group of social welfare clients with severe problem behavior participating in a randomized trial. Intervention costs over a two-year period were compared by intervention category (foster care placement, institutional placement, mentorship services, individual support services and structured support services), estimation method (price, micro costing, average cost) and treatment group (intervention, control). Analyses are based upon 2007 costs for 156 individuals receiving 404 interventions. Overall, both approaches were found to produce reliable estimates of intervention costs at the group level but not at the individual level. As the choice of approach can greatly affect the estimated mean difference, adjustment based on estimation approach should be incorporated into sensitivity analyses. Analysts must take care in assessing the purpose and perspective of the analysis when choosing a costing approach for use within economic evaluation.
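    The contrast between the two costing approaches can be sketched as follows; the service names, unit costs and budget figure are hypothetical placeholders, not values from the study:

```python
# Top-down costing spreads an aggregate budget across clients; bottom-up
# (micro-)costing sums the unit cost of each service a client actually used.
# All numbers below are illustrative assumptions, not data from the paper.

def top_down_cost(total_programme_budget, n_clients):
    """Top-down: divide an aggregate budget evenly across clients."""
    return total_programme_budget / n_clients

def bottom_up_cost(services_used, unit_costs):
    """Bottom-up: sum unit cost x quantity over each recorded service."""
    return sum(unit_costs[s] * qty for s, qty in services_used.items())

unit_costs = {"foster_care_day": 120.0, "mentorship_hour": 45.0}

# One client's recorded service use over the follow-up period.
client = {"foster_care_day": 30, "mentorship_hour": 10}

print(top_down_cost(500_000.0, 156))       # identical for every client
print(bottom_up_cost(client, unit_costs))  # varies client by client
```

    This illustrates the study's finding at a glance: aggregated over the group the two estimates may agree, but the top-down figure is constant across clients and so cannot reproduce individual-level cost differences.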

  15. Bottom-Up Cost Analysis of a High Concentration PV Module; NREL (National Renewable Energy Laboratory)

    Energy Technology Data Exchange (ETDEWEB)

    Horowitz, K.; Woodhouse, M.; Lee, H.; Smestad, G.

    2015-04-13

    We present a bottom-up model of III-V multi-junction cells, as well as a high concentration PV (HCPV) module. We calculate $0.65/Wp(DC) manufacturing costs for our model HCPV module design with today’s capabilities, and find that reducing cell costs and increasing module efficiency offer promising pathways for future cost reductions. Cell costs could be significantly reduced via an increase in manufacturing scale, substrate reuse, and improved manufacturing yields. We also identify several other significant drivers of HCPV module costs, including the Fresnel lens primary optic, module housing, thermal management, and the receiver board. These costs could potentially be lowered by employing innovative module designs.
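    The bottom-up method itself is simple to sketch: itemize the component and process costs for one module, then normalize by rated DC power to obtain $/Wp. The component names, dollar values and module rating below are hypothetical placeholders, not NREL figures:

```python
# Bottom-up cost modelling: sum itemized per-module costs, then divide by
# rated DC power to get USD/Wp. The cost line items name the same drivers
# the abstract identifies, but every dollar value here is an assumption.
components = {
    "III-V multijunction cells": 90.0,
    "Fresnel lens primary optic": 25.0,
    "module housing":            30.0,
    "thermal management":        15.0,
    "receiver board":            20.0,
}

def cost_per_wp(component_costs, rated_power_wp):
    """Total per-module cost divided by rated DC power (USD/Wp)."""
    return sum(component_costs.values()) / rated_power_wp

print(round(cost_per_wp(components, 300.0), 2))  # USD/Wp for a 300 Wp module
```

    The structure also makes the abstract's conclusion mechanical: lowering any line item, or raising the rated power via higher module efficiency, directly lowers the $/Wp figure.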

  16. Integration of top-down and bottom-up information for audio organization and retrieval

    DEFF Research Database (Denmark)

    Jensen, Bjørn Sand

    The increasing availability of digital audio and music calls for methods and systems to analyse and organize these digital objects. This thesis investigates three elements related to such systems, focusing on the ability to represent and elicit the user's view on the multimedia object and the system... sources based on latent Dirichlet allocation (LDA). The model is used to integrate bottom-up features (reflecting timbre, loudness, tempo and chroma), meta-data aspects (lyrics) and top-down aspects, namely user-generated open-vocabulary tags. The model and representation are evaluated on the auxiliary task of genre and style classification. Eliciting the subjective representation and opinion of users is an important aspect in building personalized systems. The thesis contributes a setup for modelling and elicitation of preference and other cognitive aspects, with a focus on audio applications...

  17. Highly directional bottom-up 3D nanoantenna for visible light.

    Science.gov (United States)

    Tong, L; Pakizeh, T; Feuz, L; Dmitriev, A

    2013-01-01

    Controlling light at the nanoscale is of fundamental importance and is essential for applications ranging from optical sensing and metrology to information processing, communications, and quantum optics. Considerable efforts are currently directed towards optical nanoantennas that directionally convert light into strongly localized energy and vice versa. Here we present highly directional 3D nanoantenna operating with visible light. We demonstrate a simple bottom-up approach to produce macroscopic arrays of such nanoantennas and present a way to address their functionality via interaction with quantum dots (QDs), properly embedded in the structure of the nanoantenna. The ease and accessibility of this structurally robust optical antenna device prompts its use as an affordable test bed for concepts in nano-optics and nanophotonics applications.

  18. Bottom-Up Engineering of Well-Defined 3D Microtissues Using Microplatforms and Biomedical Applications.

    Science.gov (United States)

    Lee, Geon Hui; Lee, Jae Seo; Wang, Xiaohong; Lee, Sang Hoon

    2016-01-07

Over recent decades, the engineering of well-defined 3D tissues has attracted great attention because it provides an in vivo-mimicking environment and can serve as a building block for the engineering of bioartificial organs. In this Review, diverse methods of engineering 3D tissues using microscale devices are introduced. Recent progress in microtechnologies has enabled the development of microplatforms for the bottom-up assembly of diversely shaped 3D tissues consisting of various cells. Micro hanging-drop plates, microfluidic chips, and arrayed microwells are typical examples. The encapsulation of cells in hydrogel microspheres and microfibers allows the engineering of 3D microtissues with diverse shapes. Applications of 3D microtissues in biomedical fields are described, and the future direction of microplatform-based engineering of 3D microtissues is discussed.

  19. Implementing collaborative improvement - top-down, bottom-up or both?

    DEFF Research Database (Denmark)

    Kaltoft, Rasmus; Boer, Harry; Caniato, Federico

    2007-01-01

    , the study identifies three different implementation approaches. The bottom-up learning-by-doing approach starts at a practical level, with simple improvement activities, and aims at gradually developing a wide range of CoI knowledge, skills and initiatives. The top-down directive approach starts......The research presented in this article was aimed at increasing the current understanding of the process of developing Collaborative Improvement (CoI) in Extended Manufacturing Enterprises (EME). Based on action research in three EMEs involving a total of 13 companies from five European countries...... with aligning the partners' CoI objectives and an assessment of their collaboration and CoI maturity in order to provide a common platform before actually starting improvement activities. The laissez-faire approach builds on shared goals/vision, meetings on equal terms and joint work, in a non-directive and non...

  20. Reconciling Top Down and Bottom Up Approaches to Understand Land Carbon Cycle Variability

    Science.gov (United States)

    Collatz, G. J.; Gurney, K. R.; Denning, A. S.; Randerson, J. T.; van der Werf, G. R.

    2004-12-01

Two fundamentally different approaches for estimating global carbon sources and sinks have been used over the past 15 years. The so-called "Top-down" approach involves analysis of atmospheric composition and often includes inversions of atmospheric transport. Bottom-up approaches, on the other hand, involve using carbon cycle process models driven by various observational data. Reconciling the results of these two approaches can provide powerful constraints on each but is challenging because of the large uncertainties in atmospheric measurements and transport and in our understanding of the processes controlling biogeochemical cycling of carbon. Recently, the Atmospheric Carbon Inversion Intercomparison (TransCom 3) completed mean seasonal cycle and interannual variability inversions using 12 transport models. Their results include predictions of biogeochemically driven net carbon fluxes with associated uncertainties for the globe divided into 22 regions, half of which are land regions. The cyclo-stationary inversions predicted the mean seasonal cycle as well as the mean sink/source of each region. The interannual inversions predicted the interannual variability in the sources and sinks for each region between 1980 and 2000. This study describes an analysis of the processes controlling biogeochemically driven net carbon fluxes over the seasonal cycle for each of the TransCom land regions. The processes considered are those included in the CASA biogeochemical model. The seasonally variable model inputs include NDVI, temperature, precipitation, solar radiation and burned area. The contributions of NPP, heterotrophic respiration and fire season to the seasonal cycle are evaluated for each of the 11 TransCom 3 land regions. We prescribed plausible scenarios in the biogeochemical model to evaluate the mechanisms responsible for the size and seasonality of the mean annual carbon sinks reported by TransCom 3.
Initial results will also be presented for

  1. Bottom-Up Energy Analysis System (BUENAS). An international appliance efficiency policy tool

    Energy Technology Data Exchange (ETDEWEB)

    McNeil, M.A.; Letschert, V.E.; De la Rue du Can, S.; Ke, Jing [Lawrence Berkeley National Laboratory LBNL, 1 Cyclotron Rd, Berkeley, CA (United States)

    2013-02-15

The Bottom-Up Energy Analysis System (BUENAS) calculates potential energy and greenhouse gas emission impacts of efficiency policies for lighting, heating, ventilation, and air conditioning, appliances, and industrial equipment through 2030. The model includes 16 end use categories and covers 11 individual countries plus the European Union. BUENAS is a bottom-up stock accounting model that predicts energy consumption for each type of equipment in each country according to engineering-based estimates of annual unit energy consumption, scaled by projections of equipment stock. Energy demand in each scenario is determined by equipment stock, usage, intensity, and efficiency. When available, BUENAS uses sales forecasts taken from country studies to project equipment stock. Otherwise, BUENAS uses an econometric model of household appliance uptake developed by the authors. Once the business as usual scenario is established, a high-efficiency policy scenario is constructed that includes an improvement in the efficiency of equipment installed in 2015 or later. Policy case efficiency targets represent current 'best practice' and include standards already established in a major economy or well-defined levels known to enjoy a significant market share in a major economy. BUENAS calculates energy savings according to the difference in energy demand in the two scenarios. Greenhouse gas emission mitigation is then calculated using a forecast of the electricity carbon factor. We find that mitigation of 1075 Mt of annual CO2 emissions is possible by 2030 from adopting current best practices of appliance efficiency policies. This represents a 17% reduction in emissions in the business as usual case in that year.
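
The accounting described above reduces to a simple per-end-use identity: demand equals stock times unit energy consumption, and savings are the difference between the two scenarios. The following is a hedged sketch of that logic with made-up numbers (this is not BUENAS code; the function names and the refrigerator figures are purely illustrative):

```python
# Illustrative sketch of bottom-up stock accounting: demand = stock x unit
# energy consumption (UEC); savings = business-as-usual minus policy scenario.
# All numbers are hypothetical.

def demand_twh(stock_millions, uec_kwh_per_unit):
    """Annual energy demand in TWh for one end use in one country."""
    return stock_millions * 1e6 * uec_kwh_per_unit / 1e9  # kWh -> TWh

def policy_savings(stock_millions, uec_bau, uec_policy, carbon_kg_per_kwh):
    """Energy savings (TWh) and CO2 mitigation (Mt) for one end use and year."""
    saved_twh = demand_twh(stock_millions, uec_bau) - demand_twh(stock_millions, uec_policy)
    # 1 TWh = 1e9 kWh and 1 Mt = 1e9 kg, so Mt CO2 = TWh x (kg CO2 / kWh)
    mitigated_mt = saved_twh * carbon_kg_per_kwh
    return saved_twh, mitigated_mt

# Hypothetical example: 100 million refrigerators, UEC cut from 450 to
# 300 kWh/yr by a standard, grid carbon factor 0.6 kg CO2/kWh
saved, mitigated = policy_savings(100, 450, 300, 0.6)
```

Summing such terms over end uses, countries and years is what yields the aggregate mitigation figures the abstract reports.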

  2. Achieving social-ecological fit through bottom-up collaborative governance: an empirical investigation

    Directory of Open Access Journals (Sweden)

    Angela M. Guerrero

    2015-12-01

Full Text Available Significant benefits can arise from collaborative forms of governance that foster self-organization and flexibility. Likewise, governance systems that fit with the extent and complexity of the system under management are considered essential to our ability to solve environmental problems. However, from an empirical perspective the fundamental question of whether self-organized (bottom-up) collaborative forms of governance are able to accomplish adequate fit is unresolved. We used new theory and methodological approaches underpinned by interdisciplinary network analysis to address this gap by investigating three governance challenges that relate to the problem of fit: shared management of ecological resources, management of interconnected ecological resources, and cross-scale management. We first identified a set of social-ecological network configurations that represent the hypothesized ways in which collaborative arrangements can contribute to addressing these challenges. Using social and ecological data from a large-scale biodiversity conservation initiative in Australia, we empirically determined how well the observed patterns of stakeholder interactions reflect these network configurations. We found that stakeholders collaborate to manage individual parcels of native vegetation, but not for the management of interconnected parcels. In addition, our data show that the collaborative arrangements enable management across different scales (local, regional, supraregional). Our study provides empirical support for the ability of collaborative forms of governance to address the problem of fit, but also suggests that in some cases the establishment of bottom-up collaborative arrangements would likely benefit from specific guidance to facilitate the establishment of collaborations that better align with the ways ecological resources are interconnected across the landscape. In our case study region, this would improve the capacity of stakeholders to

  3. A bottom-up approach of stochastic demand allocation in water quality modelling

    Directory of Open Access Journals (Sweden)

    E. J. M. Blokker

    2010-04-01

    Full Text Available An "all pipes" hydraulic model of a drinking water distribution system was constructed with two types of demand allocations. One is constructed with the conventional top-down approach, i.e. a demand multiplier pattern from the booster station is allocated to all demand nodes with a correction factor to account for the average water demand on that node. The other is constructed with a bottom-up approach of demand allocation, i.e., each individual home is represented by one demand node with its own stochastic water demand pattern. This was done for a drinking water distribution system of approximately 10 km of mains and serving ca. 1000 homes. The system was tested in a real life situation.

    The stochastic water demand patterns were constructed with the end-use model SIMDEUM on a per second basis and per individual home. Before applying the demand patterns in a network model, some temporal aggregation was done. The flow entering the test area was measured and a tracer test with sodium chloride was performed to determine travel times. The two models were validated on the total sum of demands and on travel times.

The study showed that the bottom-up approach leads to realistic water demand patterns and travel times, without the need for any flow measurements or calibration. In the periphery of the drinking water distribution system it is not possible to calibrate models on pressure, because head losses are too low. The study shows that in the periphery it is also difficult to calibrate on water quality (e.g. with tracer measurements), as a consequence of the high variability between days. The stochastic approach of hydraulic modelling gives insight into the variability of travel times as an added feature beyond the conventional way of modelling.
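
The contrast between the two allocation schemes can be sketched in a few lines. SIMDEUM itself is a detailed end-use model; the per-home pattern generator below is only a toy stand-in, and all quantities are illustrative:

```python
# Toy sketch of bottom-up vs. top-down demand allocation. Each home gets its
# own stochastic pattern (bottom-up); the top-down alternative scales one
# shared multiplier pattern. Not SIMDEUM: the event model here is made up.
import random

random.seed(42)  # reproducible toy run

def toy_home_pattern(hours=24):
    """A few random usage events (litres per hour) for one home."""
    demand = [0.0] * hours
    for _ in range(random.randint(2, 6)):           # number of usage events
        demand[random.randrange(hours)] += random.uniform(5.0, 50.0)
    return demand

# Bottom-up: one stochastic pattern per home, summed at the booster station
homes = [toy_home_pattern() for _ in range(1000)]
network_demand = [sum(h[t] for h in homes) for t in range(24)]

# Top-down: every node would instead get this single normalized multiplier
# pattern, scaled by its average demand
total = sum(network_demand)
multiplier = [v / total for v in network_demand]
```

The bottom-up sum preserves the between-home variability that the single multiplier pattern averages away, which is the point the abstract makes about travel times in the periphery.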

  4. Preferential effect of isoflurane on top-down versus bottom-up pathways in sensory cortex

    Directory of Open Access Journals (Sweden)

    Aeyal eRaz

    2014-10-01

Full Text Available The mechanism of loss of consciousness (LOC) under anesthesia is unknown. Because consciousness depends on activity in the cortico-thalamic network, anesthetic actions on this network are likely critical for LOC. Competing theories stress the importance of anesthetic actions on bottom-up ‘core’ thalamo-cortical (TC) versus top-down cortico-cortical (CC) and matrix TC connections. We tested these models using laminar recordings in rat auditory cortex in vivo and in murine brain slices. We selectively activated bottom-up vs. top-down afferent pathways using sensory stimuli in vivo and electrical stimulation in brain slices, and compared effects of isoflurane on responses evoked via the two pathways. Auditory stimuli in vivo and core TC afferent stimulation in brain slices evoked short latency current sinks in middle layers, consistent with activation of core TC afferents. By contrast, visual stimuli in vivo and stimulation of CC and matrix TC afferents in brain slices evoked responses mainly in superficial and deep layers, consistent with projection patterns of top-down afferents that carry visual information to auditory cortex. Responses to auditory stimuli in vivo and core TC afferents in brain slices were significantly less affected by isoflurane compared to responses triggered by visual stimuli in vivo and CC/matrix TC afferents in slices. At a just-hypnotic dose in vivo, auditory responses were enhanced by isoflurane, whereas visual responses were dramatically reduced. At a comparable concentration in slices, isoflurane suppressed both core TC and CC/matrix TC responses, but the effect on the latter responses was far greater than on core TC responses, indicating that at least part of the differential effects observed in vivo were due to local actions of isoflurane in auditory cortex. These data support a model in which disruption of top-down connectivity contributes to anesthesia-induced LOC, and have implications for understanding the neural

  5. How is visual salience computed in the brain? Insights from behaviour, neurobiology and modelling

    Science.gov (United States)

    Veale, Richard; Hafed, Ziad M.

    2017-01-01

    Inherent in visual scene analysis is a bottleneck associated with the need to sequentially sample locations with foveating eye movements. The concept of a ‘saliency map’ topographically encoding stimulus conspicuity over the visual scene has proven to be an efficient predictor of eye movements. Our work reviews insights into the neurobiological implementation of visual salience computation. We start by summarizing the role that different visual brain areas play in salience computation, whether at the level of feature analysis for bottom-up salience or at the level of goal-directed priority maps for output behaviour. We then delve into how a subcortical structure, the superior colliculus (SC), participates in salience computation. The SC represents a visual saliency map via a centre-surround inhibition mechanism in the superficial layers, which feeds into priority selection mechanisms in the deeper layers, thereby affecting saccadic and microsaccadic eye movements. Lateral interactions in the local SC circuit are particularly important for controlling active populations of neurons. This, in turn, might help explain long-range effects, such as those of peripheral cues on tiny microsaccades. Finally, we show how a combination of in vitro neurophysiology and large-scale computational modelling is able to clarify how salience computation is implemented in the local circuit of the SC. This article is part of the themed issue ‘Auditory and visual scene analysis’. PMID:28044023
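
The centre-surround inhibition mechanism attributed to the superficial SC layers can be illustrated with a toy operator: a location is salient to the extent that its local (centre) mean exceeds a broader (surround) mean. This is a generic sketch of the principle, not the paper's large-scale SC model:

```python
# Toy centre-surround operator: response = centre mean - surround mean,
# half-rectified. Patch radii and the test image are illustrative.

def centre_surround(image, r_centre=1, r_surround=3):
    h, w = len(image), len(image[0])
    def patch_mean(y, x, r):
        vals = [image[i][j]
                for i in range(max(0, y - r), min(h, y + r + 1))
                for j in range(max(0, x - r), min(w, x + r + 1))]
        return sum(vals) / len(vals)
    return [[max(0.0, patch_mean(y, x, r_centre) - patch_mean(y, x, r_surround))
             for x in range(w)] for y in range(h)]

# A lone bright spot on a dark background is maximally salient at its location
img = [[0.0] * 7 for _ in range(7)]
img[3][3] = 1.0
sal = centre_surround(img)
```

In the full account, maps like `sal` in the superficial layers would feed priority selection in the deeper layers that drives saccades and microsaccades.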

  6. Olfaction spontaneously highlights visual saliency map.

    Science.gov (United States)

    Chen, Kepu; Zhou, Bin; Chen, Shan; He, Sheng; Zhou, Wen

    2013-10-01

Attention is intrinsic to our perceptual representations of sensory inputs. Best characterized in the visual domain, it is typically depicted as a spotlight moving over a saliency map that topographically encodes strengths of visual features and feedback modulations over the visual scene. By introducing smells to two well-established attentional paradigms, the dot-probe and the visual-search paradigms, we find that a smell reflexively directs attention to the congruent visual image and facilitates visual search of that image without the mediation of visual imagery. Furthermore, such an effect is independent of, and can override, top-down bias. We thus propose that smell quality acts as an object feature whose presence enhances the perceptual saliency of that object, thereby guiding the spotlight of visual attention. Our discoveries provide robust empirical evidence for a multimodal saliency map that weighs not only visual but also olfactory inputs.

  7. Quantifying the uncertainties of a bottom-up emission inventory of anthropogenic atmospheric pollutants in China

    Science.gov (United States)

    Zhao, Y.; Nielsen, C. P.; Lei, Y.; McElroy, M. B.; Hao, J.

    2011-03-01

    The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be -14%~13%, -13%~37%, -11%~38%, -14%~45%, -17%~54%, -25%~136%, and -40%~121%, respectively. Variations at activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties of NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken interpreting these emission uncertainties. Although Monte Carlo simulation yields narrowed estimates of uncertainties compared to previous bottom-up emission
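
The Monte Carlo procedure can be sketched generically: draw the activity level and emission factor from assumed probability distributions, propagate the product, and read off the 95% interval relative to the central estimate. The distributions and parameters below are purely illustrative, not the paper's fitted ones:

```python
# Hedged sketch of Monte Carlo uncertainty propagation for one source:
# emissions = activity level x emission factor. A normal activity distribution
# and a skewed (lognormal) emission factor are assumed for illustration.
import random

random.seed(0)

def mc_emission_interval(n=50000):
    draws = []
    for _ in range(n):
        activity = random.gauss(1000.0, 50.0)       # e.g. fuel burned (arbitrary units)
        factor = random.lognormvariate(0.0, 0.3)    # emission factor, median 1.0
        draws.append(activity * factor)
    draws.sort()
    central = draws[n // 2]                          # median as central estimate
    low, high = draws[int(0.025 * n)], draws[int(0.975 * n)]
    # 95% interval expressed relative to the central estimate, as in the paper
    return central, low / central - 1.0, high / central - 1.0

central, rel_low, rel_high = mc_emission_interval()
```

Note that a skewed emission-factor distribution yields an asymmetric interval (the upper bound further from the centre than the lower), which is the shape of the BC and OC uncertainties quoted above.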

  8. A bottom-up approach to estimating cost elements of REDD+ pilot projects in Tanzania

    Directory of Open Access Journals (Sweden)

    Merger Eduard

    2012-08-01

Full Text Available Abstract Background Several previous global REDD+ cost studies have been conducted, demonstrating that payments for maintaining forest carbon stocks have significant potential to be a cost-effective mechanism for climate change mitigation. These studies have mostly followed highly aggregated top-down approaches without estimating the full range of REDD+ cost elements, thus underestimating the actual costs of REDD+. Based on three REDD+ pilot projects in Tanzania, representing an area of 327,825 ha, this study explicitly adopts a bottom-up approach to data assessment. By estimating opportunity, implementation, transaction and institutional costs of REDD+ we develop a practical and replicable methodological framework to consistently assess REDD+ cost elements. Results Based on historical land use change patterns, current region-specific economic conditions and carbon stocks, project-specific opportunity costs ranged between US$ -7.8 and 28.8 tCO2 for deforestation and forest degradation drivers such as agriculture, fuel wood production, unsustainable timber extraction and pasture expansion. The mean opportunity costs for the three projects ranged between US$ 10.1 – 12.5 tCO2. Implementation costs comprised between 89% and 95% of total project costs (excluding opportunity costs), ranging between US$ 4.5 - 12.2 tCO2 for a period of 30 years. Transaction costs for measurement, reporting, verification (MRV), and other carbon market related compliance costs comprised a minor share, between US$ 0.21 - 1.46 tCO2. Similarly, the institutional costs comprised around 1% of total REDD+ costs, in a range of US$ 0.06 – 0.11 tCO2. Conclusions The use of bottom-up approaches to estimate REDD+ economics by considering regional variations in economic conditions and carbon stocks has been shown to be an appropriate approach to provide policy and decision-makers robust economic information on REDD+. The assessment of opportunity costs is a crucial first step to

  9. Pressurized Pepsin Digestion in Proteomics: An Automatable Alternative to Trypsin for Integrated Top-down Bottom-up Proteomics

    Energy Technology Data Exchange (ETDEWEB)

    Lopez-Ferrer, Daniel; Petritis, Konstantinos; Robinson, Errol W.; Hixson, Kim K.; Tian, Zhixin; Lee, Jung Hwa; Lee, Sang-Won; Tolic, Nikola; Weitz, Karl K.; Belov, Mikhail E.; Smith, Richard D.; Pasa-Tolic, Ljiljana

    2011-02-01

Integrated top-down bottom-up proteomics combined with online digestion has great potential to improve the characterization of protein isoforms in biological systems and is amenable to high-throughput proteomics experiments. Bottom-up proteomics ultimately provides the peptide sequences derived from the tandem MS analyses of peptides after the proteome has been digested. Top-down proteomics conversely entails the MS analyses of intact proteins for more effective characterization of genetic variations and/or post-translational modifications (PTMs). Herein, we describe recent efforts towards efficient integration of bottom-up and top-down LC-MS based proteomic strategies. Since most proteomic platforms (i.e. LC systems) operate in acidic environments, we exploited the compatibility of pepsin (i.e. the enzyme’s natural acidic activity) for the integration of bottom-up and top-down proteomics. Pressure-enhanced pepsin digestions were successfully performed and characterized with several standard proteins in either an offline mode using a Barocycler or an online mode using a modified high-pressure LC system referred to as a fast online digestion system (FOLDS). FOLDS was tested using pepsin and a whole microbial proteome, and the results compared against traditional trypsin digestions on the same platform. Additionally, FOLDS was integrated with a RePlay configuration to demonstrate an ultra-rapid integrated bottom-up top-down proteomic strategy employing a standard mixture of proteins and a monkeypox virus proteome.

  10. A Novel GBM Saliency Detection Model Using Multi-Channel MRI.

    Directory of Open Access Journals (Sweden)

    Subhashis Banerjee

Full Text Available The automatic computerized detection of regions of interest (ROI) is an important step in the process of medical image processing and analysis. The reasons are many, and include an increasing amount of available medical imaging data, the existence of inter-observer and inter-scanner variability, and the need to improve the accuracy of automatic detection in order to assist doctors in diagnosing faster and on time. A novel algorithm, based on visual saliency, is developed here for the identification of tumor regions from MR images of the brain. The GBM saliency detection model is designed by taking cues from the concept of visual saliency in natural scenes. A visually salient region is typically rare in an image, and contains highly discriminating information, with attention getting immediately focused upon it. Although color is typically considered the most important feature in a bottom-up saliency detection model, we circumvent this issue in the inherently gray scale MR framework. We develop a novel pseudo-coloring scheme, based on the three MRI sequences, viz. FLAIR, T2 and T1C (contrast enhanced with Gadolinium). A bottom-up strategy, based on a new pseudo-color distance and spatial distance between image patches, is defined for highlighting the salient regions in the image. This multi-channel representation of the image and saliency detection model help in automatically and quickly isolating the tumor region, for subsequent delineation, as is necessary in medical diagnosis. The effectiveness of the proposed model is evaluated on MRI of 80 subjects from the BRATS database in terms of the saliency map values. Using ground truth of the tumor regions for both high- and low-grade gliomas, the results are compared with four highly cited saliency detection models from the literature. In all cases the AUC scores from the ROC analysis are found to be more than 0.999 ± 0.001 over different tumor grades, sizes and positions.
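
The patch-contrast idea behind such models can be illustrated generically: a patch is salient when its (pseudo-)colour differs from many other patches, with nearby patches weighted more heavily. The distance functions and weighting below are a common textbook formulation, not the paper's exact definitions:

```python
# Illustrative patch-based saliency: contrast in (pseudo-)colour space,
# attenuated by spatial distance. A generic sketch, not the GBM model itself.

def patch_saliency(patches):
    """patches: list of (x, y, colour) with colour an (r, g, b) triple."""
    def colour_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b)) ** 0.5
    def spatial_dist(p, q):
        return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5
    saliency = []
    for i, (xi, yi, ci) in enumerate(patches):
        s = sum(colour_dist(ci, cj) / (1.0 + spatial_dist((xi, yi), (xj, yj)))
                for j, (xj, yj, cj) in enumerate(patches) if j != i)
        saliency.append(s)
    return saliency

# A 3x3 grid of grey patches; the centre patch gets a distinct pseudo-colour,
# standing in for a tumour region in the pseudo-colored MR composite
grid = [(x, y, (0.2, 0.2, 0.2)) for y in range(3) for x in range(3)]
grid[4] = (1, 1, (1.0, 0.1, 0.1))
scores = patch_saliency(grid)
```

The rare, distinctly coloured patch receives the highest score, which is the property the model exploits to isolate the tumour region before delineation.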

  11. Bottom-Up Abstract Modelling of Optical Networks-on-Chip: From Physical to Architectural Layer

    Directory of Open Access Journals (Sweden)

    Alberto Parini

    2012-01-01

Full Text Available This work presents a bottom-up abstraction procedure based on the design-flow FDTD + SystemC suitable for the modelling of optical Networks-on-Chip. In this procedure, a complex network is decomposed into elementary switching elements whose input-output behavior is described by means of scattering-parameter models. The parameters of each elementary block are then determined through 2D-FDTD simulation, and the resulting analytical models are exported as functional blocks in the SystemC environment. The inherent modularity and scalability of the S-matrix formalism are preserved inside SystemC, thus allowing the incremental composition and successive characterization of complex topologies typically out of reach for full-vectorial electromagnetic simulators. The consistency of the outlined approach is verified, in the first instance, by performing a SystemC analysis of a switch with four input and four output ports and making a comparison with the results of 2D-FDTD simulations of the same device. Finally, a further complex network encompassing 160 microrings is investigated, the losses over each routing path are calculated, and the minimum amount of power needed to guarantee an assigned BER is determined. This work is a basic step in the direction of an automatic technology-aware network-level simulation framework capable of assembling complex optical switching fabrics, while at the same time assessing the practical feasibility and effectiveness at the physical/technological level.
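
The modular S-matrix composition the abstract relies on can be shown in miniature: convert each 2-port block's scattering matrix S into a transfer (chain) matrix T, multiply along the chain, and convert back. This is the standard textbook construction under the usual wave conventions, not the paper's FDTD + SystemC code:

```python
# Cascade of two 2-port scattering matrices via transfer matrices.
# S = [[s11, s12], [s21, s22]]; requires s21 != 0 for each block.

def s_to_t(S):
    (s11, s12), (s21, s22) = S
    return [[1 / s21, -s22 / s21],
            [s11 / s21, s12 - s11 * s22 / s21]]

def t_to_s(T):
    (t11, t12), (t21, t22) = T
    return [[t21 / t11, t22 - t21 * t12 / t11],
            [1 / t11, -t12 / t11]]

def matmul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def cascade(SA, SB):
    """Scattering matrix of two 2-port elements connected in series."""
    return t_to_s(matmul2(s_to_t(SA), s_to_t(SB)))

# Two matched elements that each pass half the field amplitude (|S21| = 0.5)
half = [[0.0, 0.5], [0.5, 0.0]]
combined = cascade(half, half)   # expect |S21| = 0.25, still matched
```

Because the chain matrices simply multiply, arbitrarily long paths can be characterized block by block, which is the scalability property the SystemC framework exploits for topologies out of reach of full-vectorial solvers.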

  12. Bottom-Up, Wet Chemical Technique for the Continuous Synthesis of Inorganic Nanoparticles

    Directory of Open Access Journals (Sweden)

    Annika Betke

    2014-01-01

Full Text Available Continuous wet chemical approaches are important for the large-scale production of inorganic nanoparticles. Here we describe a bottom-up, wet chemical method applying a microjet reactor. This technique allows the separation between nucleation and growth in a continuous reactor environment. Zinc oxide (ZnO), magnetite (Fe3O4) and brushite (CaHPO4·2H2O) particles with a small particle size distribution can be obtained continuously by using the rapid mixing of two precursor solutions and the fast removal of the nuclei from the reaction environment. The final particles were characterized by FT-IR, TGA, DLS, XRD and SEM techniques. Systematic studies on the influence of the different process parameters, such as flow rate and process temperature, show that the particle size can be controlled. Zinc oxide was obtained with particle sizes between 44 nm and 102 nm. The obtained magnetite particles have particle sizes in the range of 46 nm to 132 nm. Brushite behaves differently; the obtained particles were shaped like small plates with edge lengths between 100 nm and 500 nm.

  13. Two Paths to Transforming Markets through Public Sector Energy Efficiency: Bottom Up versus Top Down

    Energy Technology Data Exchange (ETDEWEB)

    Van Wie McGrory, Laura; Coleman, Philip; Fridley, David; Harris, Jeffrey; Villasenor Franco, Edgar

    2006-05-10

    The evolution of government purchasing initiatives in Mexico and China, part of the PEPS (Promoting an Energy-efficient Public Sector) program, demonstrates the need for flexibility in designing energy-efficiency strategies in the public sector. Several years of pursuing a top-down (federally led) strategy in Mexico produced few results, and it was not until the program was restructured in 2004 to focus on municipal-level purchasing that the program gained momentum. Today, a new partnership with the Mexican federal government is leading to an intergovernmental initiative with strong support at the federal level. By contrast, the PEPS purchasing initiative in China was successfully initiated and led at the central government level with strategic support from international experts. The very different success trajectories in these two countries provide valuable lessons for designing country-specific public sector energy-efficiency initiatives. Enabling conditions for any successful public sector purchasing initiative include the existence of mandatory energy-efficiency performance standards, an effective energy-efficiency endorsement labeling program, an immediate need for energy conservation, a simple pilot phase (focusing on a limited number of strategically chosen products), and specialized technical assistance. Top-down purchasing programs are likely to be more successful where there is high-level political endorsement and a national procurement law in place, supported by a network of trained purchasers. Bottom-up (municipally led) purchasing programs require that municipalities have the authority to set their own purchasing policies, and also benefit from existing networks of cities, supported by motivated municipal leaders and trained purchasing officials.

  14. Spinodal nanotechnology as a new class of bottom-up nanotechnology and its applications

    Science.gov (United States)

    Katayama-Yoshida, Hiroshi; Fukushima, Tetsuya; Kizaki, Hidetoshi; Oshitani, Masamune; Sato, Kazunori

    2010-03-01

    We discuss the nano-materials design of spinodal nano-decomposition as a new class of bottom-up nanotechnology by combining ab initio calculations and kinetic Monte Carlo simulations. We include all the complexity in the fabrication process of spinodal nano-decomposition (Konbu- and Dairiseki-phase) into advanced materials design with inhomogeneous materials. We compare the theoretical predictions with available experiments, such as (i) semiconductor nano-spintronics in dilute magnetic semiconductors, (ii) colossal thermoelectric-power responses of spincaloritronics, (iii) self-repaired nano-catalysis in La(Fe,Pd)O3, (iv) high-efficiency solar cells, and (v) high-efficiency light-emitting diodes and lasers. (1) K. Sato, et al., Reviews of Modern Physics, in press (2009). (2) H. Katayama-Yoshida, et al., Handbook of Spintronic Semiconductors (Pan Stanford Pub.), pp. 1-79 (2009). (3) H. Katayama-Yoshida, et al., Semiconductors and Semimetals 82, 433 (2008). (4) H. Katayama-Yoshida, et al., Jpn. J. Appl. Phys. 46, L777 (2007). (5) H. Kizaki, et al., Applied Physics Express 1, 104001 (2008).

  15. The Early Anthropogenic Hypothesis: Top-Down and Bottom-up Evidence

    Science.gov (United States)

    Ruddiman, W. F.

    2014-12-01

    Two complementary lines of evidence support the early anthropogenic hypothesis. Top-down evidence comes from comparing Holocene greenhouse-gas trends with those during equivalent intervals of previous interglaciations. The increases in CO2 and CH4 during the late Holocene are anomalous compared to the decreasing trends in a stacked average of previous interglaciations, thereby supporting an anthropogenic origin. During interglacial stage 19, the closest Holocene insolation analog, CO2 fell to 245 ppm by the time equivalent to the present, in contrast to the observed pre-industrial rise to 280-285 ppm. The 245-ppm level measured in stage 19 falls at the top of the natural range predicted by the original anthropogenic hypothesis of Ruddiman (2003). Bottom-up evidence comes from a growing list of archeological and other compilations showing major early anthropogenic transformations of Earth's surface. Key examples include: efforts by Dorian Fuller and colleagues mapping the spread of irrigated rice agriculture across southern Asia and its effects on CH4 emissions prior to the industrial era; an additional effort by Fuller showing the spread of methane-emitting domesticated livestock across Asia and Africa (coincident with the spread of fertile crescent livestock across Europe); historical compilations by Jed Kaplan and colleagues documenting very high early per-capita forest clearance in Europe, thus underpinning simulations of extensive pre-industrial clearance and large CO2 emissions; and wide-ranging studies by Erle Ellis and colleagues of early anthropogenic land transformations in China and elsewhere.

  16. A "bottom up" governance framework for developing Australia's marine Spatial Data Infrastructure (SDI)

    Directory of Open Access Journals (Sweden)

    K T Finney

    2007-07-01

    Spatial Data Infrastructures (SDIs) have been developing in some countries for over 10 years but still suffer from having a relatively small installed base. Most SDIs will soon converge around a service-oriented architecture (SOA) using IT standards promulgated primarily by the Open Geospatial Consortium (OGC) and ISO Technical Committee 211. There are very few examples of these types of architected SDIs in action, and as a result little detailed information exists on suitable governance models. This paper discusses the governance issues that are posed by SOA-based SDIs, particularly those issues surrounding standards and services management, with reference to an Australian marine case study and the general literature. A generalised governance framework is then postulated using an idealised use case model which is applicable for "bottom-up," community-based initiatives. This model incorporates guiding principles and motivational and self-regulation instruments that are characteristically found in successful open source development activities. It is argued that harnessing an open development model, using a voluntary workforce, could rapidly increase the size of the SDI installed base and, importantly, defray infrastructure build costs.

  17. A Bottom-Up Approach for Automatically Grouping Sensor Data Layers by their Observed Property

    Directory of Open Access Journals (Sweden)

    Steve H.L. Liang

    2013-01-01

    The Sensor Web is a growing phenomenon where an increasing number of sensors are collecting data in the physical world, to be made available over the Internet. To help realize the Sensor Web, the Open Geospatial Consortium (OGC) has developed open standards to standardize the communication protocols for sharing sensor data. Spatial Data Infrastructures (SDIs) are systems that have been developed to access, process, and visualize geospatial data from heterogeneous sources, and SDIs can be designed specifically for the Sensor Web. However, there are problems with interoperability associated with a lack of standardized naming, even with data collected using the same open standard. The objective of this research is to automatically group similar sensor data layers based on the phenomenon they measure. Our methodology is based on a unique bottom-up approach that uses text processing, approximate string matching, and semantic string matching of data layers. We use WordNet as a lexical database to compute word-pair similarities and derive a set-based dissimilarity function using those scores. Two approaches are taken to group data layers: a mapping is defined between all the data layers, and clustering is performed to group similar data layers. We evaluate the results of our methodology.
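    The set-based dissimilarity step described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the WordNet word-pair score is replaced by a simple character-ratio stand-in (`difflib.SequenceMatcher`), and the layer names are invented.

    ```python
    from difflib import SequenceMatcher

    def word_similarity(a: str, b: str) -> float:
        """Stand-in for a WordNet-based word-pair similarity score in [0, 1]."""
        return SequenceMatcher(None, a.lower(), b.lower()).ratio()

    def layer_dissimilarity(layer_a: str, layer_b: str) -> float:
        """Set-based dissimilarity between two data-layer names.

        Each layer name is tokenized into a set of words; every word is matched
        to its best-scoring counterpart in the other set, and the averaged best
        scores are converted into a dissimilarity (0 = identical vocabulary).
        """
        words_a, words_b = set(layer_a.split()), set(layer_b.split())

        def directed(src, dst):
            return sum(max(word_similarity(w, v) for v in dst) for w in src) / len(src)

        similarity = 0.5 * (directed(words_a, words_b) + directed(words_b, words_a))
        return 1.0 - similarity

    # Layers measuring the same phenomenon score a lower dissimilarity:
    d_same = layer_dissimilarity("air temperature sensor", "air temp sensor")
    d_diff = layer_dissimilarity("air temperature sensor", "water salinity probe")
    assert d_same < d_diff
    ```

    Swapping `word_similarity` for a WordNet-based measure (e.g., Wu-Palmer similarity via a lexical database) would recover the semantic matching the record describes; the set-based structure stays the same.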

  18. Preparation of hydrocortisone nanosuspension through a bottom-up nanoprecipitation technique using microfluidic reactors.

    Science.gov (United States)

    Ali, Hany S M; York, Peter; Blagden, Nicholas

    2009-06-22

    In this work, the possibility of bottom-up creation of a relatively stable aqueous hydrocortisone nanosuspension using microfluidic reactors was examined. The first part of the work involved a study of the parameters of the microfluidic precipitation process that affect the size of generated drug particles. These parameters included flow rates of drug solution and antisolvent, microfluidic channel diameters, microreactor inlet angles and drug concentrations. The experimental results revealed that hydrocortisone nano-sized dispersions in the range of 80-450 nm were obtained and the mean particle size could be changed by modifying the experimental parameters and design of microreactors. The second part of the work studied the possibility of preparing a hydrocortisone nanosuspension using microfluidic reactors. The nano-sized particles generated from a microreactor were rapidly introduced into an aqueous solution of stabilizers stirred at high speed with a propeller mixer. A tangential flow filtration system was then used to concentrate the prepared nanosuspension. The nanosuspension produced was then characterized using photon correlation spectroscopy (PCS), zeta potential measurement, transmission electron microscopy (TEM), differential scanning calorimetry (DSC) and X-ray analysis. Results showed that a narrow sized nanosuspension composed of amorphous spherical particles with a mean particle size of 500 ± 64 nm, a polydispersity index of 0.21 ± 0.026 and a zeta potential of −18 ± 2.84 mV was obtained. Physical stability studies showed that the hydrocortisone nanosuspension remained homogeneous with a slight increase in mean particle size and polydispersity index over a 3-month period.

  19. Programmable chemical reaction networks: emulating regulatory functions in living cells using a bottom-up approach.

    Science.gov (United States)

    van Roekel, Hendrik W H; Rosier, Bas J H M; Meijer, Lenny H H; Hilbers, Peter A J; Markvoort, Albert J; Huck, Wilhelm T S; de Greef, Tom F A

    2015-11-07

    Living cells are able to produce a wide variety of biological responses when subjected to biochemical stimuli. It has become apparent that these biological responses are regulated by complex chemical reaction networks (CRNs). Unravelling the function of these circuits is a key topic of both systems biology and synthetic biology. Recent progress at the interface of chemistry and biology together with the realisation that current experimental tools are insufficient to quantitatively understand the molecular logic of pathways inside living cells has triggered renewed interest in the bottom-up development of CRNs. This builds upon earlier work of physical chemists who extensively studied inorganic CRNs and showed how a system of chemical reactions can give rise to complex spatiotemporal responses such as oscillations and pattern formation. Using purified biochemical components, in vitro synthetic biologists have started to engineer simplified model systems with the goal of mimicking biological responses of intracellular circuits. Emulation and reconstruction of system-level properties of intracellular networks using simplified circuits are able to reveal key design principles and molecular programs that underlie the biological function of interest. In this Tutorial Review, we present an accessible overview of this emerging field starting with key studies on inorganic CRNs followed by a discussion of recent work involving purified biochemical components. Finally, we review recent work showing the versatility of programmable biochemical reaction networks (BRNs) in analytical and diagnostic applications.

  20. Ursgal, Universal Python Module Combining Common Bottom-Up Proteomics Tools for Large-Scale Analysis.

    Science.gov (United States)

    Kremer, Lukas P M; Leufken, Johannes; Oyunchimeg, Purevdulam; Schulze, Stefan; Fufezan, Christian

    2016-03-04

    Proteomics data integration has become a broad field with a variety of programs offering innovative algorithms to analyze increasing amounts of data. Unfortunately, this software diversity leads to many problems as soon as the data is analyzed using more than one algorithm for the same task. Although it was shown that the combination of multiple peptide identification algorithms yields more robust results, it is only recently that unified approaches are emerging; however, workflows that, for example, aim to optimize search parameters or that employ cascaded-style searches can only be made accessible if data analysis becomes not only unified but also, most importantly, scriptable. Here we introduce Ursgal, a Python interface to many commonly used bottom-up proteomics tools and to additional auxiliary programs. Complex workflows can thus be composed in the Python scripting language using a few lines of code. Ursgal is easily extensible, and we have made several database search engines (X!Tandem, OMSSA, MS-GF+, Myrimatch, MS Amanda), statistical postprocessing algorithms (qvality, Percolator), and one algorithm that combines statistically postprocessed outputs from multiple search engines ("combined FDR") accessible as an interface in Python. Furthermore, we have implemented a new algorithm ("combined PEP") that combines multiple search engines employing elements of "combined FDR", PeptideShaker, and Bayes' theorem.

  1. Sustainability and Uncertainty: Bottom-Up and Top-Down Approaches

    Directory of Open Access Journals (Sweden)

    K. Klint Jensen

    2010-04-01

    The widely used concept of sustainability is seldom precisely defined, and its clarification involves making up one's mind about a range of difficult questions. One line of research (bottom-up) takes sustaining a system over time as its starting point and then infers prescriptions from this requirement. Another line (top-down) takes as its starting point an economic interpretation of the Brundtland Commission's suggestion that the present generation's need-satisfaction should not compromise the need-satisfaction of future generations. It then measures sustainability at the level of society and infers prescriptions from this requirement. These two approaches may conflict, and in this conflict the top-down approach has the upper hand, ethically speaking. However, the implicit goal in the top-down approach of justice between generations needs to be refined in several dimensions. But even given a clarified ethical goal, disagreements can arise. At present we do not know what substitutions will be possible in the future. This uncertainty clearly affects the prescriptions that follow from the measure of sustainability. Consequently, decisions about how to make future agriculture sustainable are decisions under uncertainty. There might be different judgments on likelihoods; but even given some set of probabilities, there might be disagreement on the right level of precaution in the face of the uncertainty.

  2. Rational design of modular circuits for gene transcription: A test of the bottom-up approach

    Directory of Open Access Journals (Sweden)

    Giordano Emanuele

    2010-11-01

    Background: Most synthetic circuits developed so far have been designed by an ad hoc approach, using a small number of components (i.e., LacI, TetR) and a trial-and-error strategy. We are at the point where an increasing number of modular, interchangeable and well-characterized components is needed to expand the construction of synthetic devices and to allow a rational approach to the design. Results: We used interchangeable modular biological parts to create a set of novel synthetic devices for controlling gene transcription, and we developed a mathematical model of the modular circuits. Model parameters were identified by experimental measurements from a subset of modular combinations. The model revealed an unexpected feature of the lactose repressor system, i.e., a residual binding affinity for the operator site by induced lactose repressor molecules. Once this residual affinity was taken into account, the model properly reproduced the experimental data from the training set. The parameters identified in the training set allowed the prediction of the behavior of networks not included in the identification procedure. Conclusions: This study provides new quantitative evidence that the use of independent and well-characterized biological parts together with mathematical modeling, what is called a bottom-up approach to the construction of gene networks, allows the design of new and different devices re-using the same modular parts.
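    The residual-affinity effect this record describes can be illustrated with a toy steady-state expression for a repressible promoter. The functional form, parameter names, and numbers below are illustrative assumptions, not the paper's identified model:

    ```python
    def transcription_rate(k_max, r_total, inducer_frac, kd, residual=0.1):
        """Steady-state transcription from a repressible promoter.

        r_total      -- total repressor concentration (same units as kd)
        inducer_frac -- fraction of repressor bound by inducer (0..1)
        residual     -- fraction of operator-binding affinity retained by
                        *induced* repressor (0 recovers textbook behaviour)
        """
        free_r = r_total * (1.0 - inducer_frac)        # uninduced repressor
        induced_r = r_total * inducer_frac * residual  # still binds weakly
        return k_max / (1.0 + (free_r + induced_r) / kd)

    # Even at full induction, a 10% residual affinity keeps expression
    # measurably below the repressor-free maximum:
    full_induction = transcription_rate(k_max=1.0, r_total=10.0, inducer_frac=1.0, kd=1.0)
    no_repressor = transcription_rate(k_max=1.0, r_total=0.0, inducer_frac=1.0, kd=1.0)
    assert full_induction < no_repressor
    ```

    This is the qualitative point of the record: only a model term of this kind lets the predicted induced expression fall below the unrepressed maximum, as the training-set data required.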

  3. Top-down and bottom-up analysis of commercial enoxaparins.

    Science.gov (United States)

    Liu, Xinyue; St Ange, Kalib; Lin, Lei; Zhang, Fuming; Chi, Lianli; Linhardt, Robert J

    2017-01-13

    A strategy for the comprehensive analysis of low molecular weight (LMW) heparins is described that relies on an integrated top-down and bottom-up approach. Liquid chromatography-mass spectrometry, an essential component of this approach, is rapid, robust, and amenable to automated processing and interpretation. Nuclear magnetic resonance spectroscopy provides complementary top-down information on the chirality of the uronic acid residues comprising a low molecular weight heparin. Using our integrated approach, four different low molecular weight heparins prepared from porcine heparin through chemical β-eliminative cleavage were comprehensively analyzed: Lovenox™ and Clexane™, the innovator versions of enoxaparin marketed in the US and Europe, respectively, and two generic enoxaparins, from Sandoz and Teva. The results, supported by analysis of variance (ANOVA), show remarkable similarities between different versions of the product and good lot-to-lot consistency of each product, while also detecting subtle differences that may result from differences in their manufacturing processes or in the source (or parent) porcine heparin from which each product is prepared.

  4. A bottom-up innovation model: Museu de Favela (MUF)

    Directory of Open Access Journals (Sweden)

    Natália Nakano

    2013-12-01

    This article aims to present, describe and discuss the innovation model of the first open-air territorial museum conceived in a favela in Rio de Janeiro, the Museu de Favela (MUF). It introduces the concept of the favela and distinguishes the traditional museum from ecomuseums in order to contextualize the universe of the MUF. It discusses the concept of a collection in an open-air territorial museum and how curatorial work is carried out in this context, as well as the types of interaction possible with the diverse individuals served by a museum such as the MUF. It also discusses the role of this new museological typology in society, based on the entities created by the bottom-up innovation carried out by the MUF initiative within the new museology of action. It concludes with considerations on the shift of focus in the role played by the MUF as an agent of social and cultural development.

  5. A 'bottom-up' approach to aetiological research in autism spectrum disorders

    Directory of Open Access Journals (Sweden)

    Lisa Marie Unwin

    2013-09-01

    Autism Spectrum Disorders (ASD) are currently diagnosed in the presence of impairments in social interaction and communication, and a restricted range of activities and interests. However, there is considerable variability in the behaviours of different individuals with an ASD diagnosis. The heterogeneity spans the entire range of IQ and language abilities, as well as other behavioural, communicative and social functions. While any psychiatric condition is likely to incorporate a degree of heterogeneity, the variability in the nature and severity of behaviours observed in ASD is thought to exceed that of other disorders. The current paper aims to provide a model for future research into ASD subgroups. In doing so, we examined whether two proposed risk factors, low birth weight (LBW) and in-utero exposure to selective serotonin reuptake inhibitors (SSRIs), are associated with greater behavioural homogeneity. Using data from the Western Australian Autism Biological Registry, this study found that LBW and maternal SSRI use during pregnancy were associated with greater sleep disturbances and a greater number of gastrointestinal complaints in children with ASD, respectively. The findings from this 'proof of principle' paper provide support for this 'bottom-up' approach as a feasible method for creating homogeneous groups.

  6. Visionmaker NYC: A bottom-up approach to finding shared socioeconomic pathways in New York City

    Science.gov (United States)

    Sanderson, E. W.; Fisher, K.; Giampieri, M.; Barr, J.; Meixler, M.; Allred, S. B.; Bunting-Howarth, K. E.; DuBois, B.; Parris, A. S.

    2015-12-01

    Visionmaker NYC is a free, public, participatory, bottom-up web application to develop and share climate mitigation and adaptation strategies for New York City neighborhoods. The goal is to develop shared socioeconomic pathways by allowing a broad swath of community members, from schoolchildren to architects and developers to the general public, to input their concepts for a desired future. Visions are composed of climate scenarios, lifestyle choices, and ecosystem arrangements, where ecosystems are broadly defined to include built ecosystems (e.g., apartment buildings, single family homes), transportation infrastructure (e.g., highways, connector roads, sidewalks), and natural land cover types (e.g., wetlands, forests, estuary). Metrics of water flows, carbon cycling, biodiversity patterns, and population are estimated for the user's vision, for the same neighborhood today, and for that neighborhood as it existed in the pre-development state, based on the Welikia Project (welikia.org). Users can keep visions private, share them with self-defined groups of other users, or distribute them publicly. Users can also propose "challenges", specific desired states of metrics for specific parts of the city, and others can post visions in response. Visionmaker contributes by combining scenario planning, scientific modelling, and social media to create new, wide-open possibilities for discussion, collaboration, and imagination regarding future shared socioeconomic pathways.

  7. A Computational Strategy to Analyze Label-Free Temporal Bottom-up Proteomics Data

    Energy Technology Data Exchange (ETDEWEB)

    Du, Xiuxia; Callister, Stephen J.; Manes, Nathan P.; Adkins, Joshua N.; Alexandridis, Roxana A.; Zeng, Xiaohua; Roh, Jung Hyeob; Smith, William E.; Donohue, Timothy J.; Kaplan, Samuel; Smith, Richard D.; Lipton, Mary S.

    2008-07-01

    Motivation: Biological systems are in a continual state of flux, which necessitates an understanding of the dynamic nature of protein abundances. The study of protein abundance dynamics has become feasible with recent improvements in mass spectrometry-based quantitative proteomics. However, a number of challenges still remain related to how best to extract biological information from dynamic proteomics data; for example, challenges related to extraneous variability, missing abundance values, and the identification of significant temporal patterns. Results: This article describes a strategy that addresses the aforementioned issues for the analysis of temporal bottom-up proteomics data. The core strategy for the data analysis algorithms and subsequent data interpretation was formulated to take advantage of the temporal properties of the data. The analysis procedure presented herein was applied to data from a Rhodobacter sphaeroides 2.4.1 time-course study. The results were in close agreement with existing knowledge about R. sphaeroides, therefore demonstrating the utility of this analytical strategy.
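    One way to picture the kind of preprocessing such a strategy needs (handling missing abundance values, then normalizing so temporal patterns can be compared across proteins) is the toy pipeline below. The imputation rule, the data, and the function name are illustrative assumptions, not the authors' algorithm:

    ```python
    import math

    def analyze_profiles(profiles):
        """Toy pipeline for temporal abundance profiles (None = missing value):
        impute missing points from the protein's own mean, then z-score each
        profile so temporal *shape*, not absolute abundance, drives comparison."""
        out = {}
        for protein, values in profiles.items():
            observed = [v for v in values if v is not None]
            mean = sum(observed) / len(observed)
            filled = [mean if v is None else v for v in values]
            sd = math.sqrt(sum((v - mean) ** 2 for v in filled) / len(filled)) or 1.0
            out[protein] = [(v - mean) / sd for v in filled]
        return out

    profiles = {
        "protA": [100.0, 120.0, None, 160.0],  # rising abundance, one missing point
        "protB": [50.0, 40.0, 30.0, 20.0],     # falling abundance
    }
    z = analyze_profiles(profiles)
    # Rising and falling profiles now have opposite signs at the final time point:
    assert z["protA"][-1] > 0 > z["protB"][-1]
    ```

    On normalized profiles like these, temporal patterns can then be grouped or tested for significance, which is the step the record's strategy focuses on.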

  8. Bottom-up simulations of methane and ethane emissions from global oil and gas systems 1980 to 2012

    Science.gov (United States)

    Höglund-Isaksson, Lena

    2017-02-01

    Existing bottom-up emission inventories of methane from global oil and gas systems do not satisfactorily explain year-on-year variation in atmospheric methane estimated by top-down models. Using a novel bottom-up approach this study quantifies and attributes methane and ethane emissions from global oil and gas production from 1980 to 2012. Country-specific information on associated gas flows from published sources are combined with inter-annual variations in observed flaring of associated gas from satellite images from 1994 to 2010, to arrive at country-specific annual estimates of methane and ethane emissions from flows of associated gas. Results confirm trends from top-down models and indicate considerably higher methane and ethane emissions from oil production than previously shown in bottom-up inventories for this time period.

  9. Bottom-up synthesis of ordered metal/oxide/metal nanodots on substrates for nanoscale resistive switching memory

    Science.gov (United States)

    Han, Un-Bin; Lee, Jang-Sik

    2016-01-01

    The bottom-up approach using self-assembled materials and processes is thought to be a promising solution for next-generation device fabrication, but it is often found not to be feasible for use in real device fabrication. Here, we report a feasible and versatile way to fabricate high-density, nanoscale memory devices by direct bottom-up filling of memory elements. An ordered array of metal/oxide/metal (copper/copper oxide/copper) nanodots was synthesized, with a uniform size and thickness defined by a self-organized nanotemplate mask, by sequential electrochemical deposition (ECD) of each layer. The fabricated memory devices showed bipolar resistive switching behavior, confirmed by conductive atomic force microscopy. This study demonstrates that ECD with bottom-up growth has great potential to fabricate high-density nanoelectronic devices beyond the scaling limit of top-down device fabrication processes. PMID:27157385

  10. The Comparative Effect of Top-down Processing and Bottom-up Processing through TBLT on Extrovert and Introvert EFL

    Directory of Open Access Journals (Sweden)

    Pezhman Nourzad Haradasht

    2013-09-01

    This research seeks to examine the effect of two models of reading comprehension, namely top-down and bottom-up processing, on the reading comprehension of extrovert and introvert EFL learners. To do this, 120 learners out of a total number of 170 intermediate learners being educated at Iran Mehr English Language School were selected, all taking a PET (Preliminary English Test) first for homogenization prior to the study. They also answered the Eysenck Personality Inventory (EPI), which in turn categorized them into two subgroups within each reading model, consisting of introverts and extroverts. All in all, there were four subgroups: 30 introverts and 30 extroverts undergoing the top-down processing treatment, and 30 introverts and 30 extroverts experiencing the bottom-up processing treatment. The aforementioned PET was administered as the post-test of the study after each group was exposed to the treatment for 18 sessions over six weeks. After the instruction finished, the mean scores of all four groups on this post-test were computed and a two-way ANOVA was run to test the four hypotheses raised in this study. The results showed that while learners generally benefitted more from the bottom-up processing setting compared to the top-down processing one, the extrovert group was better off receiving top-down instruction. Furthermore, introverts outperformed extroverts in the bottom-up group; yet between the two personality subgroups in the top-down setting no difference was seen. A predictable pattern of benefitting from teaching procedures could not be drawn for introverts, as in both top-down and bottom-up settings they benefitted more than extroverts. Keywords: reading comprehension, top-down processing, bottom-up processing, extrovert, introvert

  11. Quantifying the uncertainties of a bottom-up emission inventory of anthropogenic atmospheric pollutants in China

    Directory of Open Access Journals (Sweden)

    Y. Zhao

    2010-11-01

    The uncertainties of a national, bottom-up inventory of Chinese emissions of anthropogenic SO2, NOx, and particulate matter (PM) of different size classes and carbonaceous species are comprehensively quantified, for the first time, using Monte Carlo simulation. The inventory is structured by seven dominant sectors: coal-fired electric power, cement, iron and steel, other industry (boiler combustion), other industry (non-combustion processes), transportation, and residential. For each parameter related to emission factors or activity-level calculations, the uncertainties, represented as probability distributions, are either statistically fitted using results of domestic field tests or, when these are lacking, estimated based on foreign or other domestic data. The uncertainties (i.e., 95% confidence intervals around the central estimates) of Chinese emissions of SO2, NOx, total PM, PM10, PM2.5, black carbon (BC), and organic carbon (OC) in 2005 are estimated to be −14%~12%, −10%~36%, −10%~36%, −12%~42%, −16%~52%, −23%~130%, and −37%~117%, respectively. Variations in activity levels (e.g., energy consumption or industrial production) are not the main source of emission uncertainties. Due to narrow classification of source types, large sample sizes, and relatively high data quality, the coal-fired power sector is estimated to have the smallest emission uncertainties for all species except BC and OC. Due to poorer source classifications and a wider range of estimated emission factors, considerable uncertainties in NOx and PM emissions from cement production and boiler combustion in other industries are found. The probability distributions of emission factors for biomass burning, the largest source of BC and OC, are fitted based on very limited domestic field measurements, and special caution should thus be taken in interpreting these emission uncertainties. Although Monte
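    The Monte Carlo propagation itself follows a simple pattern: sample every activity level and emission factor from its probability distribution, recompute the inventory total, and read the 95% confidence interval off the empirical distribution of totals. The sketch below uses invented sector values and plain normal distributions purely for illustration; it is not the paper's inventory:

    ```python
    import random

    random.seed(42)

    # Hypothetical sectors: activity level, its coefficient of variation,
    # emission factor, and its coefficient of variation (all values invented).
    sectors = {
        "power":       (2000.0, 0.05, 8.0, 0.10),
        "cement":      (1000.0, 0.08, 5.0, 0.30),
        "residential": (800.0,  0.10, 3.0, 0.40),
    }

    def sample_total():
        """One Monte Carlo draw of the inventory total: every activity level
        and emission factor is perturbed according to its distribution."""
        total = 0.0
        for act, act_cv, ef, ef_cv in sectors.values():
            a = max(0.0, random.gauss(act, act * act_cv))
            e = max(0.0, random.gauss(ef, ef * ef_cv))
            total += a * e
        return total

    central = sum(act * ef for act, _, ef, _ in sectors.values())
    draws = sorted(sample_total() for _ in range(20000))
    lo, hi = draws[int(0.025 * len(draws))], draws[int(0.975 * len(draws))]
    print(f"central estimate {central:.0f}; "
          f"95% CI {100 * (lo / central - 1):+.0f}% to {100 * (hi / central - 1):+.0f}%")
    ```

    Reporting the 2.5th and 97.5th percentiles relative to the central estimate is what yields asymmetric intervals of the "−14%~12%" form quoted in the abstract.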

  12. Mechanisms underlying the basal forebrain enhancement of top-down and bottom-up attention.

    Science.gov (United States)

    Avery, Michael C; Dutt, Nikil; Krichmar, Jeffrey L

    2014-03-01

    Both attentional signals from frontal cortex and neuromodulatory signals from basal forebrain (BF) have been shown to influence information processing in the primary visual cortex (V1). These two systems exert complementary effects on their targets, including increasing firing rates and decreasing interneuronal correlations. Interestingly, experimental research suggests that the cholinergic system is important for increasing V1's sensitivity to both sensory and attentional information. To see how the BF and top-down attention act together to modulate sensory input, we developed a spiking neural network model of V1 and thalamus that incorporated cholinergic neuromodulation and top-down attention. In our model, activation of the BF had a broad effect that decreased the efficacy of top-down projections and increased the reliance on bottom-up sensory input. In contrast, we demonstrated how local release of acetylcholine in the visual cortex, which was triggered through top-down glutamatergic projections, could enhance top-down attention with high spatial specificity. Our model matched experimental data showing that the BF and top-down attention decrease interneuronal correlations and increase between-trial reliability. We found that decreases in correlations were primarily between excitatory-inhibitory pairs rather than excitatory-excitatory pairs and suggest that excitatory-inhibitory decorrelation is necessary for maintaining low levels of excitatory-excitatory correlations. Increased inhibitory drive via release of acetylcholine in V1 may then act as a buffer, absorbing increases in excitatory-excitatory correlations that occur with attention and BF stimulation. These findings will lead to a better understanding of the mechanisms underlying the BF's interactions with attention signals and influences on correlations.

  13. Bottom-up model of self-organized criticality on networks.

    Science.gov (United States)

    Noël, Pierre-André; Brummitt, Charles D; D'Souza, Raissa M

    2014-01-01

    The Bak-Tang-Wiesenfeld (BTW) sandpile process is an archetypal, stylized model of complex systems with a critical point as an attractor of their dynamics. This phenomenon, called self-organized criticality, appears to occur ubiquitously in both nature and technology. Initially introduced on the two-dimensional lattice, the BTW process has been studied on network structures with great analytical successes in the estimation of macroscopic quantities, such as the exponents of asymptotically power-law distributions. In this article, we take a microscopic perspective and study the inner workings of the process through both numerical and rigorous analysis. Our simulations reveal fundamental flaws in the assumptions of past phenomenological models, the same models that allowed accurate macroscopic predictions; we mathematically justify why universality may explain these past successes. Next, starting from scratch, we obtain microscopic understanding that enables mechanistic models; such models can, for example, distinguish a cascade's area from its size. In the special case of a 3-regular network, we use self-consistency arguments to obtain a zero-parameter mechanistic (bottom-up) approximation that reproduces nontrivial correlations observed in simulations and that allows the study of the BTW process on networks in regimes otherwise prohibitively costly to investigate. We then generalize some of these results to configuration model networks and explain how one could continue the generalization. The numerous tools and methods presented herein are known to enable studying the effects of controlling the BTW process and other self-organizing systems. More broadly, our use of multitype branching processes to capture information bouncing back and forth in a network could inspire analogous models of systems in which consequences spread in a bidirectional fashion.
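    A minimal BTW sandpile on a network can be simulated in a few lines. The sketch below uses a simple 3-regular graph (a ring plus diametric chords) and a per-grain dissipation probability as the sink; both are modelling conveniences for illustration, not the paper's exact setup:

    ```python
    import random
    from collections import deque

    random.seed(1)

    # A simple 3-regular graph: a ring of n nodes plus a chord from each
    # node to the node diametrically opposite (n must be even).
    n = 1000
    neighbors = {i: [(i - 1) % n, (i + 1) % n, (i + n // 2) % n] for i in range(n)}
    load = {i: 0 for i in range(n)}

    def btw_cascade(start, dissipation=0.05):
        """Drop one grain at `start` and relax the pile: a node holding as
        many grains as its degree topples, sending one grain per edge; each
        travelling grain is lost with probability `dissipation`, playing the
        role of the open boundary on the original lattice. Returns the
        cascade size, counted in topplings."""
        load[start] += 1
        size = 0
        queue = deque([start])
        while queue:
            v = queue.popleft()
            deg = len(neighbors[v])
            while load[v] >= deg:
                load[v] -= deg
                size += 1
                for u in neighbors[v]:
                    if random.random() > dissipation:  # grain survives transit
                        load[u] += 1
                        if load[u] == len(neighbors[u]):
                            queue.append(u)
        return size

    # Drive the pile into its self-organized critical regime.
    sizes = [btw_cascade(random.randrange(n)) for _ in range(20000)]
    ```

    Cascade sizes collected this way exhibit the heavy-tailed statistics characteristic of self-organized criticality; counting the distinct nodes toppled instead of the topplings would give a cascade's area, the distinction the record's mechanistic models are able to capture.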

  14. Bottom-up control of geomagnetic secular variation by the Earth's inner core.

    Science.gov (United States)

    Aubert, Julien; Finlay, Christopher C; Fournier, Alexandre

    2013-10-10

    Temporal changes in the Earth's magnetic field, known as geomagnetic secular variation, occur most prominently at low latitudes in the Atlantic hemisphere (that is, from -90 degrees east to 90 degrees east), whereas in the Pacific hemisphere there is comparatively little activity. This is a consequence of the geographical localization of intense, westward drifting, equatorial magnetic flux patches at the core surface. Despite successes in explaining the morphology of the geomagnetic field, numerical models of the geodynamo have so far failed to account systematically for this striking pattern of geomagnetic secular variation. Here we show that it can be reproduced provided that two mechanisms relying on the inner core are jointly considered. First, gravitational coupling aligns the inner core with the mantle, forcing the flow of liquid metal in the outer core into a giant, westward drifting, sheet-like gyre. The resulting shear concentrates azimuthal magnetic flux at low latitudes close to the core-mantle boundary, where it is expelled by core convection and subsequently transported westward. Second, differential inner-core growth, fastest below Indonesia, causes an asymmetric buoyancy release in the outer core which in turn distorts the gyre, forcing it to become eccentric, in agreement with recent core flow inversions. This bottom-up heterogeneous driving of core convection dominates top-down driving from mantle thermal heterogeneities, and localizes magnetic variations in a longitudinal sector centred beneath the Atlantic, where the eccentric gyre reaches the core surface. To match the observed pattern of geomagnetic secular variation, the solid material forming the inner core must now be in a state of differential growth rather than one of growth and melting induced by convective translation.

  15. Do top-down or bottom-up forces determine Stephanitis pyrioides abundance in urban landscapes?

    Science.gov (United States)

    Shrewsbury, Paula M; Raupp, Michael J

    2006-02-01

    This study examined the influence of habitat structural complexity on the collective effects of top-down and bottom-up forces on herbivore abundance in urban landscapes. The persistence and varying complexity of urban landscapes set them apart from ephemeral agroecosystems and natural habitats, where the majority of studies have been conducted. Using surveys and manipulative experiments, we explicitly tested the effect of natural enemies (enemies hypothesis), host plant quality, and herbivore movement on the abundance of the specialist insect herbivore Stephanitis pyrioides in landscapes of varying structural complexity. This herbivore was extremely abundant in simple landscapes and rare in complex ones. Natural enemies were the major force influencing abundance of S. pyrioides across habitat types. Generalist predators, particularly the spider Anyphaena celer, were more abundant in complex landscapes. Predator abundance was related to greater abundance of alternative prey in those landscapes. Stephanitis pyrioides survival was lower in complex habitats when exposed to endemic natural enemy populations. Laboratory feeding trials confirmed that the more abundant predators consumed S. pyrioides. Host plant quality was not a strong force influencing patterns of S. pyrioides abundance. When predators were excluded, adult S. pyrioides survival was greater on azaleas grown in complex habitats, in opposition to the observed pattern of abundance. Similarly, complexity did not affect S. pyrioides immigration and emigration rates. The complexity of urban landscapes affects the strength of top-down forces on herbivorous insect populations by influencing alternative prey and generalist predator abundance. It is possible that habitats can be manipulated to promote the suppressive effects of generalist predators.

  16. Pre-stimulus activity predicts the winner of top-down vs. bottom-up attentional selection.

    Directory of Open Access Journals (Sweden)

    Ali Mazaheri

    Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low contrast target appeared alone, or with a low (i.e., non-salient) or high contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that 1) in the 1-sec pre-stimulus interval, frontal alpha (8-12 Hz) activity was higher on trials where the salient distractor captured attention and the first saccade (bottom-up win); and 2) there was a transient pre-saccadic increase in posterior-parietal alpha (7-8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture.

  17. Pre-stimulus activity predicts the winner of top-down vs. bottom-up attentional selection.

    Science.gov (United States)

    Mazaheri, Ali; DiQuattro, Nicholas E; Bengson, Jesse; Geng, Joy J

    2011-02-28

    Our ability to process visual information is fundamentally limited. This leads to competition between sensory information that is relevant for top-down goals and sensory information that is perceptually salient, but task-irrelevant. The aim of the present study was to identify, from EEG recordings, pre-stimulus and pre-saccadic neural activity that could predict whether top-down or bottom-up processes would win the competition for attention on a trial-by-trial basis. We employed a visual search paradigm in which a lateralized low contrast target appeared alone, or with a low (i.e., non-salient) or high contrast (i.e., salient) distractor. Trials with a salient distractor were of primary interest due to the strong competition between top-down knowledge and bottom-up attentional capture. Our results demonstrated that 1) in the 1-sec pre-stimulus interval, frontal alpha (8-12 Hz) activity was higher on trials where the salient distractor captured attention and the first saccade (bottom-up win); and 2) there was a transient pre-saccadic increase in posterior-parietal alpha (7-8 Hz) activity on trials where the first saccade went to the target (top-down win). We propose that the high frontal alpha reflects a disengagement of attentional control whereas the transient posterior alpha time-locked to the saccade indicates sensory inhibition of the salient distractor and suppression of bottom-up oculomotor capture.

  18. Engineered Micro-Objects as Scaffolding Elements in Cellular Building Blocks for Bottom-Up Tissue Engineering Approaches

    NARCIS (Netherlands)

    Leferink, A.M.; Schipper, D.; Arts, E.; Vrij, E.J.; Rivron, N.C.; Karperien, H.B.J.; Mittmann, K.; Blitterswijk, van C.A.; Moroni, L.; Truckenmuller, R.K.

    2014-01-01

    A material-based bottom-up approach is proposed towards an assembly of cells and engineered micro-objects at the macroscale. We show how the shape, size and wettability of engineered micro-objects play an important role in the behavior of cells on these objects. This approach can, among other applications...

  19. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    DEFF Research Database (Denmark)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai

    2016-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...

  20. Assessing the Gap Between Top-down and Bottom-up Measured Methane Emissions in Indianapolis, IN.

    Science.gov (United States)

    Prasad, K.; Lamb, B. K.; Cambaliza, M. O. L.; Shepson, P. B.; Stirm, B. H.; Salmon, O. E.; Lavoie, T. N.; Lauvaux, T.; Ferrara, T.; Howard, T.; Edburg, S. L.; Whetstone, J. R.

    2014-12-01

    Releases of methane (CH4) from the natural gas supply chain in the United States account for approximately 30% of total US CH4 emissions. However, significant questions remain regarding the accuracy of current emission inventories for methane emissions from natural gas usage. In this paper, we describe results from top-down and bottom-up measurements of methane emissions from the large isolated city of Indianapolis. The top-down results are based on aircraft mass balance and tower-based inverse modeling methods, while the bottom-up results are based on direct component sampling at metering and regulating stations, surface enclosure measurements of surveyed pipeline leaks, and tracer/modeling methods for other urban sources. Mobile mapping of urban methane concentrations was also used to identify significant sources and to show an urban-wide low-level enhancement of methane levels. The residual difference between top-down and bottom-up measured emissions is large and cannot be fully explained by the uncertainties in top-down and bottom-up emission measurements and estimates. Thus, the residual appears to be at least partly attributable to a significant widespread diffusive source. Analyses are included to estimate the size and nature of this diffusive source.

  1. Using classic methods in a networked manner: seeing volunteered spatial information in a bottom-up fashion

    NARCIS (Netherlands)

    Carton, L.J.; Ache, P.M.

    2014-01-01

    Using new social media and ICT infrastructures for self-organization, more and more citizen networks and business sectors organize themselves voluntarily around sustainability themes. The paper traces and evaluates one emerging innovation in such a bottom-up, networked form of sustainable governance...

  2. Evaluating the Resilience of the Bottom-up Method used to Detect and Benchmark the Smartness of University Campuses

    NARCIS (Netherlands)

    Giovannella, Carlo; Andone, Diana; Dascalu, Mihai; Popescu, Elvira; Rehm, Matthias; Mealha, Oscar

    2017-01-01

    A new method to perform a bottom-up extraction and benchmark of the perceived multilevel smartness of complex ecosystems has been recently described and applied to territories and learning ecosystems like university campuses and schools. In this paper we study the resilience of our method...

  3. Citizenship Policy from the Bottom-Up: The Linguistic and Semiotic Landscape of a Naturalization Field Office

    Science.gov (United States)

    Loring, Ariel

    2015-01-01

    This article follows a bottom-up approach to language policy (Ramanathan, 2005; Wodak, 2006) in an analysis of citizenship in policy and practice. It compares representations of citizenship in and around a regional branch of the United States Citizenship and Immigration Services (USCIS), with a focus on citizenship swearing-in ceremonies for…

  4. Vibrotactile target saliency

    NARCIS (Netherlands)

    Toet, A.; Groen, E.l.; Oosterbeek, M.T.J.; Hooge, I.T.C.

    2008-01-01

    We tested the saliency of a single vibrotactile target (T) among 2 to 7 nontargets (N), presented by 8 tactors that were equally distributed over a horizontal band around the torso. Targets and nontargets had different pulse durations, but the same activation period and no onset asynchrony. T-N similarity...

  5. The role of attentional priority and saliency in determining capacity limits in enumeration and visual working memory.

    Directory of Open Access Journals (Sweden)

    David Melcher

    Many common tasks require us to individuate in parallel two or more objects out of a complex scene. Although the mechanisms underlying our abilities to count the number of items, to remember the visual properties of objects, and to make saccadic eye movements towards targets have been studied separately, each of these tasks requires selection of individual objects and shows a capacity limit. Here we show that a common factor -- salience -- determines the capacity limit in the various tasks. We manipulated bottom-up salience (visual contrast) and top-down salience (task relevance) in enumeration and visual memory tasks. As one item became increasingly salient, the subitizing range was reduced and memory performance for all other less-salient items was decreased. Overall, the pattern of results suggests that our abilities to enumerate and remember small groups of stimuli are grounded in an attentional priority or salience map which represents the location of important items.

  6. Reconciling Top-Down and Bottom-Up Estimates of Oil and Gas Methane Emissions in the Barnett Shale

    Science.gov (United States)

    Hamburg, S.

    2015-12-01

    Top-down approaches that use aircraft, tower, or satellite-based measurements of well-mixed air to quantify regional methane emissions have typically estimated higher emissions from the natural gas supply chain when compared to bottom-up inventories. A coordinated research campaign in October 2013 used simultaneous top-down and bottom-up approaches to quantify total and fossil methane emissions in the Barnett Shale region of Texas. Research teams have published individual results including aircraft mass-balance estimates of regional emissions and a bottom-up, 25-county region spatially-resolved inventory. This work synthesizes data from the campaign to directly compare top-down and bottom-up estimates. A new analytical approach uses statistical estimators to integrate facility emission rate distributions from unbiased and targeted high emission site datasets, which more rigorously incorporates the fat-tail of skewed distributions to estimate regional emissions of well pads, compressor stations, and processing plants. The updated spatially-resolved inventory was used to estimate total and fossil methane emissions from spatial domains that match seven individual aircraft mass balance flights. Source apportionment of top-down emissions between fossil and biogenic methane was corroborated with two independent analyses of methane and ethane ratios. Reconciling top-down and bottom-up estimates of fossil methane emissions leads to more accurate assessment of natural gas supply chain emission rates and the relative contribution of high emission sites. These results increase our confidence in our understanding of the climate impacts of natural gas relative to more carbon-intensive fossil fuels and the potential effectiveness of mitigation strategies.
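    The abstract above notes that rigorously incorporating the fat tail of a skewed facility-emission distribution matters when scaling sampled sites up to a regional total. A minimal sketch of that idea, assuming an illustrative lognormal distribution and sample sizes (not the campaign's estimators or data):

```python
import numpy as np

rng = np.random.default_rng(3)
mu, sigma = 0.0, 2.0                    # heavy-tailed lognormal site rates (assumed)
n_sites, n_sampled = 10_000, 200

population = rng.lognormal(mu, sigma, size=n_sites)
sample = rng.choice(population, size=n_sampled, replace=False)

# Naive estimator: scale the sample mean up to all sites. A small sample
# can easily miss the rare high emitters that dominate the total.
naive_total = sample.mean() * n_sites

# Tail-aware estimator: fit lognormal parameters in log space and use the
# analytic mean exp(mu + sigma^2 / 2), which accounts for the fat tail.
log_s = np.log(sample)
fitted_total = np.exp(log_s.mean() + log_s.var() / 2) * n_sites

true_total = population.sum()
print(naive_total / true_total, fitted_total / true_total)
```

    Repeating this over many random samples shows why distribution-based estimators are less sensitive to whether the few largest emitters happen to be sampled.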

  7. A bottom-up approach to urban metabolism: the perspective of BRIDGE

    Science.gov (United States)

    Chrysoulakis, N.; Borrego, C.; San Josè, R.; Grimmond, S. B.; Jones, M. B.; Magliulo, V.; Klostermann, J.; Santamouris, M.

    2011-12-01

    Urban metabolism considers a city as a system and usually distinguishes between energy and material flows as its components. "Metabolic" studies are usually top-down approaches that assess the inputs and outputs of food, water, energy, and pollutants from a city, or that compare the changing metabolic processes of several cities. In contrast, bottom-up approaches are based on quantitative estimates of urban metabolism components at local to regional scales. Such approaches consider urban metabolism as the 3D exchange and transformation of energy and matter between a city and its environment. The city is considered as a system, and the physical flows between this system and its environment are quantitatively estimated. The transformation of landscapes from primarily agricultural and forest uses to urbanized landscapes can greatly modify energy and material exchanges and is, therefore, an important aspect of an urban area. Here we focus on the exchanges and transformation of energy, water, carbon and pollutants. Recent advances in bio-physical sciences have led to new methods and models to estimate local-scale energy, water, carbon and pollutant fluxes. However, there is often poor communication of new knowledge and its implications to end-users, such as planners, architects and engineers. The FP7 Project BRIDGE (SustainaBle uRban plannIng Decision support accountinG for urban mEtabolism) aims at bridging this gap and at illustrating the advantages of considering environmental issues in urban planning. BRIDGE does not perform a complete life cycle analysis or calculate whole-system urban metabolism, but rather focuses on specific metabolism components (energy, water, carbon and pollutants). Its main goal is the development of a Decision Support System (DSS) with the potential to select planning actions which better fit the goal of changing the metabolism of urban systems towards sustainability. BRIDGE evaluates how planning alternatives can modify the physical...

  8. Prediction of visual saliency in video with deep CNNs

    Science.gov (United States)

    Chaabouni, Souad; Benois-Pineau, Jenny; Hadar, Ofer

    2016-09-01

    Prediction of visual saliency in images and video is a highly researched topic. Target applications include quality assessment of multimedia services in a mobile context, video compression techniques, recognition of objects in video streams, etc. In the framework of mobile and egocentric perspectives, visual saliency models cannot be founded only on bottom-up features, as suggested by feature integration theory. Nor is the central-bias hypothesis respected. In this case, the top-down component of human visual attention becomes prevalent. Visual saliency can be predicted on the basis of seen data. Deep Convolutional Neural Networks (CNNs) have proven to be a powerful tool for prediction of salient areas in stills. In our work we also focus on the sensitivity of the human visual system to residual motion in a video. A deep CNN architecture is designed in which we incorporate input primary maps as color values of pixels and the magnitude of local residual motion. Complementary contrast maps allow for a slight increase of accuracy compared to the use of color and residual motion only. The experiments show that the choice of input features for the deep CNN depends on the visual task: for interest in dynamic content, the 4K model with residual motion is more efficient, while for object recognition in egocentric video the purely spatial input is more appropriate.

  9. Evidence of bottom-up control of marine productivity in the Mediterranean Sea during the last 50 years

    Science.gov (United States)

    Macías, Diego; Garcia-Gorriz, Elisa; Piroddi, Chiara; Stips, Adolf

    2014-05-01

    The temporal dynamics of biogeochemical variables derived from a coupled 3D hydrodynamic-biogeochemical model of the entire Mediterranean Sea are evaluated over the last 50 years (1960-2010). Realistic atmospheric forcing and river discharge are used to force the dynamics of the coupled model system. The time evolution of primary and secondary production in the entire basin is assessed against available independent data on fisheries yields and catches per unit effort for the same time period. Concordant patterns are found in the time-series of all biological variables (from the model and from fisheries statistics), with low values at the beginning of the series, a later increase with maximum values reached at the end of the 1990s, and a subsequent stabilization or small decline. Spectral analysis of the annual biological time-series reveals coincident low-frequency signals in all of them; the first, more energetic signal peaks in 2000, while the second (less energetic) presents maximum values around 1982. Almost identical low-frequency signals are found in the nutrient loads of the main rivers of the basin and in the integrated (0-100 meters) mean nutrient concentrations in the marine ecosystem. Nitrate concentration shows an increasing trend up to 1998 with a later stabilization or slight decline to present-day values. This nitrate evolution seems to be driving the first low-frequency signal found in the biological time-series. Phosphate, on the other hand, shows maximum concentrations around 1982 and a subsequent sharp decline. This nutrient seems to be responsible for the second low-frequency signal observed in the biological time-series. Our analysis shows that the control of marine productivity (from plankton to fish) in the Mediterranean basin seems to be principally mediated through bottom-up processes that can be traced back to the characteristics of riverine discharges. Other types of control could not be excluded from our analysis...
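    The kind of spectral analysis described above, extracting dominant low-frequency signals from a short annual time-series, can be sketched with a discrete Fourier transform. The series below is synthetic: the two planted periods, amplitudes and noise level are illustrative assumptions, not the Mediterranean data.

```python
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1960, 2011)            # 51 annual samples, as in 1960-2010
n = years.size

# Synthetic annual series: two low-frequency sinusoids plus noise.
signal = (1.0 * np.sin(2 * np.pi * (years - 1960) / 50.0)
          + 0.5 * np.sin(2 * np.pi * (years - 1960) / 25.0)
          + 0.1 * rng.standard_normal(n))

# One-sided power spectrum of the de-meaned series.
freqs = np.fft.rfftfreq(n, d=1.0)        # cycles per year
power = np.abs(np.fft.rfft(signal - signal.mean()))**2

# Two strongest non-zero frequencies, reported as periods in years.
top2 = freqs[1:][np.argsort(power[1:])[::-1][:2]]
periods = sorted(1.0 / top2)
print(f"dominant periods: {periods[0]:.1f} and {periods[1]:.1f} years")
```

    With only 51 annual samples the frequency resolution is coarse (bins of 1/51 cycles per year), which is why such analyses can only resolve a handful of distinct low-frequency components.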

  10. Prefrontal/accumbal catecholamine system processes high motivational salience.

    Directory of Open Access Journals (Sweden)

    Stefano Puglisi-Allegra

    2012-06-01

    Motivational salience regulates the strength of goal seeking, the amount of risk taken, and the energy invested, from mild to extreme. Highly motivational experiences promote highly persistent memories. Although this phenomenon is adaptive in normal conditions, experiences with extremely high levels of motivational salience can promote the development of memories that can be re-experienced intrusively for a long time, resulting in maladaptive outcomes. Neural mechanisms mediating motivational salience attribution are, therefore, very important for individual and species survival and for well-being. However, these neural mechanisms could be implicated in the attribution of abnormal motivational salience to different stimuli, leading to maladaptive compulsive seeking or avoidance. We have offered the first evidence that prefrontal cortical norepinephrine transmission is a necessary condition for motivational salience attribution to highly salient stimuli, through modulation of dopamine in the nucleus accumbens, a brain area involved in all motivated behaviors. Moreover, we have shown that the prefrontal-accumbal catecholamine system determines approach or avoidance responses to both reward- and aversion-related stimuli only when the salience of the unconditioned stimulus is high enough to induce sustained catecholamine activation, thus affirming that this system processes motivational salience attribution selectively for highly salient events.

  11. Developing a comprehensive and comparative questionnaire for measuring personality in chimpanzees using a simultaneous top-down/bottom-up design.

    Science.gov (United States)

    Freeman, Hani D; Brosnan, Sarah F; Hopper, Lydia M; Lambeth, Susan P; Schapiro, Steven J; Gosling, Samuel D

    2013-10-01

    One effective method for measuring personality in primates is to use personality trait ratings to distill the experience of people familiar with the individual animals. Previous rating instruments were created using either top-down or bottom-up approaches. Top-down approaches, which essentially adapt instruments originally designed for use with another species, can unfortunately lead to the inclusion of traits irrelevant to chimpanzees or fail to include all relevant aspects of chimpanzee personality. Conversely, because bottom-up approaches derive traits specifically for chimpanzees, their unique items may impede comparisons with findings in other studies and other species. To address the limitations of each approach, we developed a new personality rating scale using a combined top-down/bottom-up design. Seventeen raters rated 99 chimpanzees on the new 41-item scale, with all but one item being rated reliably. Principal components analysis, using both varimax and direct oblimin rotations, identified six broad factors. Strong evidence was found for five of the factors (Reactivity/Undependability, Dominance, Openness, Extraversion, and Agreeableness). A sixth factor (Methodical) was offered provisionally until more data are collected. We validated the factors against behavioral data collected independently on the chimpanzees. The five factors demonstrated good evidence for convergent and predictive validity, thereby underscoring the robustness of the factors. Our combined top-down/bottom-up approach provides the most extensive data to date to support the universal existence of these five personality factors in chimpanzees. This framework, which facilitates cross-species comparisons, can also play a vital role in understanding the evolution of personality and can assist with husbandry and welfare efforts.
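    The factor-extraction step named above (principal components analysis followed by a varimax rotation toward simple structure) can be sketched as follows. The ratings are synthetic, with two planted traits; the item count, subject count and loadings are illustrative assumptions, not the published chimpanzee data.

```python
import numpy as np

def varimax(loadings, n_iter=100, tol=1e-6):
    """Orthogonal varimax rotation of a factor-loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    variance = 0.0
    for _ in range(n_iter):
        rotated = loadings @ rotation
        # Gradient of the varimax criterion, projected back via SVD.
        grad = loadings.T @ (rotated**3
                             - rotated @ np.diag((rotated**2).sum(axis=0)) / p)
        u, s, vt = np.linalg.svd(grad)
        rotation = u @ vt
        if s.sum() < variance * (1 + tol):
            break
        variance = s.sum()
    return loadings @ rotation

# Synthetic ratings: 99 subjects, 8 items, two planted traits
# (items 0-3 load on trait 1, items 4-7 on trait 2).
rng = np.random.default_rng(2)
factors = rng.standard_normal((99, 2))
true_loadings = np.zeros((8, 2))
true_loadings[:4, 0] = 0.9
true_loadings[4:, 1] = 0.9
ratings = factors @ true_loadings.T + 0.3 * rng.standard_normal((99, 8))

# PCA via SVD of the centred ratings; keep two components.
centred = ratings - ratings.mean(axis=0)
u, s, vt = np.linalg.svd(centred, full_matrices=False)
raw = vt[:2].T * s[:2] / np.sqrt(len(ratings) - 1)   # unrotated loadings
rotated = varimax(raw)

# After rotation each item should load mainly on a single factor.
dominant = np.argmax(np.abs(rotated), axis=1)
print(dominant)
```

    An oblimin rotation, also used in the study, additionally allows the factors to correlate; the orthogonal varimax case shown here is the simpler of the two.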

  12. Benchmarking energy scenarios for China: perspectives from top-down, economic and bottom-up, technical modelling

    DEFF Research Database (Denmark)

    This study uses a soft-linking methodology to harmonise two complex global top-down and bottom-up models with a regional China focus. The baseline follows the GDP and demographic trends of the Shared Socio-economic Pathways (SSP2) scenario, down-scaled for China, while the carbon tax scenario... ...-specific modelling results further. These new sub-regional China features can now be used for a more detailed analysis of China's regional developments in a global context.

  13. A comprehensive estimate of recent carbon sinks in China using both top-down and bottom-up approaches

    Science.gov (United States)

    Jiang, Fei; Chen, Jing; Zhou, Linxi; Ju, Weimin; Zhang, Huifang; Machida, Toshinobu; Ciais, Philippe; Peters, Wouter; Wang, Hengmao; Chen, Baozhang; Liu, Linxin; Zhang, Chunhua; Matsueda, Hidekazu; Sawa, Yousuke

    2016-04-01

    Atmospheric inversions use measurements of atmospheric CO2 gradients to constrain regional surface fluxes. Current inversions indicate a net terrestrial CO2 sink in China between 0.16 and 0.35 PgC/yr. The uncertainty of these estimates is as large as the mean because the atmospheric network historically contained only one high-altitude station in China. Here, we revisit the calculation of the terrestrial CO2 flux in China, excluding emissions from fossil fuel burning and cement production, by using two inversions with three new CO2 monitoring stations in China as well as aircraft observations over Asia. We estimate a net terrestrial CO2 uptake of 0.39-0.51 PgC/yr with a mean of 0.45 PgC/yr in 2006-2009. After considering the lateral transport of carbon in air and water and international trade, the annual mean carbon sink is adjusted to 0.35 PgC/yr. To evaluate this top-down estimate, we constructed an independent bottom-up estimate based on ecosystem data, giving a net land sink of 0.33 PgC/yr. This demonstrates closure between the top-down and bottom-up estimates. Both top-down and bottom-up estimates give a higher carbon sink than previous estimates made for the 1980s and 1990s, suggesting a trend towards increased uptake by land ecosystems in China.

  14. Sponge communities on Caribbean coral reefs are structured by factors that are top-down, not bottom-up.

    Directory of Open Access Journals (Sweden)

    Joseph R Pawlik

    Caribbean coral reefs have been transformed in the past few decades with the demise of reef-building corals, and sponges are now the dominant habitat-forming organisms on most reefs. Competing hypotheses propose that sponge communities are controlled primarily by predatory fishes (top-down) or by the availability of picoplankton to suspension-feeding sponges (bottom-up). We tested these hypotheses on Conch Reef, off Key Largo, Florida, by placing sponges inside and outside predator-excluding cages at sites with less and more planktonic food availability (15 m vs. 30 m depth). There was no evidence of a bottom-up effect on the growth of any of 5 sponge species, and 2 of 5 species grew more when caged at the shallow site with lower food abundance. There was, however, a strong effect of predation by fishes on sponge species that lacked chemical defenses. Sponges with chemical defenses grew slower than undefended species, demonstrating a resource trade-off between growth and the production of secondary metabolites. Surveys of the benthic community on Conch Reef similarly did not support a bottom-up effect, with higher sponge cover at the shallower depth. We conclude that the structure of sponge communities on Caribbean coral reefs is primarily controlled top-down, and predict that removal of sponge predators by overfishing will shift communities toward faster-growing, undefended species that better compete for space with threatened reef-building corals.

  15. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, Nicholas J. H.; Noid, W. G., E-mail: wnoid@chem.psu.edu [Department of Chemistry, The Pennsylvania State University, University Park, Pennsylvania 16802 (United States)

    2015-12-28

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed “pressure-matching” variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the “simplicity” of the model.

  16. Top-down vs. bottom-up control on vegetation composition in a tidal marsh depends on scale.

    Science.gov (United States)

    Elschot, Kelly; Vermeulen, Anke; Vandenbruwaene, Wouter; Bakker, Jan P; Bouma, Tjeerd J; Stahl, Julia; Castelijns, Henk; Temmerman, Stijn

    2017-01-01

    The relative impact of top-down control by herbivores and bottom-up control by environmental conditions on vegetation is a subject of debate in ecology. In this study, we hypothesize that top-down control by goose foraging and bottom-up control by sediment accretion on vegetation composition within an ecosystem can co-occur but operate at different spatial and temporal scales. We used a highly dynamic marsh system with a large population of the Greylag goose (Anser anser) to investigate the potential importance of spatial and temporal scales on these processes. At the local scale, Greylag geese grub for below-ground storage organs of the vegetation, thereby creating bare patches of a few square metres within the marsh vegetation. In our study, such activities by Greylag geese allowed them to exert top-down control by setting back vegetation succession. However, we found that the patches reverted back to the initial vegetation type within 12 years. At large spatial (i.e. several square kilometres) and temporal scales (i.e. decades), high rates of sediment accretion surpassing the rate of local sea-level rise were found to drive long-term vegetation succession and increased cover of several climax vegetation types. In summary, we conclude that the vegetation composition within this tidal marsh was primarily controlled by the bottom-up factor of sediment accretion, which operates at large spatial as well as temporal scales. Top-down control exerted by herbivores was found to be a secondary process and operated at much smaller spatial and temporal scales.

  17. Bottom-up coarse-grained models that accurately describe the structure, pressure, and compressibility of molecular liquids

    Science.gov (United States)

    Dunn, Nicholas J. H.; Noid, W. G.

    2015-12-01

    The present work investigates the capability of bottom-up coarse-graining (CG) methods for accurately modeling both structural and thermodynamic properties of all-atom (AA) models for molecular liquids. In particular, we consider 1, 2, and 3-site CG models for heptane, as well as 1 and 3-site CG models for toluene. For each model, we employ the multiscale coarse-graining method to determine interaction potentials that optimally approximate the configuration dependence of the many-body potential of mean force (PMF). We employ a previously developed "pressure-matching" variational principle to determine a volume-dependent contribution to the potential, U_V(V), that approximates the volume-dependence of the PMF. We demonstrate that the resulting CG models describe AA density fluctuations with qualitative, but not quantitative, accuracy. Accordingly, we develop a self-consistent approach for further optimizing U_V, such that the CG models accurately reproduce the equilibrium density, compressibility, and average pressure of the AA models, although the CG models still significantly underestimate the atomic pressure fluctuations. Additionally, by comparing this array of models that accurately describe the structure and thermodynamic pressure of heptane and toluene at a range of different resolutions, we investigate the impact of bottom-up coarse-graining upon thermodynamic properties. In particular, we demonstrate that U_V accounts for the reduced cohesion in the CG models. Finally, we observe that bottom-up coarse-graining introduces subtle correlations between the resolution, the cohesive energy density, and the "simplicity" of the model.
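    The self-consistent optimization of the volume-dependent potential described above can be illustrated with a toy fixed-point iteration: adjust the strength of the volume term until the CG average pressure matches the all-atom target. The "CG pressure response" below is an assumed closed form standing in for an actual coarse-grained simulation; this is a sketch of the idea, not the paper's method.

```python
def cg_pressure(c):
    """Toy CG average pressure as a function of the U_V strength c.

    A linear volume term U_V(V) = c * V shifts the virial pressure by
    -c; the small quadratic term mimics the coupling between U_V and
    the sampled volume distribution in a real simulation (assumed).
    """
    return 120.0 - c - 0.05 * c**2

def match_pressure(p_target, c0=0.0, lr=0.5, tol=1e-8, max_iter=1000):
    """Fixed-point iteration: adjust c until the CG pressure matches."""
    c = c0
    for _ in range(max_iter):
        err = cg_pressure(c) - p_target
        if abs(err) < tol:
            break
        c += lr * err
    return c

# Tune the volume potential so the toy CG model reproduces P_AA = 100.
c_star = match_pressure(p_target=100.0)
print(round(cg_pressure(c_star), 6))
```

    In the actual workflow each evaluation of the "pressure response" would be a CG simulation with the updated U_V, which is why a self-consistent loop rather than a one-shot fit is needed.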

  18. A regression approach for estimation of anthropogenic heat flux based on a bottom-up air pollutant emission database

    Science.gov (United States)

    Lee, Sang-Hyun; McKeen, Stuart A.; Sailor, David J.

    2014-10-01

    A statistical regression method is presented for estimating hourly anthropogenic heat flux (AHF) from an anthropogenic pollutant emission inventory, for use in mesoscale meteorological and air-quality modeling. Based on bottom-up AHF estimated from detailed energy consumption data and anthropogenic pollutant emissions of carbon monoxide (CO) and nitrogen oxides (NOx) in the US National Emission Inventory year 2005 (NEI-2005), a robust regression relation between the AHF and the pollutant emissions is obtained for Houston. This relation is a combination of two power functions (Y = aX^b) relating CO and NOx emissions to AHF, giving a coefficient of determination (R2) of 0.72. The AHF for Houston derived from the regression relation has high temporal (R = 0.91) and spatial (R = 0.83) correlations with the bottom-up AHF. Hourly AHF for the whole US in summer is estimated by applying the regression relation to the NEI-2005 summer pollutant emissions at a high spatial resolution of 4 km. The summer daily mean AHF ranges from 10 to 40 W m-2 on a 4 × 4 km2 grid scale, with maximum heat fluxes of 50-140 W m-2 for major US cities. The AHFs derived from the regression relations between the bottom-up AHF and either CO or NOx emissions alone show a small difference of less than 5% (4.7 W m-2) in city-scale daily mean AHF, and similar R2 statistics, compared to results from their combination. Thus, emissions of either species can be used to estimate AHF in US cities. An hourly AHF inventory at 4 × 4 km2 resolution over the entire US, based on the combined regression, is derived and made publicly available for use in mesoscale numerical modeling.
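    A power-function regression of the form Y = aX^b, as used above, is conventionally fit by least squares in log-log space, since log Y = log a + b log X is linear. The sketch below uses synthetic data; the coefficients, noise level and emission ranges are illustrative assumptions, not NEI-2005 values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "bottom-up" training data: emission rate X, heat flux Y,
# following Y = a * X**b with multiplicative lognormal noise (assumed).
a_true, b_true = 2.5, 0.8
X = rng.uniform(1.0, 100.0, size=200)
Y = a_true * X**b_true * rng.lognormal(0.0, 0.1, size=200)

# Fit log Y = log a + b * log X by ordinary least squares.
b_hat, log_a_hat = np.polyfit(np.log(X), np.log(Y), deg=1)
a_hat = np.exp(log_a_hat)

# Coefficient of determination of the fit in log space.
resid = np.log(Y) - (log_a_hat + b_hat * np.log(X))
r2 = 1.0 - resid.var() / np.log(Y).var()

print(f"a = {a_hat:.2f}, b = {b_hat:.2f}, R^2 = {r2:.2f}")
```

    A combined CO/NOx relation as in the paper would simply sum two such fitted power functions before evaluating the regression against the bottom-up AHF.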

  19. Bottom-Up Nano-heteroepitaxy of Wafer-Scale Semipolar GaN on (001) Si

    KAUST Repository

    Hus, Jui Wei

    2015-07-15

    Semipolar {101¯1} InGaN quantum wells are grown on (001) Si substrates with an Al-free buffer and wafer-scale uniformity. The novel structure is achieved by bottom-up nano-heteroepitaxy employing self-organized ZnO nanorods as the strain-relieving layer. This ZnO nanostructure circumvents the problems encountered with the conventional AlN-based buffer, which grows slowly and contaminates the growth chamber. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. FROM A COMPARISON OF "TOP-DOWN" AND "BOTTOM-UP" APPROACHES TO THE APPLICATION OF THE "INTERACTIVE" APPROACH

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    This paper introduces three models of reading. It then analyzes the data gathered from an experiment comparing the "top-down" and the "bottom-up" approaches and accordingly draws the conclusion that the former approach is helpful in improving students' reading comprehension, while the latter is useful in developing their writing skills as well as their knowledge of vocabulary and sentence structure. Finally, the paper presents a procedure for the application of the "interactive approach", which proves to be productive in teaching college English intensive reading.

  1. [Diversity in thalamic relay neurons: evidence for "bottom-up" and "top-down" information flow in thalamocortical pathways].

    Science.gov (United States)

    Clascá, Francisco; Rubio-Garrido, Pablo; Galazo, María J; Porrero, César

    2009-01-01

    Thalamocortical (TC) pathways are still mainly understood as the gateway for ascending sensory-motor information into the cortex. However, it is now clear that a great many TC cells are involved in interactions between cortical areas via the thalamus. We review recent data, including our own, which demonstrate the generalized presence in rodent thalamus of two major TC cell types characterized, among other features, by their axon development, arborization and laminar targeting in the cortex. Such duality may allow inputs from thalamus to access cortical circuits via "bottom-up"-wired axon arbors or via "top-down"-wired axon arbors.

  2. Preference for Well-Balanced Saliency in Details Cropped from Photographs.

    Science.gov (United States)

    Abeln, Jonas; Fresz, Leonie; Amirshahi, Seyed Ali; McManus, I Chris; Koch, Michael; Kreysa, Helene; Redies, Christoph

    2015-01-01

    Photographic cropping is the act of selecting part of a photograph to enhance its aesthetic appearance or visual impact. It is common practice with both professional (expert) and amateur (non-expert) photographers. In a psychometric study, McManus et al. (2011b) showed that participants cropped photographs confidently and reliably. Experts tended to select details from a wider range of positions than non-experts, but other croppers did not generally prefer details that were selected by experts. It remained unclear, however, on what grounds participants selected particular details from a photograph while avoiding other details. One of the factors contributing to cropping decision may be visual saliency. Indeed, various saliency-based computer algorithms are available for the automatic cropping of photographs. However, careful experimental studies on the relation between saliency and cropping are lacking to date. In the present study, we re-analyzed the data from the studies by McManus et al. (2011a,b), focusing on statistical image properties. We calculated saliency-based measures for details selected and details avoided during cropping. As expected, we found that selected details contain regions of higher saliency than avoided details on average. Moreover, the saliency center-of-mass was closer to the geometrical center in selected details than in avoided details. Results were confirmed in an eye tracking study with the same dataset of images. Interestingly, the observed regularities in cropping behavior were less pronounced for experts than for non-experts. In summary, our results suggest that, during cropping, participants tend to select salient regions and place them in an image composition that is well-balanced with respect to the distribution of saliency. Our study contributes to the knowledge of perceptual bottom-up features that are germane to aesthetic decisions in photography and their variability in non-experts and experts.

  3. Preference for Well-Balanced Saliency in Details Cropped from Photographs

    Directory of Open Access Journals (Sweden)

    Jonas Abeln

    2016-01-01

    Photographic cropping is the act of selecting part of a photograph to enhance its aesthetic appearance or visual impact. It is common practice with both professional (expert) and amateur (non-expert) photographers. In a psychometric study, McManus et al. (2011b) showed that participants cropped photographs confidently and reliably. Experts tended to select details from a wider range of positions than non-experts, but other croppers did not generally prefer details that were selected by experts. It remained unclear, however, on what grounds participants selected particular details from a photograph while avoiding other details. One of the factors contributing to cropping decision may be visual saliency. Indeed, various saliency-based computer algorithms are available for the automatic cropping of photographs. However, careful experimental studies on the relation between saliency and cropping are lacking to date. In the present study, we re-analyzed the data from the studies by McManus et al. (2011a,b), focusing on statistical image properties. We calculated saliency-based measures for details selected and details avoided during cropping. As expected, we found that selected details contain regions of higher saliency than avoided details on average. Moreover, the saliency center-of-mass was closer to the geometrical center in selected details than in avoided details. Results were confirmed in an eye tracking study with the same dataset of images. Interestingly, the observed regularities in cropping behavior were less pronounced for experts than for non-experts. In summary, our results suggest that, during cropping, participants tend to select salient regions and place them in an image composition that is well-balanced with respect to the distribution of saliency. Our study contributes to the knowledge of perceptual bottom-up features that are germane to aesthetic decisions in photography and their variability in non-experts and experts.

  4. Smart city planning from a bottom-up approach: local communities' intervention for a smarter urban environment

    Science.gov (United States)

    Alverti, Maroula; Hadjimitsis, Diofantos; Kyriakidis, Phaedon; Serraos, Konstantinos

    2016-08-01

    The aim of this paper is to explore the concept of "smart" cities from the perspective of inclusive community participation and Geographical Information Systems (GIS). The concept of a smart city is critically analyzed, focusing on the power/knowledge implications of a "bottom-up" approach in planning and on how GIS could encourage community participation in smart urban planning. The paper commences with a literature review of what it means for cities to be "smart". It draws supporting definitions and critical insights into smart cities with respect to the built environment and the human factor. The second part of the paper analyzes the "bottom-up" approach in urban planning, focusing on community participation and reviewing its forms and expressions through good practices from European cities. The third part of the paper includes a debate on how smart city policies and community participation interact and influence each other. Finally, the paper closes with a discussion of the insights that were found and offers recommendations on how this debate could be addressed by Information and Communication Technologies, and GIS in particular.

  5. Optimal Environmental Conditions and Anomalous Ecosystem Responses: Constraining Bottom-up Controls of Phytoplankton Biomass in the California Current System

    Science.gov (United States)

    Jacox, Michael G.; Hazen, Elliott L.; Bograd, Steven J.

    2016-06-01

    In Eastern Boundary Current systems, wind-driven upwelling drives nutrient-rich water to the ocean surface, making these regions among the most productive on Earth. Regulation of productivity by changing wind and/or nutrient conditions can dramatically impact ecosystem functioning, though the mechanisms are not well understood beyond broad-scale relationships. Here, we explore bottom-up controls during the California Current System (CCS) upwelling season by quantifying the dependence of phytoplankton biomass (as indicated by satellite chlorophyll estimates) on two key environmental parameters: subsurface nitrate concentration and surface wind stress. In general, moderate winds and high nitrate concentrations yield maximal biomass near shore, while offshore biomass is positively correlated with subsurface nitrate concentration. However, due to nonlinear interactions between the influences of wind and nitrate, bottom-up control of phytoplankton cannot be described by either one alone, nor by a combined metric such as nitrate flux. We quantify optimal environmental conditions for phytoplankton, defined as the wind/nitrate space that maximizes chlorophyll concentration, and present a framework for evaluating ecosystem change relative to environmental drivers. The utility of this framework is demonstrated by (i) elucidating anomalous CCS responses in 1998-1999, 2002, and 2005, and (ii) providing a basis for assessing potential biological impacts of projected climate change.
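    The idea of locating optimal conditions in wind/nitrate space can be illustrated with a toy response surface. The Gaussian-times-saturating functional form below is an assumption for illustration only; in the study, the surface is estimated empirically from satellite chlorophyll:

```python
import numpy as np

# Illustrative response surface: chlorophyll as a nonlinear function of
# surface wind stress and subsurface nitrate. The functional form is an
# assumption; the real dependence is derived from satellite estimates.
wind = np.linspace(0.0, 0.2, 101)        # wind stress (N m-2)
nitrate = np.linspace(0.0, 30.0, 101)    # nitrate (umol kg-1)
W, N = np.meshgrid(wind, nitrate)
chl = np.exp(-((W - 0.08) / 0.04) ** 2) * (N / (N + 5.0))

# "Optimal environmental conditions": the wind/nitrate pair that
# maximizes the chlorophyll response over the grid.
i, j = np.unravel_index(np.argmax(chl), chl.shape)
print(round(float(wind[j]), 2), float(nitrate[i]))
```

    Anomalous years can then be characterized by how far the observed wind/nitrate conditions (or the realized chlorophyll) fall from this optimum.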

  6. Climate-induced changes in bottom-up and top-down processes independently alter a marine ecosystem.

    Science.gov (United States)

    Jochum, Malte; Schneider, Florian D; Crowe, Tasman P; Brose, Ulrich; O'Gorman, Eoin J

    2012-11-05

    Climate change has complex structural impacts on coastal ecosystems. Global warming is linked to a widespread decline in body size, whereas increased flood frequency can amplify nutrient enrichment through enhanced run-off. Altered population body-size structure represents a disruption in top-down control, whereas eutrophication embodies a change in bottom-up forcing. These processes are typically studied in isolation and little is known about their potential interactive effects. Here, we present the results of an in situ experiment examining the combined effects of top-down and bottom-up forces on the structure of a coastal marine community. Reduced average body mass of the top predator (the shore crab, Carcinus maenas) and nutrient enrichment combined additively to alter mean community body mass. Nutrient enrichment increased species richness and overall density of organisms. Reduced top-predator body mass increased community biomass. Additionally, we found evidence for an allometrically induced trophic cascade. Here, the reduction in top-predator body mass enabled greater biomass of intermediate fish predators within the mesocosms. This, in turn, suppressed key micrograzers, which led to an overall increase in microalgal biomass. This response highlights the possibility for climate-induced trophic cascades, driven by altered size structure of populations, rather than species extinction.

  7. Source attribution of methane emissions from global oil and gas production: results of bottom-up simulations over three decades

    Science.gov (United States)

    Höglund-Isaksson, Lena

    2016-04-01

    Existing bottom-up emission inventories of historical methane and ethane emissions from global oil and gas systems do not adequately explain year-on-year variations estimated by top-down models from atmospheric measurements. This paper develops a bottom-up methodology which allows for country- and year-specific source attribution of methane and ethane emissions from global oil and natural gas production for the period 1980 to 2012. The analysis rests on country-specific simulations of associated gas flows, which are converted into methane and ethane emissions. The associated gas flows are constructed from country-specific information on oil and gas production and associated gas generation and recovery, coupled with generic assumptions to bridge regional information gaps on the fractions of unrecovered associated gas that are vented instead of flared. Summing emissions from associated gas flows with global estimates of emissions from unintended leakage and natural gas transmission and distribution, the resulting global emissions of methane and ethane from oil and gas systems are reasonably consistent with corresponding estimates from top-down models. The analysis also reveals that the fall of the Soviet Union in 1990 had a significant impact on methane and ethane emissions from global oil and gas systems.

  8. The value of using top-down and bottom-up approaches for building trust and transparency in biobanking.

    Science.gov (United States)

    Meslin, Eric M

    2010-01-01

    With the domestic and international proliferation of biobanks and their associated connections to health information databases, scholarly attention has been turning from the ethical issues arising from the construction of biobanks to the ethical issues that emerge in their operation and management. Calls for greater transparency in governance structures, coupled with stern reminders of the value of maintaining public trust, are seen as critical components in the success of these resources. Two different approaches have been adopted for addressing these types of ethical issues: the first is a 'top-down' approach, which focuses on developing policy, procedures, regulations and guidelines to aid decision-makers. The second is a 'bottom-up' approach, which begins with those who are most affected by the issues and attempts to inductively develop consensus recommendations and policy. While both approaches have merit, I argue that more work needs to be done on 'bottom-up' strategies if trust and transparency are to be more than mere slogans. Using two case examples from Indiana, the paper summarizes data from a set of surveys we recently conducted that address issues arising from biobanks and provide some insight into issues associated with trust and transparency.

  9. The drastic outcomes from voting alliances in three-party bottom-up democratic voting (1990 → 2013)

    CERN Document Server

    Galam, Serge

    2013-01-01

    The drastic effect of local alliances in three-party competition is investigated in democratic hierarchical bottom-up voting. The results are obtained analytically using a model which extends a sociophysics frame introduced in 1986 [psy] and 1990 [lebo] to study two-party systems and the spontaneous formation of democratic dictatorship. It is worth stressing that the 1990 paper, published in the Journal of Statistical Physics, was the first paper of its kind in that journal. It was shown how a minority in power can preserve its leadership using bottom-up democratic elections. However, such a bias holds only down to some critical value of minimum support. The results were later used to explain the sudden collapse of European communist parties in the nineties. The extension to three-party competition reveals the mechanisms by which a very small minority party can get a substantial representation at higher levels of the hierarchy when the other two competing parties are big. Additional surprising results...
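    The two-party mechanism referenced here (a minority in power preserving its leadership) can be sketched as a deterministic recursion. Groups of four vote at each level and a 2-2 tie is resolved in favour of the ruling party; the group size and the 25% support level are illustrative choices, not parameters from the paper:

```python
def level_up(p):
    """Probability that ruling party A wins a 4-member group when a
    2-2 tie is resolved in its favour (the bias toward the party in power)."""
    q = 1.0 - p
    return p**4 + 4 * p**3 * q + 6 * p**2 * q**2

p = 0.25  # bottom-level support for the ruling minority (illustrative)
for _ in range(10):
    p = level_up(p)

# Above the critical support (~23.2% for this group size and tie rule),
# the minority's representation grows level by level, and after a few
# levels it holds the top of the hierarchy.
print(p > 0.99)
```

    Below the critical value, the same recursion drives the minority's representation to zero, which is the "critical value of minimum support" the abstract refers to.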

  10. Optimal Environmental Conditions and Anomalous Ecosystem Responses: Constraining Bottom-up Controls of Phytoplankton Biomass in the California Current System.

    Science.gov (United States)

    Jacox, Michael G; Hazen, Elliott L; Bograd, Steven J

    2016-06-09

    In Eastern Boundary Current systems, wind-driven upwelling drives nutrient-rich water to the ocean surface, making these regions among the most productive on Earth. Regulation of productivity by changing wind and/or nutrient conditions can dramatically impact ecosystem functioning, though the mechanisms are not well understood beyond broad-scale relationships. Here, we explore bottom-up controls during the California Current System (CCS) upwelling season by quantifying the dependence of phytoplankton biomass (as indicated by satellite chlorophyll estimates) on two key environmental parameters: subsurface nitrate concentration and surface wind stress. In general, moderate winds and high nitrate concentrations yield maximal biomass near shore, while offshore biomass is positively correlated with subsurface nitrate concentration. However, due to nonlinear interactions between the influences of wind and nitrate, bottom-up control of phytoplankton cannot be described by either one alone, nor by a combined metric such as nitrate flux. We quantify optimal environmental conditions for phytoplankton, defined as the wind/nitrate space that maximizes chlorophyll concentration, and present a framework for evaluating ecosystem change relative to environmental drivers. The utility of this framework is demonstrated by (i) elucidating anomalous CCS responses in 1998-1999, 2002, and 2005, and (ii) providing a basis for assessing potential biological impacts of projected climate change.

  11. Saliency detection for stereoscopic images.

    Science.gov (United States)

    Fang, Yuming; Wang, Junle; Narwaria, Manish; Le Callet, Patrick; Lin, Weisi

    2014-06-01

    Many saliency detection models for 2D images have been proposed for various multimedia processing applications during the past decades. Currently, the emerging applications of stereoscopic display require new saliency detection models for salient region extraction. Different from saliency detection for 2D images, the depth feature has to be taken into account in saliency detection for stereoscopic images. In this paper, we propose a novel stereoscopic saliency detection framework based on the feature contrast of color, luminance, texture, and depth. Four types of features, namely color, luminance, texture, and depth, are extracted from discrete cosine transform coefficients for feature contrast calculation. A Gaussian model of the spatial distance between image patches is adopted for consideration of local and global contrast calculation. Then, a new fusion method is designed to combine the feature maps to obtain the final saliency map for stereoscopic images. In addition, we adopt the center bias factor and human visual acuity, the important characteristics of the human visual system, to enhance the final saliency map for stereoscopic images. Experimental results on eye tracking databases show the superior performance of the proposed model over other existing methods.
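    The core idea of feature contrast weighted by a Gaussian model of spatial distance can be sketched minimally. The patch features, grid, and σ value below are illustrative; the actual model extracts its features from DCT coefficients and adds fusion, center bias, and visual-acuity weighting:

```python
import numpy as np

def patch_saliency(features, positions, sigma=0.2):
    """Saliency of each patch as Gaussian-weighted feature contrast.

    features  : (N, F) array - e.g. color, luminance, texture, depth per patch
    positions : (N, 2) array - normalized (x, y) patch centers
    A patch is salient when it differs from other patches; nearby patches
    are weighted more strongly (Gaussian model of spatial distance).
    """
    diff = np.linalg.norm(features[:, None, :] - features[None, :, :], axis=2)
    dist = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=2)
    weights = np.exp(-dist**2 / (2 * sigma**2))
    np.fill_diagonal(weights, 0.0)
    sal = (weights * diff).sum(axis=1) / weights.sum(axis=1)
    return sal / sal.max()  # normalize to [0, 1]

# Toy example: a 5x5 grid of patches, one outlier patch in the center
xs, ys = np.meshgrid(np.linspace(0, 1, 5), np.linspace(0, 1, 5))
pos = np.stack([xs.ravel(), ys.ravel()], axis=1)
feat = np.zeros((25, 4))   # 4 features: color, luminance, texture, depth
feat[12] = 1.0             # the center patch differs in all four features
s = patch_saliency(feat, pos)
print(int(np.argmax(s)))   # the outlier patch gets the highest saliency
```

    In the full model, one such contrast map per feature would then be fused into a single saliency map and modulated by the center-bias and acuity factors.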

  12. Atomic layer deposition-Sequential self-limiting surface reactions for advanced catalyst "bottom-up" synthesis

    Science.gov (United States)

    Lu, Junling; Elam, Jeffrey W.; Stair, Peter C.

    2016-06-01

    Catalyst synthesis with precise control over the structure of catalytic active sites at the atomic level is of essential importance for the scientific understanding of reaction mechanisms and for the rational design of advanced catalysts with high performance. Such precise control is achievable using atomic layer deposition (ALD). ALD is similar to chemical vapor deposition (CVD), except that the deposition is split into a sequence of two self-limiting surface reactions between gaseous precursor molecules and a substrate. The unique self-limiting feature of ALD allows conformal deposition of catalytic materials on a high-surface-area catalyst support at the atomic level. The deposited catalytic materials can be precisely constructed on the support by varying the number and type of ALD cycles. As an alternative to wet-chemistry-based conventional methods, ALD provides a cycle-by-cycle "bottom-up" approach for nanostructuring supported catalysts with near-atomic precision. In this review, we summarize recent attempts to synthesize supported catalysts with ALD. Nucleation and growth of metals by ALD on oxides and carbon materials for the precise synthesis of supported monometallic catalysts are reviewed. The capability of achieving precise control over the particle size of monometallic nanoparticles by ALD is emphasized. The resulting metal catalysts with high dispersion and uniformity often show comparable or remarkably higher activity than those prepared by conventional methods. For supported bimetallic catalyst synthesis, we summarize the strategies for controlling the deposition of the secondary metal selectively on the primary metal nanoparticle but not on the support, to exclude monometallic formation. Reviewing the surface chemistry and growth behavior of metal ALD on metal surfaces, we demonstrate ways to precisely tune the size, composition and structure of bimetallic nanoparticles. The cycle-by-cycle "bottom-up" construction of bimetallic (or multiple...

  13. The top-down, middle-down, and bottom-up mass spectrometry approaches for characterization of histone variants and their post-translational modifications.

    Science.gov (United States)

    Moradian, Annie; Kalli, Anastasia; Sweredoski, Michael J; Hess, Sonja

    2014-03-01

    Epigenetic regulation of gene expression is, at least in part, mediated by histone modifications. PTMs of histones change chromatin structure and regulate gene transcription, DNA damage repair, and DNA replication. Thus, studying histone variants and their modifications not only elucidates their functional mechanisms in chromatin regulation, but also provides insights into phenotypes and diseases. A challenge in this field is to determine the best approach(es) to identify histone variants and their PTMs using a robust high-throughput analysis. The large number of histone variants and the enormous diversity that can be generated through combinatorial modifications, also known as the histone code, makes identification of histone PTMs a laborious task. MS has been proven to be a powerful tool in this regard. Here, we focus on bottom-up, middle-down, and top-down MS approaches, including CID and electron-capture dissociation/electron-transfer dissociation-based techniques for the characterization of histones and their PTMs. In addition, we discuss advances in chromatographic separation that take advantage of the chemical properties of the specific histone modifications. This review is also unique in its discussion of current bioinformatic strategies for comprehensive histone code analysis.

  14. Methodology to characterize a residential building stock using a bottom-up approach: a case study applied to Belgium

    Directory of Open Access Journals (Sweden)

    Samuel Gendebien

    2014-06-01

    In the last ten years, the development and implementation of measures to mitigate climate change have become of major importance. In Europe, the residential sector accounts for 27% of final energy consumption [1] and therefore contributes significantly to CO2 emissions. Roadmaps towards energy-efficient buildings have been proposed [2]. In such a context, the detailed characterization of residential building stocks in terms of age, type of construction, insulation level, energy vector, and evolution prospects appears to be a useful contribution to assessing the impact of the implementation of energy policies. In this work, a methodology to develop a tree structure characterizing a residential building stock is presented in the frame of a bottom-up approach that aims to model and simulate domestic energy use. The methodology is applied to the Belgian case for the current situation and up to the 2030 horizon. The potential applications of the developed tool are outlined.
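    A tree structure of this kind can be held in a few lines of code. The categories and dwelling counts below are hypothetical placeholders, not the Belgian statistics:

```python
from collections import defaultdict

# Recursive tree: each level splits the stock by one attribute
# (construction period -> building type -> insulation level).
def make_tree():
    return defaultdict(make_tree)

stock = make_tree()
dwellings = [
    # (construction period, building type, insulation level, count)
    ("pre-1945", "detached", "none", 120_000),
    ("pre-1945", "terraced", "none", 310_000),
    ("1971-1990", "detached", "roof-only", 240_000),
    ("post-2006", "apartment", "full", 180_000),
]
for period, btype, insulation, count in dwellings:
    stock[period][btype][insulation] = count

# Each leaf holds a number of dwellings; summing leaves at any node
# gives the size of that segment of the stock.
def total(node):
    if isinstance(node, int):
        return node
    return sum(total(child) for child in node.values())

print(total(stock))
```

    In a bottom-up energy model, each leaf would additionally carry a reference dwelling whose simulated consumption is scaled by the leaf's count.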

  15. Construction of membrane-bound artificial cells using microfluidics: a new frontier in bottom-up synthetic biology.

    Science.gov (United States)

    Elani, Yuval

    2016-06-15

    The quest to construct artificial cells from the bottom-up using simple building blocks has received much attention over recent decades and is one of the grand challenges in synthetic biology. Cell mimics that are encapsulated by lipid membranes are a particularly powerful class of artificial cells due to their biocompatibility and the ability to reconstitute biological machinery within them. One of the key obstacles in the field centres on the following: how can membrane-based artificial cells be generated in a controlled way and in high-throughput? In particular, how can they be constructed to have precisely defined parameters including size, biomolecular composition and spatial organization? Microfluidic generation strategies have proved instrumental in addressing these questions. This article will outline some of the major principles underpinning membrane-based artificial cells and their construction using microfluidics, and will detail some recent landmarks that have been achieved.

  16. Bottom-Up Fabrication of Nanopatterned Polymers on DNA Origami by In Situ Atom-Transfer Radical Polymerization.

    Science.gov (United States)

    Tokura, Yu; Jiang, Yanyan; Welle, Alexander; Stenzel, Martina H; Krzemien, Katarzyna M; Michaelis, Jens; Berger, Rüdiger; Barner-Kowollik, Christopher; Wu, Yuzhou; Weil, Tanja

    2016-05-04

    Bottom-up strategies to fabricate patterned polymers at the nanoscale represent an emerging field in the development of advanced nanodevices, such as biosensors, nanofluidics, and nanophotonics. DNA origami techniques provide access to distinct architectures of various sizes and shapes and present manifold opportunities for functionalization at the nanoscale with the highest precision. Herein, we conduct in situ atom-transfer radical polymerization (ATRP) on DNA origami, yielding differently nanopatterned polymers of various heights. After cross-linking, the grafted polymeric nanostructures can even stably exist in solution without the DNA origami template. This straightforward approach allows for the fabrication of patterned polymers with low nanometer resolution, which provides access to unique DNA-based functional hybrid materials.

  17. Cyclization of the N-Terminal X-Asn-Gly Motif during Sample Preparation for Bottom-Up Proteomics

    DEFF Research Database (Denmark)

    Zhang, Xumin; Højrup, Peter

    2010-01-01

    We herein report a novel -17 Da peptide modification corresponding to an N-terminal cyclization of peptides possessing the N-terminal motif X-Asn-Gly. The cyclization occurs spontaneously during sample preparation for bottom-up proteomics studies. Distinct from the two well-known N-terminal cyclizations, cyclization of N-terminal glutamine and S-carbamoylmethylcysteine, it is dependent on pH instead of [NH(4)(+)]. The data set from our recent study on large-scale N(α)-modified peptides revealed a sequence requirement for the cyclization event similar to the well-known deamidation of Asn to isoAsp and Asp. Detailed analysis using synthetic peptides confirmed that the cyclization forms between the N-terminus and its neighboring Asn residue, and that the reaction shares the same succinimide intermediate with the Asn deamidation event. As a result, we here propose a molecular mechanism for this specific...

  18. Bottom-up fabrication of paper-based microchips by blade coating of cellulose microfibers on a patterned surface.

    Science.gov (United States)

    Gao, Bingbing; Liu, Hong; Gu, Zhongze

    2014-12-23

    We report a method for the bottom-up fabrication of paper-based capillary microchips by the blade coating of cellulose microfibers on a patterned surface. The fabrication process is similar to the paper-making process, in which an aqueous suspension of cellulose microfibers is used as the starting material and is blade-coated onto a polypropylene substrate patterned using an inkjet printer. After water evaporation, the cellulose microfibers form a porous, hydrophilic, paperlike pattern that wicks aqueous solution by capillary action. This method enables simple, fast, inexpensive fabrication of paper-based capillary channels with both width and height down to about 10 μm. Using this method, a capillary microfluidic chip for the colorimetric detection of glucose and total protein is fabricated; the assay requires only 0.30 μL of sample, 240 times less than for paper devices fabricated using photolithography.

  19. Identifying Bottom-Up and Top-Down Components of Attentional Weight by Experimental Analysis and Computational Modeling

    DEFF Research Database (Denmark)

    Nordfang, Maria; Dyrholm, Mads; Bundesen, Claus

    2013-01-01

    The attentional weight of a visual object depends on the contrast of the features of the object to its local surroundings (feature contrast) and the relevance of the features to one's goals (feature relevance). We investigated the dependency in partial report experiments with briefly presented stimuli. Measured by use of Bundesen's (1990) computational theory of visual attention, the attentional weight of a singleton object was nearly proportional to the weight of an otherwise similar nonsingleton object, with a factor of proportionality that increased with the strength of the feature contrast of the singleton. This result is explained by generalizing the weight equation of Bundesen's (1990) theory of visual attention such that the attentional weight of an object becomes a product of a bottom-up (feature contrast) and a top-down (feature relevance) component.
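    The multiplicative generalization of the weight equation can be sketched directly; the numeric contrast and relevance values below are illustrative, not fitted parameters from the experiments:

```python
def attentional_weight(contrast, relevance):
    """w(x) = bottom-up (feature contrast) x top-down (feature relevance):
    a sketch of the multiplicative generalization described in the abstract."""
    return contrast * relevance

def selection_probabilities(weights):
    """Relative attentional weights, as used for selection in partial report."""
    total = sum(weights)
    return [w / total for w in weights]

# A display with one singleton target among otherwise similar objects:
# targets have high feature relevance, distractors low; the singleton has
# elevated feature contrast relative to its local surroundings.
objects = [
    attentional_weight(contrast=3.0, relevance=1.0),  # singleton target
    attentional_weight(contrast=1.0, relevance=1.0),  # nonsingleton target
    attentional_weight(contrast=1.0, relevance=0.2),  # distractor
    attentional_weight(contrast=1.0, relevance=0.2),  # distractor
]
probs = selection_probabilities(objects)

# The singleton's weight is proportional to the nonsingleton target's,
# scaled by its feature-contrast factor.
print(objects[0] / objects[1])
```

    This reproduces the reported proportionality: the singleton/nonsingleton weight ratio equals the bottom-up contrast factor, independently of the shared top-down relevance.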

  20. Mass Spectrometry Applied to Bottom-Up Proteomics: Entering the High-Throughput Era for Hypothesis Testing

    Science.gov (United States)

    Gillet, Ludovic C.; Leitner, Alexander; Aebersold, Ruedi

    2016-06-01

    Proteins constitute a key class of molecular components that perform essential biochemical reactions in living cells. Whether the aim is to extensively characterize a given protein or to perform high-throughput qualitative and quantitative analysis of the proteome content of a sample, liquid chromatography coupled to tandem mass spectrometry has become the technology of choice. In this review, we summarize the current state of mass spectrometry applied to bottom-up proteomics, the approach that focuses on analyzing peptides obtained from proteolytic digestion of proteins. With the recent advances in instrumentation and methodology, we show that the field is moving away from providing qualitative identification of long lists of proteins to delivering highly consistent and accurate quantification values for large numbers of proteins across large numbers of samples. We believe that this shift will have a profound impact for the field of proteomics and life science research in general.

  1. The faith of a physicist reflections of a bottom-up thinker : the Gifford lectures for 1993-4

    CERN Document Server

    Polkinghorne, John C

    1994-01-01

    Is it possible to think like a scientist and yet have the faith of a Christian? Although many Westerners might say no, there are also many critically minded individuals who entertain what John Polkinghorne calls a "wistful wariness" toward religion--they feel unable to accept religion on rational grounds yet cannot dismiss it completely. Polkinghorne, both a particle physicist and Anglican priest, here explores just what rational grounds there could be for Christian beliefs, maintaining that the quest for motivated understanding is a concern shared by scientists and religious thinkers alike. Anyone who assumes that religion is based on unquestioning certainties, or that it need not take into account empirical knowledge, will be challenged by Polkinghorne's bottom-up examination of Christian beliefs about events ranging from creation to the resurrection. The author organizes his inquiry around the Nicene Creed, an early statement that continues to summarize Christian beliefs. He applies to each of its tenets ...

  2. Radiographic Evaluation of Children with Febrile Urinary Tract Infection: Bottom-Up, Top-Down, or None of the Above?

    Directory of Open Access Journals (Sweden)

    Michaella M. Prasad

    2012-01-01

    The proper algorithm for the radiographic evaluation of children with febrile urinary tract infection (FUTI) is hotly debated. Three studies are commonly administered: renal-bladder ultrasound (RUS), voiding cystourethrogram (VCUG), and dimercapto-succinic acid (DMSA) scan. However, the order in which these tests are obtained depends on the methodology followed: bottom-up or top-down. Each strategy carries advantages and disadvantages, and some groups now advocate even less of a workup (none of the above) due to the current controversies about treatment when abnormalities are diagnosed. New technology is available and still under investigation, but it may help to clarify the interplay between vesicoureteral reflux, renal scarring, and dysfunctional elimination in the future.

  3. Identifying robust clusters and multi-community nodes by combining top-down and bottom-up approaches to clustering

    CERN Document Server

    Gaiteri, Chris; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Neto, Elias Chaibub; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-01-01

    Biological functions are often realized by groups of interacting molecules or cells. Membership in these groups may overlap when molecules or cells are reused in multiple functions. Traditional clustering methods assign each component to one group. Noisy measurements are common in high-throughput biological datasets. These two limitations reduce our ability to accurately define clusters in biological datasets and to interpret their biological functions. To address these limitations, we designed an algorithm called SpeakEasy, which detects overlapping or non-overlapping communities in biological networks. Input to SpeakEasy can be physical networks, such as molecular interactions, or inferred networks, such as gene coexpression networks. The networks can be directed or undirected, and may contain negative links. SpeakEasy combines traditional bottom-up and top-down approaches to clustering, by creating competition between clusters. Nodes that oscillate between multiple clusters in this competition are classifi...

  4. Bottom-up/top-down high resolution, high throughput lithography using vertically assembled block bottle brush polymers

    Science.gov (United States)

    Trefonas, Peter; Thackeray, James W.; Sun, Guorong; Cho, Sangho; Clark, Corrie; Verkhoturov, Stanislav V.; Eller, Michael J.; Li, Ang; Pavía-Jiménez, Adriana; Schweikert, Emile A.; Wooley, Karen L.

    2013-03-01

    We describe a novel deterministic bottom-up / top-down approach to sub-30 nm photolithography using a film composed of assembled block brush polymers of highly uniform composition and chain length. The polymer architecture consists of a rigid backbone of polymerized norbornene, each linked to flexible short side brush chains. The resultant `bottle brush' topology has a cylindrical shape with short brush chains arranged concentrically around the backbone, in which the cylinder radius is determined by the number of monomers within the brush fragment, while the cylinder length is determined by the degree of backbone polymerization. The modularity of the synthetic system allows a wide diversity of lithographically useful monomers, sequencing, dimension and property variation. Sequential grafting of pre-synthesized blocks allows for facile formation of either concentric or lengthwise block copolymers. Placement of brush chains of different compositions along different regions of the cylinder, along with variation of the relative concentric and lengthwise dimensions, provides mechanisms to align and control placement of the cylinders. These polymers are compatible with photoacid generators (PAGs) and crosslinker functionality. Our results are consistent with a model that the bottle brush polymers assemble (bottom-up) in the film to yield a `forest' of vertically arranged cylindrical block brush polymers, with the film thickness determined by the coherence lengths of the cylinders. Subsequent imaging via electron beam (EB or ebeam) or optical radiation yields a (top-down) mechanism for acid catalyzed crosslinking of adjacent cylinders. Uncrosslinked cylinders are removed in developer to yield negative photoresist patterns. Exposure doses are very low and throughputs are amenable to the requirements of Extreme Ultraviolet (EUV) lithography. The limiting resolution with ebeam exposure is potentially about two cylinder diameters width (< 8 nm), with the smallest observed

  5. Bottom-up/top-down, high-resolution, high-throughput lithography using vertically assembled block bottle brush polymers

    Science.gov (United States)

    Trefonas, Peter; Thackeray, James W.; Sun, Guorong; Cho, Sangho; Clark, Corrie; Verkhoturov, Stanislav V.; Eller, Michael J.; Li, Ang; Pavia-Sanders, Adriana; Schweikert, Emile A.; Wooley, Karen L.

    2013-10-01

    We describe a novel deterministic bottom-up/top-down approach to sub-30-nm photolithography using a film composed of assembled block brush polymers of highly uniform composition and chain length. The polymer architecture consists of a rigid backbone of polymerized norbornene, each linked to flexible short side brush chains. The resultant bottle brush topology has a cylindrical shape with short brush chains arranged concentrically around the backbone, in which the cylinder radius is determined by the number of monomers within the brush fragment, while the cylinder length is determined by the degree of backbone polymerization. The modularity of the synthetic system allows a wide diversity of lithographically useful monomers, sequencing, dimension, and property variation. Sequential grafting of presynthesized blocks allows for facile formation of either concentric or lengthwise block copolymers. Placement of brush chains of different compositions along different regions of the cylinder, along with variation of the relative concentric and lengthwise dimensions, provides mechanisms to align and control placement of the cylinders. These polymers are compatible with photoacid generators and crosslinker functionality. Our results are consistent with a model that the bottle brush polymers assemble (bottom-up) in the film to yield a forest of vertically arranged cylindrical block brush polymers, with the film thickness determined by the coherence lengths of the cylinders. Subsequent imaging via electron beam (e-beam) or optical radiation yields a (top-down) mechanism for acid catalyzed crosslinking of adjacent cylinders. Uncrosslinked cylinders are removed in developer to yield negative photoresist patterns. Exposure doses are very low and throughputs are amenable to the requirements of extreme ultraviolet lithography. The limiting resolution with e-beam exposure is potentially about two cylinder diameters width (<8 nm), with the smallest observed patterns approaching 10 nm.

  6. Biochemistry-directed hollow porous microspheres: bottom-up self-assembled polyanion-based cathodes for sodium ion batteries.

    Science.gov (United States)

    Lin, Bo; Li, Qiufeng; Liu, Baodong; Zhang, Sen; Deng, Chao

    2016-04-21

    Biochemistry-directed synthesis of functional nanomaterials has attracted great interest in energy storage, catalysis and other applications. The unique ability of biological systems to guide molecule self-assembling facilitates the construction of distinctive architectures with desirable physicochemical characteristics. Herein, we report a biochemistry-directed "bottom-up" approach to construct hollow porous microspheres of polyanion materials for sodium ion batteries. Two kinds of polyanions, i.e. Na3V2(PO4)3 and Na3.12Fe2.44(P2O7)2, are employed as cases in this study. The microalgae cell realizes the formation of a spherical "bottom" bio-precursor. Its tiny core is subjected to destruction and its tough shell tends to carbonize upon calcination, resulting in the hollow porous microspheres for the "top" product. The nanoscale crystals of the polyanion materials are tightly enwrapped by the highly-conductive framework in the hollow microsphere, resulting in the hierarchical nano-microstructure. The whole formation process is disclosed as a "bottom-up" mechanism. Moreover, the biochemistry-directed self-assembly process is confirmed to play a crucial role in the construction of the final architecture. Taking advantage of the well-defined hollow-microsphere architecture, the abundant interior voids and the highly-conductive framework, polyanion materials show favourable sodium-intercalation kinetics. Both materials are capable of high-rate long-term cycling. After five hundred cycles at 20 C and 10 C, Na3V2(PO4)3 and Na3.12Fe2.44(P2O7)2 retain 96.2% and 93.1% of the initial capacity, respectively. Therefore, the biochemistry-directed technique provides a low-cost, highly-efficient and widely applicable strategy to produce high-performance polyanion-based cathodes for sodium ion batteries.

  7. Fixation and saliency during search of natural scenes: the case of visual agnosia.

    Science.gov (United States)

    Foulsham, Tom; Barton, Jason J S; Kingstone, Alan; Dewhurst, Richard; Underwood, Geoffrey

    2009-07-01

    Models of eye movement control in natural scenes often distinguish between stimulus-driven processes (which guide the eyes to visually salient regions) and those based on task and object knowledge (which depend on expectations or identification of objects and scene gist). In the present investigation, the eye movements of a patient with visual agnosia were recorded while she searched for objects within photographs of natural scenes and compared to those made by students and age-matched controls. Agnosia is assumed to disrupt the top-down knowledge available in this task, and so may increase the reliance on bottom-up cues. The patient's deficit in object recognition was seen in poor search performance and inefficient scanning. The low-level saliency of target objects had an effect on responses in visual agnosia, and the most salient region in the scene was more likely to be fixated by the patient than by controls. An analysis of model-predicted saliency at fixation locations indicated a closer match between fixations and low-level saliency in agnosia than in controls. These findings are discussed in relation to saliency-map models and the balance between high and low-level factors in eye guidance.
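
    The core analysis described here, comparing model-predicted saliency at fixated locations against a baseline, can be sketched in a few lines. Everything below is a synthetic stand-in (an invented saliency map, "fixations" taken as the most salient pixels to mimic a purely saliency-driven scanner, random control locations), not the study's data:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic bottom-up saliency map (values in [0, 1); higher = more salient).
saliency = rng.random((60, 80))

def mean_saliency_at(points, smap):
    """Mean model-predicted saliency at a set of (row, col) locations."""
    return float(np.mean([smap[r, c] for r, c in points]))

# Stand-in "fixations": the 10 most salient pixels, i.e. what a purely
# saliency-driven observer would select. Controls: uniformly random locations.
flat = np.argsort(saliency, axis=None)[-10:]
fixations = [divmod(int(i), saliency.shape[1]) for i in flat]
controls = [(int(rng.integers(60)), int(rng.integers(80))) for _ in range(10)]

obs = mean_saliency_at(fixations, saliency)
ctl = mean_saliency_at(controls, saliency)
print(obs > ctl)  # saliency-driven fixations land on high-saliency regions
```

    The gap between `obs` and `ctl` is the kind of statistic such studies use to quantify how closely scanning follows a saliency map; for a task-driven observer the gap shrinks.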

  8. Identifying robust communities and multi-community nodes by combining top-down and bottom-up approaches to clustering.

    Science.gov (United States)

    Gaiteri, Chris; Chen, Mingming; Szymanski, Boleslaw; Kuzmin, Konstantin; Xie, Jierui; Lee, Changkyu; Blanche, Timothy; Chaibub Neto, Elias; Huang, Su-Chun; Grabowski, Thomas; Madhyastha, Tara; Komashko, Vitalina

    2015-11-09

    Biological functions are carried out by groups of interacting molecules, cells or tissues, known as communities. Membership in these communities may overlap when biological components are involved in multiple functions. However, traditional clustering methods detect non-overlapping communities. These detected communities may also be unstable and difficult to replicate, because traditional methods are sensitive to noise and parameter settings. These aspects of traditional clustering methods limit our ability to detect biological communities, and therefore our ability to understand biological functions. To address these limitations and detect robust overlapping biological communities, we propose an unorthodox clustering method called SpeakEasy which identifies communities using top-down and bottom-up approaches simultaneously. Specifically, nodes join communities based on their local connections, as well as global information about the network structure. This method can quantify the stability of each community, automatically identify the number of communities, and quickly cluster networks with hundreds of thousands of nodes. SpeakEasy shows top performance on synthetic clustering benchmarks and accurately identifies meaningful biological communities in a range of datasets, including: gene microarrays, protein interactions, sorted cell populations, electrophysiology and fMRI brain imaging.

  9. Estimation of Emissions from Sugarcane Field Burning in Thailand Using Bottom-Up Country-Specific Activity Data

    Directory of Open Access Journals (Sweden)

    Wilaiwan Sornpoon

    2014-09-01

    Open burning in sugarcane fields is recognized as a major source of air pollution. However, the assessment of its emission intensity in many regions of the world still lacks information, especially regarding country-specific activity data including biomass fuel load and combustion factor. A site survey was conducted covering 13 sugarcane plantations subject to different farm management practices and climatic conditions. The results showed that pre-harvest and post-harvest burnings are the two main practices followed in Thailand. In 2012, the total production of sugarcane biomass fuel, i.e., dead, dry and fresh leaves, amounted to 10.15 million tonnes, which is equivalent to a fuel density of 0.79 kg∙m−2. The average combustion factor for the pre-harvest and post-harvest burning systems was determined to be 0.64 and 0.83, respectively. Emissions from sugarcane field burning were estimated using the bottom-up country-specific values from the site survey of this study, and the results were compared with those obtained using default values from the 2006 IPCC Guidelines. The comparison showed that using the default values leads to underestimating the overall emissions by up to 30%, because emissions from post-harvest burning, the second most common practice in Thailand, are not accounted for.
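
    The bottom-up estimate rests on the generic IPCC product of activity data and an emission factor: emissions = area × fuel load × combustion factor × emission factor. A minimal sketch, reusing the survey's fuel density (0.79 kg∙m−2) and combustion factors (0.64 pre-harvest, 0.83 post-harvest) but with an invented burned area and an illustrative PM2.5 emission factor:

```python
def burning_emissions(area_m2, fuel_kg_m2, combustion_factor, ef_g_kg):
    """Emissions in tonnes for one burning practice (IPCC-style product)."""
    burned_kg = area_m2 * fuel_kg_m2 * combustion_factor
    return burned_kg * ef_g_kg / 1e6  # grams -> tonnes

FUEL = 0.79       # kg dry matter per m^2 (survey value from the abstract)
EF_PM25 = 8.3     # g PM2.5 per kg burned -- illustrative placeholder only

pre = burning_emissions(1e7, FUEL, 0.64, EF_PM25)   # pre-harvest, 1000 ha
post = burning_emissions(1e7, FUEL, 0.83, EF_PM25)  # post-harvest, 1000 ha
print(round(pre, 1), round(post, 1))  # tonnes of PM2.5
```

    The point of the country-specific data is visible in the two calls: the same field burned post-harvest (combustion factor 0.83) emits roughly 30% more than pre-harvest, which is exactly the contribution a default-only inventory misses.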

  10. Temporal shifts in top-down vs. bottom-up control of epiphytic algae in a seagrass ecosystem

    Science.gov (United States)

    Whalen, Matthew A.; Duffy, J. Emmett; Grace, James B.

    2013-01-01

    In coastal marine food webs, small invertebrate herbivores (mesograzers) have long been hypothesized to occupy an important position facilitating dominance of habitat-forming macrophytes by grazing competitively superior epiphytic algae. Because of the difficulty of manipulating mesograzers in the field, however, their impacts on community organization have rarely been rigorously documented. Understanding mesograzer impacts has taken on increased urgency in seagrass systems due to declines in seagrasses globally, caused in part by widespread eutrophication favoring seagrass overgrowth by faster-growing algae. Using cage-free field experiments in two seasons (fall and summer), we present experimental confirmation that mesograzer reduction and nutrients can promote blooms of epiphytic algae growing on eelgrass (Zostera marina). In this study, nutrient additions increased epiphytes only in the fall following natural decline of mesograzers. In the summer, experimental mesograzer reduction stimulated a 447% increase in epiphytes, appearing to exacerbate seasonal dieback of eelgrass. Using structural equation modeling, we illuminate the temporal dynamics of complex interactions between macrophytes, mesograzers, and epiphytes in the summer experiment. An unexpected result emerged from investigating the interaction network: drift macroalgae indirectly reduced epiphytes by providing structure for mesograzers, suggesting that the net effect of macroalgae on seagrass depends on macroalgal density. Our results show that mesograzers can control proliferation of epiphytic algae, that top-down and bottom-up forcing are temporally variable, and that the presence of macroalgae can strengthen top-down control of epiphytic algae, potentially contributing to eelgrass persistence.

  11. Bioenergy decision-making of farms in Northern Finland: Combining the bottom-up and top-down perspectives

    Energy Technology Data Exchange (ETDEWEB)

    Snaekin, Juha-Pekka, E-mail: juhapekkasnakin@luukku.co [University of Oulu, Department of Geography, P.O. Box 3000, FIN-90014 Oulu (Finland); Muilu, Toivo; Pesola, Tuomo [University of Oulu, Department of Geography, P.O. Box 3000, FIN-90014 Oulu (Finland)

    2010-10-15

    Finnish farmers' role as energy producers is small compared to their role as energy resource owners. Since climate and energy policy in Finland continues favoring large-scale energy visions, additional investment support for agriculture will stay modest. To utilize fully the energy potential in farms, we analyze the farmers' decision-making environment. First, we present an overview of the Finnish energy policy and economy and their effect on farms (the top-down perspective). Then we analyze the drivers behind the bioenergy decisions of farms in general and in the Oulu region, located in Northern Finland (the bottom-up perspective). There is weak policy coherence between national and regional energy efforts. Strong pressure is placed on farmers to improve their business and marketing knowledge, innovation and financial abilities, education level, and networking skills. In the Oulu region, bioenergy forerunners can be divided in three different groups - investors, entrepreneurs and hobbyists - that have different levels of commitment to their energy businesses. This further stresses the importance of getting quality business services from numerous service providers.

  13. A bottom-up approach to identifying the maximum operational adaptive capacity of water resource systems to a changing climate

    Science.gov (United States)

    Culley, S.; Noble, S.; Yates, A.; Timbs, M.; Westra, S.; Maier, H. R.; Giuliani, M.; Castelletti, A.

    2016-09-01

    Many water resource systems have been designed assuming that the statistical characteristics of future inflows are similar to those of the historical record. This assumption is no longer valid due to large-scale changes in the global climate, potentially causing declines in water resource system performance, or even complete system failure. Upgrading system infrastructure to cope with climate change can require substantial financial outlay, so it might be preferable to optimize existing system performance when possible. This paper builds on decision scaling theory by proposing a bottom-up approach to designing optimal feedback control policies for a water system exposed to a changing climate. This approach not only describes optimal operational policies for a range of potential climatic changes but also enables an assessment of a system's upper limit of its operational adaptive capacity, beyond which upgrades to infrastructure become unavoidable. The approach is illustrated using the Lake Como system in Northern Italy—a regulated system with a complex relationship between climate and system performance. By optimizing system operation under different hydrometeorological states, it is shown that the system can continue to meet its minimum performance requirements for more than three times as many states as it can under current operations. Importantly, a single management policy, no matter how robust, cannot fully utilize existing infrastructure as effectively as an ensemble of flexible management policies that are updated as the climate changes.
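
    The capacity assessment can be caricatured in a few lines: score a policy over a grid of hypothetical climate states, count the states in which performance stays acceptable, and compare one fixed policy against a state-by-state re-optimised ensemble. The performance index, states, and release values below are an invented toy, not the Lake Como model:

```python
def performance(inflow, release):
    """Toy supply index: reward delivered water, penalise over-release."""
    return 2 * min(inflow, release) - abs(release - inflow)

states = range(4, 12)   # hypothetical climate states, dry (4) to wet (11)
REQUIRED = 14           # minimum acceptable performance

# Fixed historical policy: always plan for a release of 8 units.
fixed = sum(performance(i, 8) >= REQUIRED for i in states)

# Adaptive capacity: for each state, use the best release in an ensemble.
adaptive = sum(
    max(performance(i, r) for r in range(4, 12)) >= REQUIRED for i in states
)
print(fixed, adaptive)  # the flexible ensemble satisfies more states
```

    The difference between the two counts is the operational adaptive capacity the paper describes: states beyond the ensemble's reach are the ones where infrastructure upgrades become unavoidable.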

  14. A bottom-up, scientist-based initiative for the communication of climate sciences with the general public

    Science.gov (United States)

    Bourqui, Michel; Bolduc, Cassandra; Paul, Charbonneau; Marie, Charrière; Daniel, Hill; Angelica, Lopez; Enrique, Loubet; Philippe, Roy; Barbara, Winter

    2015-04-01

    This talk introduces a new scientist-initiated online platform whose aim is to contribute to making climate sciences public knowledge. It takes a unique bottom-up approach, strictly founded on individual participation, high scientific standards and independence. The main purpose is to build an open-access, multilingual and peer-reviewed journal publishing short climate articles in non-scientific language. The targeted public includes journalists, teachers, students, local politicians, economists, members of the agriculture sector, and any other citizens from around the world with an interest in climate sciences. This journal is meant to offer a simple and direct channel for scientists wishing to disseminate their research to the general public. A high standard of climate articles is ensured through: a) requiring that the main author is an active climate scientist, and b) an innovative peer-review process involving scientific and non-scientific referees with distinct roles. The platform fosters the direct participation of non-scientists through co-authoring, peer-reviewing and language translation. It furthermore engages the general public in the scientific inquiry by allowing non-scientists to invite manuscripts on topics of their concern. The platform is currently being developed by a community of scientists and non-scientists. In this talk, I will present the basic ideas behind this new online platform, its current state, and plans for the near future. The beta version of the platform is available at: http://www.climateonline.bourquiconsulting.ch

  15. Encouraging the pursuit of advanced degrees in science and engineering: Top-down and bottom-up methodologies

    Science.gov (United States)

    Maddox, Anthony B.; Smith-Maddox, Renee P.; Penick, Benson E.

    1989-01-01

    The MassPEP/NASA Graduate Research Development Program (GRDP) whose objective is to encourage Black Americans, Mexican Americans, American Indians, Puerto Ricans, and Pacific Islanders to pursue graduate degrees in science and engineering is described. The GRDP employs a top-down or goal driven methodology through five modules which focus on research, graduate school climate, technical writing, standardized examinations, and electronic networking. These modules are designed to develop and reinforce some of the skills necessary to seriously consider the goal of completing a graduate education. The GRDP is a community-based program which seeks to recruit twenty participants from a pool of Boston-area undergraduates enrolled in engineering and science curriculums and recent graduates with engineering and science degrees. The program emphasizes that with sufficient information, its participants can overcome most of the barriers perceived as preventing them from obtaining graduate science and engineering degrees. Experience has shown that the top-down modules may be complemented by a more bottom-up or event-driven methodology. This approach considers events in the academic and professional experiences of participants in order to develop the personal and leadership skills necessary for graduate school and similar endeavors.

  16. Bottom-up low molecular weight heparin analysis using liquid chromatography-Fourier transform mass spectrometry for extensive characterization.

    Science.gov (United States)

    Li, Guoyun; Steppich, Julia; Wang, Zhenyu; Sun, Yi; Xue, Changhu; Linhardt, Robert J; Li, Lingyun

    2014-07-01

    Low molecular weight heparins (LMWHs) are heterogeneous, polydisperse, and highly negatively charged mixtures of glycosaminoglycan chains prescribed as anticoagulants. The detailed characterization of LMWH is important for the drug quality assurance and for new drug research and development. In this study, online hydrophilic interaction chromatography (HILIC) Fourier transform mass spectrometry (FTMS) was applied to analyze the oligosaccharide fragments of LMWHs generated by heparin lyase II digestion. More than 40 oligosaccharide fragments of LMWH were quantified and used to compare LMWHs prepared by three different manufacturers. The quantified fragment structures included unsaturated disaccharides/oligosaccharides arising from the prominent repeating units of these LMWHs, 3-O-sulfo containing tetrasaccharides arising from their antithrombin III binding sites, 1,6-anhydro ring-containing oligosaccharides formed during their manufacture, saturated uronic acid oligosaccharides coming from some chain nonreducing ends, and oxidized linkage region oligosaccharides coming from some chain reducing ends. This bottom-up approach provides rich detailed structural analysis and quantitative information with high accuracy and reproducibility. When combined with the top-down approach, HILIC LC-FTMS based analysis should be suitable for the advanced quality control and quality assurance in LMWH production.

  17. A bottom-up method for module-based product platform development through mapping, clustering and matching analysis

    Institute of Scientific and Technical Information of China (English)

    ZHANG Meng; LI Guo-xi; CAO Jian-ping; GONG Jing-zhong; WU Bao-zhong

    2016-01-01

    Designing a product platform can be an effective and efficient solution for manufacturing firms. Product platforms enable firms to provide increased product variety for the marketplace with as little variety between products as possible. Consumer products and modules already developed within a firm can be investigated further to assess the feasibility of creating a product platform. A bottom-up method is proposed for module-based product platform development through mapping, clustering and matching analysis. The framework and the parametric model of the method are presented, which consist of three steps: (1) mapping parameters from existing product families to functional modules, (2) clustering the modules within existing module families based on their parameters so as to generate module clusters, and selecting the satisfactory module clusters based on commonality, and (3) matching the parameters of the module clusters to the functional modules in order to capture platform elements. In addition, a parameter matching criterion and mismatching treatment are put forward to ensure the effectiveness of the platform process, while standardization and serialization of the platform elements are presented. A design case of a belt conveyor is studied to demonstrate the feasibility of the proposed method.
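
    The clustering-and-commonality step of such a method might look like this minimal sketch: group modules whose key parameter values lie within a tolerance (a simple bottom-up single-linkage pass over one parameter), then keep clusters whose commonality, the share of existing variants they serve, meets a threshold. The module names, parameter values, tolerance, and threshold are all hypothetical:

```python
# Hypothetical motor modules and their rated power (kW).
modules = {"M1": 5.4, "M2": 5.6, "M3": 7.9, "M4": 8.1, "M5": 8.2, "M6": 15.0}
TOL = 0.5                            # values this close count as "common"
MIN_COMMONALITY = 2 / len(modules)   # keep clusters serving >= 2 variants

# Bottom-up single-linkage pass over the sorted parameter values.
items = sorted(modules.items(), key=lambda kv: kv[1])
clusters = [[items[0]]]
for name, value in items[1:]:
    if value - clusters[-1][-1][1] <= TOL:  # close to cluster's last member
        clusters[-1].append((name, value))
    else:
        clusters.append([(name, value)])

# Select satisfactory clusters by commonality; these seed platform elements.
platform = [sorted(n for n, _ in c) for c in clusters
            if len(c) / len(modules) >= MIN_COMMONALITY]
print(platform)
```

    Singleton modules like the outlier here are left off the platform; a real implementation would cluster full parameter vectors and then match cluster parameters back to functional modules, as the abstract's step (3) describes.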

  18. Estimation of the measurement uncertainty by the bottom-up approach for the determination of methamphetamine and amphetamine in urine.

    Science.gov (United States)

    Lee, Sooyeun; Choi, Hyeyoung; Kim, Eunmi; Choi, Hwakyung; Chung, Heesun; Chung, Kyu Hyuck

    2010-05-01

    The measurement uncertainty (MU) of methamphetamine (MA) and amphetamine (AP) was estimated in an authentic urine sample with a relatively low concentration of MA and AP using the bottom-up approach. A cause and effect diagram was deduced; the amount of MA or AP in the sample, the volume of the sample, method precision, and sample effect were considered uncertainty sources. The concentrations of MA and AP in the urine sample with their expanded uncertainties were 340.5 +/- 33.2 ng/mL and 113.4 +/- 15.4 ng/mL, respectively; that is, the expanded uncertainties amounted to 9.7% and 13.6% of the respective concentrations. The largest uncertainty originated from the sample effect for MA and from method precision for AP, while the uncertainty from the sample volume was minimal for both. The MU needs to be determined during the method validation process to assess test reliability. Moreover, identifying the largest and/or smallest uncertainty sources can help improve experimental protocols.
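
    The bottom-up (GUM-style) combination behind such a budget expresses each source as a relative standard uncertainty, adds them in quadrature, and applies a coverage factor. A sketch using the paper's MA concentration but illustrative placeholder components, not the paper's actual budget:

```python
import math

def expanded_uncertainty(value, rel_components, k=2.0):
    """Combine relative standard uncertainties in quadrature, then expand."""
    u_rel = math.sqrt(sum(u * u for u in rel_components))
    return k * u_rel * value

# MA at 340.5 ng/mL; hypothetical relative standard uncertainties for the
# amount, sample volume, method precision, and sample effect, respectively.
U = expanded_uncertainty(340.5, [0.020, 0.005, 0.025, 0.035], k=2.0)
print(round(U, 1))  # expanded uncertainty in ng/mL
```

    With coverage factor k = 2 the expanded uncertainty corresponds to a confidence level of roughly 95%; the quadrature sum also makes clear why the smallest component (here the volume term) contributes almost nothing to the total.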

  19. D-Branes at Singularities A Bottom-Up Approach to the String Embedding of the Standard Model

    CERN Document Server

    Aldazabal, G; Quevedo, Fernando; Uranga, Angel M

    2000-01-01

    We propose a bottom-up approach to the building of particle physics models from string theory. Our building blocks are Type II D-branes which we combine appropriately to reproduce desirable features of a particle theory model: 1) Chirality; 2) Standard Model group; 3) N=1 or N=0 supersymmetry; 4) Three quark-lepton generations. We start such a program by studying configurations of D=10, Type IIB D3-branes located at singularities. We study in detail the case of Z_N, N=1,0 supersymmetric orbifold singularities leading to the SM group or some left-right symmetric extension. In general, tadpole cancellation conditions require the presence of additional branes, e.g. D7-branes. For the N=1 supersymmetric case the unique twist leading to three quark-lepton generations is Z_3, predicting $\sin^2\theta_W=3/14=0.21$. The models obtained are the simplest semirealistic string models ever built. In the non-supersymmetric case there is a three-generation model for each Z_N, N>4, but the Weinberg angle is in general too ...

  20. Biochemistry-directed hollow porous microspheres: bottom-up self-assembled polyanion-based cathodes for sodium ion batteries

    Science.gov (United States)

    Lin, Bo; Li, Qiufeng; Liu, Baodong; Zhang, Sen; Deng, Chao

    2016-04-01

    Biochemistry-directed synthesis of functional nanomaterials has attracted great interest in energy storage, catalysis and other applications. The unique ability of biological systems to guide molecule self-assembling facilitates the construction of distinctive architectures with desirable physicochemical characteristics. Herein, we report a biochemistry-directed ``bottom-up'' approach to construct hollow porous microspheres of polyanion materials for sodium ion batteries. Two kinds of polyanions, i.e. Na3V2(PO4)3 and Na3.12Fe2.44(P2O7)2, are employed as cases in this study. The microalgae cell realizes the formation of a spherical ``bottom'' bio-precursor. Its tiny core is subjected to destruction and its tough shell tends to carbonize upon calcination, resulting in the hollow porous microspheres for the ``top'' product. The nanoscale crystals of the polyanion materials are tightly enwrapped by the highly-conductive framework in the hollow microsphere, resulting in the hierarchical nano-microstructure. The whole formation process is disclosed as a ``bottom-up'' mechanism. Moreover, the biochemistry-directed self-assembly process is confirmed to play a crucial role in the construction of the final architecture. Taking advantage of the well-defined hollow-microsphere architecture, the abundant interior voids and the highly-conductive framework, polyanion materials show favourable sodium-intercalation kinetics. Both materials are capable of high-rate long-term cycling. After five hundred cycles at 20 C and 10 C, Na3V2(PO4)3 and Na3.12Fe2.44(P2O7)2 retain 96.2% and 93.1% of the initial capacity, respectively. Therefore, the biochemistry-directed technique provides a low-cost, highly-efficient and widely applicable strategy to produce high-performance polyanion-based cathodes for sodium ion batteries.

  1. A novel bottom-up process to produce nanoparticles containing protein and peptide for suspension in hydrofluoroalkane propellants.

    Science.gov (United States)

    Tan, Yinhe; Yang, Zhiwen; Peng, Xinsheng; Xin, Feng; Xu, Yuehong; Feng, Min; Zhao, Chunshun; Hu, Haiyan; Wu, Chuanbin

    2011-07-15

    To overcome the disadvantages of microemulsion and nanoprecipitation methods for producing protein-containing nanoparticles, a novel bottom-up process was developed to produce nanoparticles containing the model protein lysozyme. The nanoparticles were generated by freeze-drying a solution of lysozyme, lecithin and lactose in a tert-butyl alcohol (TBA)/water co-solvent system and washing off excess lecithin in the lyophilizate by centrifugation. Formulation parameters such as lecithin concentration in the organic phase, water content in the TBA/water co-solvent, and lactose concentration in water were optimized so as to obtain the desired nanoparticles while retaining the bioactivity of lysozyme. Based on the results, 24.0% (w/v) lecithin, 37.5% (v/v) water content, and 0.56% (w/v) lactose were selected to generate spherical nanoparticles approximately 200 nm in mean size, with a polydispersity index (PI) of 0.1 and 99% retained bioactivity of lysozyme. These nanoparticles, rinsed with ethanol containing dipalmitoylphosphatidylcholine (DPPC), Span 85 or oleic acid (3%, w/v), could readily be dispersed in HFA 134a to form a stable suspension with good redispersibility and 98% retained bioactivity of lysozyme. The study indicates the potential to produce pressurized metered-dose inhaler (pMDI) formulations containing therapeutic protein and peptide nanoparticles.

  2. Conservative and dissipative force field for simulation of coarse-grained alkane molecules: A bottom-up approach

    Energy Technology Data Exchange (ETDEWEB)

    Trément, Sébastien; Rousseau, Bernard, E-mail: bernard.rousseau@u-psud.fr [Laboratoire de Chimie-Physique, UMR 8000 CNRS, Université Paris-Sud, Orsay (France); Schnell, Benoît; Petitjean, Laurent; Couty, Marc [Manufacture Française des Pneumatiques MICHELIN, Centre de Ladoux, 23 place des Carmes, 63000 Clermont-Ferrand (France)

    2014-04-07

    We apply operational procedures available in the literature to the construction of coarse-grained conservative and friction forces for use in dissipative particle dynamics (DPD) simulations. The full procedure relies on a bottom-up approach: large molecular dynamics trajectories of n-pentane and n-decane, modeled with an anisotropic united-atom model, serve as input for the force field generation. As a consequence, the coarse-grained model is expected to reproduce at least semi-quantitatively the structural and dynamical properties of the underlying atomistic model. Two different coarse-graining levels are studied, corresponding to five and ten carbon atoms per DPD bead. The influence of the coarse-graining level on the generated force field contributions, namely the conservative and the friction parts, is discussed. It is shown that the coarse-grained model of n-pentane correctly reproduces the self-diffusion and viscosity coefficients of real n-pentane, while the fully coarse-grained model for n-decane at ambient temperature over-predicts diffusion by a factor of 2. However, when the n-pentane coarse-grained model is used as a building block for larger molecules (e.g., n-decane as a two-blob model), much better agreement with experimental data is obtained, suggesting that the constructed force field is transferable to large macromolecular systems.
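The conservative, dissipative and random contributions such a DPD force field comprises take, in the standard Groot-Warren formulation, the pairwise form sketched below. This is a generic illustration with textbook default parameters, not the alkane-specific forces fitted in the paper:

```python
import numpy as np

def dpd_pair_force(r_ij, v_ij, a=25.0, gamma=4.5, sigma=3.0, r_c=1.0,
                   dt=0.01, rng=None):
    """Groot-Warren DPD pairwise force: conservative + dissipative + random.

    r_ij, v_ij : relative position and velocity of a bead pair.
    a, gamma, sigma : repulsion, friction and noise amplitudes (generic
        defaults; the fluctuation-dissipation theorem ties gamma and sigma
        together via sigma**2 = 2 * gamma * kB * T).
    """
    if rng is None:
        rng = np.random.default_rng(0)
    r = np.linalg.norm(r_ij)
    if r >= r_c:                       # forces vanish beyond the cutoff
        return np.zeros(3)
    e = r_ij / r                       # unit vector along the pair
    w = 1.0 - r / r_c                  # linear weight function w(r)
    f_c = a * w * e                                            # conservative
    f_d = -gamma * w ** 2 * np.dot(e, v_ij) * e                # dissipative
    f_r = sigma * w * rng.standard_normal() / np.sqrt(dt) * e  # random
    return f_c + f_d + f_r
```

The dissipative term damps the relative velocity along the pair axis while the random term re-injects thermal energy, so the pair acts as a momentum-conserving thermostat.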

  3. A Nonminimal SO(10) x U(1)-F SUSY GUT model obtained from a bottom up approach

    Energy Technology Data Exchange (ETDEWEB)

    Albright, Carl H.

    1996-08-01

    Many of the ingredients are explored that are needed to develop a supersymmetric SO(10) x U(1)_F grand unified model based on the Yukawa structure of a model previously constructed in collaboration with S. Nandi to explain the quark and lepton masses and mixings in a particular neutrino scenario. The U(1)_F family symmetry can be made anomaly-free with the introduction of one conjugate pair of SO(10)-singlet neutrinos with the same U(1)_F charge. Due to a plethora of conjugate pairs of supermultiplets, the model develops a Landau singularity within a factor of 1.5 above the GUT scale. With the imposition of a Z_2 discrete symmetry and under certain conditions, all higgsino triplets can be made superheavy while just one pair of higgsino doublets remains light, resulting in the mass matrix textures previously obtained from the bottom-up approach. Diametrically opposite splitting of the first and third family scalar quark and lepton masses away from the second family ones results from the nonuniversal D-term contributions.

  4. A comparative 'bottom up' proteomics strategy for the site-specific identification and quantification of protein modifications by electrophilic lipids.

    Science.gov (United States)

    Han, Bingnan; Hare, Michael; Wickramasekara, Samanthi; Fang, Yi; Maier, Claudia S

    2012-10-22

    We report a mass spectrometry-based comparative "bottom up" proteomics approach that combines d0/d4-succinic anhydride labeling with commercially available hydrazine (Hz)-functionalized beads (Affi-Gel Hz beads) for the detection, identification and relative quantification of site-specific oxylipid modifications in biological matrices. We evaluated and applied this robust and simple method for the quantitative analysis of oxylipid-protein conjugates in cardiac mitochondrial proteome samples isolated from 3- and 24-month-old rat hearts. The use of d0/d4-succinic anhydride labeling, Hz-bead-based affinity enrichment, nanoLC fractionation and MALDI-ToF/ToF tandem mass spectrometry yielded relative quantification of oxylipid conjugates with residue-specific modification information. Conjugates of acrolein (ACR), 4-hydroxy-2-hexenal (HHE), 4-hydroxy-2-nonenal (HNE) and 4-oxo-2-nonenal (ONE) with cysteine, histidine and lysine residues were identified. HHE conjugates were the predominant subset of Michael-type adducts detected in this study. The HHE conjugates showed higher levels in mitochondrial preparations from young heart, congruent with previous findings by others that the n-3/n-6 PUFA ratio is higher in young heart mitochondrial membranes. Although this study focuses on protein adducts of reactive oxylipids, the method might be equally applicable to protein carbonyl modifications caused by metal-catalyzed oxidation reactions.
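Relative quantification in such light/heavy labeling schemes reduces, per peptide pair, to comparing the intensities of the d0 and d4 channels, which appear as a 4-Da-spaced doublet. A toy sketch (function names and intensities are illustrative, not from the paper):

```python
import math

def d0_d4_ratio(light, heavy):
    """Relative level of a modified peptide between two samples, from the
    intensities of its d0- (light) and d4-succinylated (heavy) forms."""
    if heavy <= 0:
        raise ValueError("heavy channel not detected")
    return light / heavy

def log2_fold_change(light, heavy):
    """log2 ratio, the usual scale for reporting up-/down-regulation."""
    return math.log2(d0_d4_ratio(light, heavy))
```

For example, a doublet with light intensity twice the heavy intensity corresponds to a log2 fold change of 1.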

  5. Beyond Defining the Smart City. Meeting Top-Down and Bottom-Up Approaches in the Middle

    Directory of Open Access Journals (Sweden)

    Jonas Breuer

    2014-05-01

    Full Text Available This paper aims to better frame the discussion and the various, divergent operationalisations and interpretations of the Smart City concept. We start by explicating top-down approaches to the Smart City, followed by what purely bottom-up initiatives can look like. We provide a clear overview of stakeholders’ different viewpoints on the city of tomorrow. Particularly the consequences and potential impacts of these differing interpretations and approaches should be of specific interest to researchers, policy makers, city administrations, private actors and anyone involved and concerned with life in cities. Therefore the goal of this article is not so much answering the question of what the Smart City is, but rather what the concept can mean for different stakeholders as well as the consequences of their interpretation. We do this by assembling an eclectic overview, bringing together definitions, examples and operationalisations from academia, policy and industry as well as identifying major trends and approaches to realizing the Smart City. We add to the debate by proposing a different approach that starts from the collective, collaboration and context when researching Smart City initiatives.

  6. A bottom-up valence bond derivation of excitation energies in 1D-like delocalized systems.

    Science.gov (United States)

    Kepenekian, Mikaël; Robert, Vincent; Boilleau, Corentin; Malrieu, Jean-Paul

    2012-01-28

    Using the chemically relevant parameters hopping integral t0 and on-site repulsion energy U, the charge gap (the lowest dipole-allowed transition energy) in 1D systems is examined through a bottom-up strategy. The method is based on locally ionized states, whose energies are corrected using short-range delocalization effects. In a valence bond framework, these states interact to produce an excitonic matrix which accounts for the delocalized character of the excited states. The treatment, which gives access to the correlated spectrum of ionization potentials, is entirely analytical and valid whatever the U/|t0| ratio for such systems ruled by Peierls-Hubbard Hamiltonians. This second-order analytical derivation is finally compared with numerical results of a renormalized excitonic treatment using larger blocks as a function of the U/|t0| ratio. The method is applied to dimerized chains and to fused polybenzenic 1D lattices. Such approaches complement the traditional Bloch-function-based picture and deliver a conceptual understanding of the charge-gap opening process based on a chemically intuitive picture.
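The Peierls-Hubbard Hamiltonian the analysis builds on can be written in the standard textbook form (conventions in the paper may differ):

```latex
H \;=\; -\sum_{i,\sigma} t_i \left( c^{\dagger}_{i\sigma} c_{i+1,\sigma} + \mathrm{h.c.} \right)
\;+\; U \sum_i n_{i\uparrow} n_{i\downarrow},
\qquad t_i = t_0 \bigl( 1 + (-1)^i \,\delta \bigr),
```

where the alternation parameter delta encodes the dimerization of the chain and the charge gap is then analyzed as a function of the ratio U/|t0|.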

  7. Bottom-up estimation of joint moments during manual lifting using orientation sensors instead of position sensors.

    Science.gov (United States)

    Faber, Gert S; Kingma, Idsart; van Dieën, Jaap H

    2010-05-01

    L5/S1, hip and knee moments during manual lifting tasks are, in a laboratory environment, frequently established by bottom-up inverse dynamics, using force plates to measure ground reaction forces (GRFs) and an optoelectronic system to measure segment positions and orientations. For field measurements, alternative measurement systems are being developed. One alternative is the use of small body-mounted inertial/magnetic sensors (IMSs) and instrumented force shoes to measure segment orientation and GRFs, respectively. However, because IMSs measure segment orientations only, the positions of segments relative to each other and relative to the GRFs have to be determined by linking them, assuming fixed segment lengths and zero joint translation. This will affect the estimated joint positions and joint moments. This study investigated the effect of using segment orientations only (orientation-based method) instead of using orientations and positions (reference method) on three-dimensional joint moments. To compare analysis methods (and not measurement methods), GRFs were measured with a force plate and segment positions and/or orientations were measured using optoelectronic marker clusters for both analysis methods. Eleven male subjects lifted a box from floor level using three lifting techniques: a stoop, a semi-squat and a squat technique. The difference between the two analysis methods remained small for the knee moments, and moments could be estimated with reasonable accuracy at the hip and L5/S1 joints using segment orientation and GRF data only.
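The bottom-up moment computation common to both analysis methods can be sketched for a single segment in the quasi-static case (inertial terms from segment accelerations are dropped for brevity; names and defaults are illustrative):

```python
import numpy as np

def bottom_up_joint_moment(joint_pos, cop_pos, grf,
                           distal_moment=np.zeros(3),
                           seg_mass=0.0, seg_com=None,
                           g=np.array([0.0, 0.0, -9.81])):
    """Quasi-static bottom-up inverse dynamics for one segment: the moment at
    the proximal joint balances the GRF (applied at the centre of pressure),
    the segment weight, and the moment carried over from the distal joint."""
    m = distal_moment + np.cross(cop_pos - joint_pos, grf)
    if seg_mass and seg_com is not None:
        m += np.cross(seg_com - joint_pos, seg_mass * g)
    return m
```

Applying this joint by joint from the feet upward yields the ankle, knee, hip and finally L5/S1 moments; in the orientation-based method the moment arms come from linked segment orientations rather than measured positions.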

  8. Temporal shifts in top-down vs. bottom-up control of epiphytic algae in a seagrass ecosystem.

    Science.gov (United States)

    Whalen, Matthew A; Duffy, J Emmett; Grace, James B

    2013-02-01

    In coastal marine food webs, small invertebrate herbivores (mesograzers) have long been hypothesized to occupy an important position facilitating dominance of habitat-forming macrophytes by grazing competitively superior epiphytic algae. Because of the difficulty of manipulating mesograzers in the field, however, their impacts on community organization have rarely been rigorously documented. Understanding mesograzer impacts has taken on increased urgency in seagrass systems due to declines in seagrasses globally, caused in part by widespread eutrophication favoring seagrass overgrowth by faster-growing algae. Using cage-free field experiments in two seasons (fall and summer), we present experimental confirmation that mesograzer reduction and nutrients can promote blooms of epiphytic algae growing on eelgrass (Zostera marina). In this study, nutrient additions increased epiphytes only in the fall following natural decline of mesograzers. In the summer, experimental mesograzer reduction stimulated a 447% increase in epiphytes, appearing to exacerbate seasonal dieback of eelgrass. Using structural equation modeling, we illuminate the temporal dynamics of complex interactions between macrophytes, mesograzers, and epiphytes in the summer experiment. An unexpected result emerged from investigating the interaction network: drift macroalgae indirectly reduced epiphytes by providing structure for mesograzers, suggesting that the net effect of macroalgae on seagrass depends on macroalgal density. Our results show that mesograzers can control proliferation of epiphytic algae, that top-down and bottom-up forcing are temporally variable, and that the presence of macroalgae can strengthen top-down control of epiphytic algae, potentially contributing to eelgrass persistence.

  9. A Family of Highly Efficient CuI-Based Lighting Phosphors Prepared by a Systematic, Bottom-up Synthetic Approach.

    Science.gov (United States)

    Liu, Wei; Fang, Yang; Wei, George Z; Teat, Simon J; Xiong, Kecai; Hu, Zhichao; Lustig, William P; Li, Jing

    2015-07-29

    Copper(I) iodide (CuI)-based inorganic-organic hybrid materials with the general chemical formula CuI(L) are well known for their structural diversity and strong photoluminescence and are therefore considered promising candidates for a number of optical applications. In this work, we demonstrate a systematic, bottom-up precursor approach to developing a series of CuI(L) network structures built on CuI rhomboid dimers. These compounds combine strong luminescence, due to the CuI inorganic modules, with significantly enhanced thermal stability, a result of connecting individual building units into robust, extended networks. Examination of their optical properties reveals not only that these materials exhibit exceptionally high photoluminescence performance (with internal quantum yield up to 95%) but also that their emission energy and color are systematically tunable through modification of the organic component. Results from density functional theory calculations provide convincing correlations between these materials' crystal structures and chemical compositions and their photophysical properties. The advantages of cost-effective, solution-processable, easily scalable and fully controllable synthesis, as well as high quantum efficiency with improved thermal stability, make this phosphor family a promising candidate for alternative, rare-earth-free phosphors in general lighting and illumination. This solution-based precursor approach creates a new blueprint for the rational design and controlled synthesis of inorganic-organic hybrid materials.

  10. Saliency location based on color contrast

    Science.gov (United States)

    Vijanprecha, Suchat; Wattuya, Pakaket

    2014-04-01

    Generally, the purposes of saliency detection models for salient object detection and for fixation prediction are complementary. Models for salient object detection aim to discover as many true positives as possible, while models for fixation prediction aim to generate few false positives. In this work, we attempt to combine their strengths. We accomplish this by, firstly, replacing the high-level features frequently used in a fixation prediction model with our new saliency location map in order to make the model more general. Secondly, we train a saliency detection model with human eye-tracking data in order to make the model correspond well to human eye fixations (without the use of top-down attention). We evaluate the performance of our new saliency location map on both saliency detection and fixation prediction datasets in comparison with six state-of-the-art saliency detection models. The experimental results show that the performance of our proposed method is superior to the other methods in salient object detection on the MSRA dataset [1]. For fixation prediction, the results show that our saliency location map performs comparably to the high-level features but requires much less computation time.
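A minimal color-contrast saliency map of the kind such models build on can be sketched as follows. This is a generic global-contrast baseline (each pixel's saliency is its color distance from the mean image color), not the authors' trained saliency location map:

```python
import numpy as np

def color_contrast_saliency(img):
    """Global color-contrast saliency for an (H, W, C) image array:
    pixels whose color deviates most from the image mean score highest.
    Returns an (H, W) map normalized to [0, 1]."""
    img = img.astype(float)
    mean_color = img.reshape(-1, img.shape[-1]).mean(axis=0)
    sal = np.linalg.norm(img - mean_color, axis=-1)
    return sal / sal.max() if sal.max() > 0 else sal
```

On a uniform image the map is zero everywhere; a single odd-colored pixel receives the maximal score, which matches the intuition that rare colors are salient.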

  11. Characterizing the effects of feature salience and top-down attention in the early visual system.

    Science.gov (United States)

    Poltoratski, Sonia; Ling, Sam; McCormack, Devin; Tong, Frank

    2017-04-05

    The visual system employs a sophisticated balance of attentional mechanisms: salient stimuli are prioritized for visual processing, yet observers can also ignore such stimuli when their goals require directing attention elsewhere. A powerful determinant of visual salience is local feature contrast: if a local region differs from its immediate surround along one or more feature dimensions, it will appear more salient. Here, we used high-resolution fMRI at 7T to characterize the modulatory effects of bottom-up salience and top-down voluntary attention within multiple sites along the early visual pathway, including visual areas V1-V4 and the lateral geniculate nucleus (LGN). Observers viewed arrays of spatially distributed gratings, where one of the gratings immediately to the left or right of fixation differed from all other items in orientation or motion direction, making it salient. To investigate the effects of directed attention, observers were cued to attend to the grating to the left or right of fixation, which was either salient or non-salient. Results revealed reliable additive effects of top-down attention and stimulus-driven salience throughout visual areas V1-hV4. In comparison, the LGN exhibited significant attentional enhancement but was not reliably modulated by orientation- or motion-defined salience. Our findings indicate that top-down effects of spatial attention can influence visual processing at the earliest possible site along the visual pathway, including the LGN, while the processing of orientation- and motion-driven salience primarily involves feature-selective interactions that take place in early cortical visual areas.

  12. Isocentric color saliency in images

    NARCIS (Netherlands)

    Valenti, R.; Sebe, N.; Gevers, T.

    2009-01-01

    In this paper we propose a novel computational method to infer visual saliency in images. The computational method is based on the idea that salient objects should have local characteristics that are different than the rest of the scene, being edges, color or shape, and that these characteristics ca

  13. A bottom-up approach to derive the closure relation for modelling hydrological fluxes at the watershed scale

    Science.gov (United States)

    Vannametee, Ekkamol; Karssenberg, Derek; Hendriks, Martin; Bierkens, Marc

    2014-05-01

    Physically-based hydrological modelling can be considered an ideal approach for predictions in ungauged basins because observable catchment characteristics can be used to parameterize the model, avoiding model calibration against discharge data, which are not available. Lumped physically-based modelling at the watershed scale is possible with the Representative Elementary Watershed (REW) approach. A key to successful application of this approach is a reliable way of developing closure relations to calculate fluxes from the different hydrological compartments of the REWs. Here, we present a bottom-up approach as a generic framework to identify closure relations for particular hydrological processes that are scale-independent and can be directly parameterized using local-scale observable REW characteristics. The approach is illustrated using Hortonian runoff as an example. It starts from a physically-based high-resolution model describing the Hortonian runoff mechanism, based on physically-based infiltration theory and runoff generation processes at the local scale. This model is used to generate a synthetic discharge data set of hypothetical rainfall events and hydrological response units (6×10^5 scenarios) as a surrogate for real-world observations. The Hortonian runoff closure relation is developed as a lumped process-based model, consisting of the Green-Ampt equation, a time-lagged linear reservoir model, and three scale-transfer parameters representing the processes within REWs. These scale-transfer parameters are identified by calibrating the closure relation against the synthetic discharge data set for each scenario run and are, in turn, empirically related to the corresponding observable REW properties and rainstorm characteristics. This results in a parameter library, which allows direct estimation of the scaling parameters for arbitrary REWs based on their local-scale observable properties and rainfall characteristics.
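The local-scale building block named here, Green-Ampt infiltration with infiltration-excess (Hortonian) runoff, can be sketched in its textbook form. Parameter values are illustrative; the paper's closure relation adds a time-lagged linear reservoir and scale-transfer parameters on top of this:

```python
def green_ampt_rate(F, K_s, psi, d_theta):
    """Green-Ampt infiltration capacity f(F) = K_s * (1 + psi*d_theta / F),
    where F is cumulative infiltration, K_s saturated conductivity, psi the
    wetting-front suction head and d_theta the moisture deficit."""
    return K_s * (1.0 + psi * d_theta / F)

def hortonian_runoff(rain, dt, K_s=1.0, psi=10.0, d_theta=0.3, F0=1e-6):
    """Step cumulative infiltration through a rainfall series; whenever the
    rainfall rate exceeds the infiltration capacity, the excess becomes
    Hortonian runoff. Returns runoff depth per time step."""
    F, runoff = F0, []
    for r in rain:
        f = green_ampt_rate(F, K_s, psi, d_theta)
        infil = min(r, f)             # infiltration is capacity-limited
        F += infil * dt
        runoff.append((r - infil) * dt)
    return runoff
```

Because the capacity f(F) falls as the soil wets up, a constant heavy rain produces no runoff at first and increasing runoff later, which is the Hortonian mechanism the synthetic scenarios exercise.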

  14. A Bottom-up Energy Efficiency Improvement Roadmap for China’s Iron and Steel Industry up to 2050

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, Qi [Northeastern Univ., Shenyang (China); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Hasanbeigi, Ali [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Price, Lynn [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Lu, Hongyou [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Arens, Marlene [Fraunhofer Inst. for Systems and Innovation Research (ISI), Karlsruhe (Germany)

    2016-09-01

    Iron and steel manufacturing is energy intensive, in China and worldwide. China is the world's largest steel producer, accounting for around half of world steel production. In this study, we use a bottom-up energy consumption model to analyze four steel-production and energy-efficiency scenarios and evaluate the potential for energy savings from energy-efficient technologies in China's iron and steel industry between 2010 and 2050. The results show that China's steel production will rise and peak in the year 2020 at 860 million tons (Mt) per year for the base-case scenario and 680 Mt for the advanced energy-efficiency scenario. From 2020 on, production will gradually decrease to about 510 Mt and 400 Mt in 2050 for the base-case and advanced scenarios, respectively. Energy intensity will decrease from 21.2 gigajoules per ton (GJ/t) in 2010 to 12.2 GJ/t and 9.9 GJ/t in 2050 for the base-case and advanced scenarios, respectively. In the near term, decreases in iron and steel industry energy intensity will come from adoption of energy-efficient technologies. In the long term, a shift in the production structure of China's iron and steel industry (reducing the share of blast furnace/basic oxygen furnace production and increasing the share of electric-arc furnace production, while reducing the use of pig iron as a feedstock to electric-arc furnaces) will continue to reduce the sector's energy consumption. We discuss barriers to achieving these energy-efficiency gains and make policy recommendations to support improved energy efficiency and a shift in the nature of iron and steel production in China.
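The bottom-up accounting behind such scenarios multiplies production by a route-share-weighted energy intensity. A toy sketch using the abstract's headline numbers; the route shares and per-route intensities below are illustrative assumptions, not the study's calibrated values:

```python
def steel_sector_energy_pj(production_mt, route_shares, route_intensities_gj_per_t):
    """Bottom-up sector energy: production (Mt) times the route-share-weighted
    energy intensity (GJ/t). Unit check: 1 Mt x 1 GJ/t = 1 PJ."""
    assert abs(sum(route_shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    avg_intensity = sum(route_shares[r] * route_intensities_gj_per_t[r]
                        for r in route_shares)
    return production_mt * avg_intensity
```

Shifting share from the blast furnace/basic oxygen furnace (BF-BOF) route to the less intensive electric-arc furnace (EAF) route lowers the weighted intensity, which is exactly the long-term structural effect the scenarios model.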

  15. Comparing bottom-up and top-down parameterisations of a process-based runoff generation model tailored on floods

    Science.gov (United States)

    Antonetti, Manuel; Scherrer, Simon; Margreth, Michael; Zappa, Massimiliano

    2016-04-01

    Information about the spatial distribution of dominant runoff processes (DRPs) can improve flood predictions in ungauged basins, where conceptual rainfall-runoff models usually appear to be limited due to the need for calibration. For example, hydrological classifications based on DRPs can be used as regionalisation tools, assuming that, once a model structure and its parameters have been identified for each DRP, they can be transferred to other areas where the same DRP occurs. Here we present a process-based runoff generation model as an event-based spin-off of the conceptual hydrological model PREVAH. The model is grid-based and consists of a specific storage system for each DRP. To decouple the parameter values from catchment-specific characteristics, the runoff concentration and the flood routing are uncoupled from the runoff generation routine and simulated separately. For the model parameterisation, two contrasting approaches are applied. First, in a bottom-up approach, the parameters of the runoff generation routine are determined a priori based on the results of sprinkling experiments on 60-100 m2 hillslope plots at several grassland locations in Switzerland. The model is then applied to a small catchment (0.5 km2) on the Swiss Plateau, and the parameters linked to the runoff concentration are calibrated on a single heavy rainfall-runoff event. The whole system is finally verified on several nearby catchments of larger sizes (up to 430 km2) affected by different heavy rainfall events. In a second attempt, following a top-down approach, all the parameters are calibrated on the largest catchment under investigation and subsequently verified on three sub-catchments. Simulation results from both parameterisation techniques are finally compared with results obtained with the traditional PREVAH.

  16. Bottom-up engineering of biological systems through standard bricks: a modularity study on basic parts and devices.

    Directory of Open Access Journals (Sweden)

    Lorenzo Pasotti

    Full Text Available BACKGROUND: Modularity is a crucial issue in the engineering world, as it enables engineers to achieve predictable outcomes when different components are interconnected. Synthetic Biology aims to apply key concepts of engineering to the design and construction of new biological systems that exhibit predictable behaviour. Even if physical and measurement standards have been recently proposed to facilitate the assembly and characterization of biological components, real modularity is still a major research issue. The success of the bottom-up approach strictly depends on a clear definition of the limits within which biological functions are predictable. RESULTS: The modularity of transcription-based biological components was investigated under several conditions. First, the activity of a set of promoters was quantified in Escherichia coli via different measurement systems (i.e., different plasmids, reporter genes, ribosome binding sites) relative to an in vivo reference promoter. Second, promoter activity variation was measured when two independent gene expression cassettes were assembled in the same system. Third, the interchangeability of input modules (a set of constitutive promoters and two regulated promoters) connected to a fixed output device (a logic inverter expressing GFP) was evaluated. The three input modules provide tunable transcriptional signals that drive the output device. If modularity persists, identical transcriptional signals trigger identical GFP outputs. To verify this, all the input devices were individually characterized and then the input-output characteristic of the logic inverter was derived in the different configurations. CONCLUSIONS: Promoter activities (relative to a standard promoter) can vary when they are measured via different reporter devices (up to 22%), when they are used within a two-expression-cassette system (up to 35%) and when they drive another device in a functionally interconnected circuit (up to 44%). This paper
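The input-output characteristic of a transcriptional logic inverter of the kind used as the fixed output device is commonly modeled with a repression Hill function. A generic sketch (parameter values are illustrative, not the paper's measured curves):

```python
def inverter_output(input_signal, k=1.0, K=0.5, n=2.0):
    """Steady-state transfer function of a transcriptional logic inverter:
    output (e.g., GFP synthesis rate) is maximal (k) when the input
    transcriptional signal is low and is repressed as the input rises.
    K is the half-repression point and n the Hill coefficient."""
    return k / (1.0 + (input_signal / K) ** n)
```

If modularity held perfectly, a single such curve would describe the inverter for every input module; the measured deviations (up to 44%) mean the effective parameters shift with the interconnection context.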

  17. A Facile Bottom-Up Approach to Construct Hybrid Flexible Cathode Scaffold for High-Performance Lithium-Sulfur Batteries.

    Science.gov (United States)

    Ghosh, Arnab; Manjunatha, Revanasiddappa; Kumar, Rajat; Mitra, Sagar

    2016-12-14

    Lithium-sulfur batteries mostly suffer from low utilization of sulfur, poor cycle life, and low rate performance. The prime factors affecting performance are the enormous volume change of the electrode, the formation of soluble intermediate products, and the poor electronic and ionic conductivity of S and of the end discharge products (i.e., Li2S2 and Li2S). An attractive way to mitigate these challenges lies in the fabrication of a sulfur nanocomposite electrode consisting of different nanoparticles with distinct roles (lithium storage capability, mechanical reinforcement, and ionic as well as electronic conductivity), leading to a mechanically robust and mixed-conducting (ionically and electronically conductive) sulfur electrode. Herein, we report a novel bottom-up approach to synthesize a unique freestanding, flexible cathode scaffold made of porous reduced graphene oxide, nanosized sulfur, and Mn3O4 nanoparticles, all three-dimensionally interconnected by a hybrid polyaniline/sodium alginate (PANI-SA) matrix so that each component serves its individual purpose. A capacity of 1098 mA h g^-1 against lithium is achieved after 200 cycles at a current rate of 2 A g^-1, with 97.6% of the initial capacity retained at the same current rate, indicating the extreme stability and cycling performance of this electrode. Interestingly, at a higher current density of 5 A g^-1, the composite electrode exhibited an initial capacity of 1015 mA h g^-1 and retained 71% of the original capacity after 500 cycles. An in situ Raman study confirms the polysulfide absorption capability of Mn3O4. This work provides a new strategy to design mechanically robust, mixed-conducting nanocomposite electrodes for high-performance lithium-sulfur batteries, a strategy that can be used to develop flexible large power storage devices.

  18. Diffusion-driven multiscale analysis on manifolds and graphs: top-down and bottom-up constructions

    Science.gov (United States)

    Szlam, Arthur D.; Maggioni, Mauro; Coifman, Ronald R.; Bremer, James C., Jr.

    2005-08-01

    Classically, analysis on manifolds and graphs has been based on the study of the eigenfunctions of the Laplacian and its generalizations. These objects from differential geometry and analysis on manifolds have proven useful in applications to partial differential equations, and their discrete counterparts have been applied to optimization problems, learning, clustering, routing and many other algorithms [1-7]. The eigenfunctions of the Laplacian are in general global: their support often coincides with the whole manifold, and they are affected by global properties of the manifold (for example certain global topological invariants). Recently a framework for building natural multiresolution structures on manifolds and graphs was introduced that greatly generalizes, among other things, the construction of wavelets and wavelet packets in Euclidean spaces [8, 9]. This allows the study of the manifold and of functions on it at different scales, which are naturally induced by the geometry of the manifold. This construction proceeds bottom-up, from the finest scale to the coarsest scale, using powers of a diffusion operator as dilations and a numerical rank constraint to critically sample the multiresolution subspaces. In this paper we introduce a novel multiscale construction, based on a top-down recursive partitioning induced by the eigenfunctions of the Laplacian. This yields associated local cosine packets on manifolds, generalizing local cosines in Euclidean spaces [10]. We discuss some of the connections with the construction of diffusion wavelets. These constructions have direct applications to the approximation, denoising, compression and learning of functions on a manifold and are promising in view of applications to problems in manifold approximation, learning, and dimensionality reduction.
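The bottom-up construction's dilations, dyadic powers of a diffusion operator on a graph, can be sketched as follows. This is a minimal illustration of the idea only; the full diffusion-wavelet construction also orthogonalizes and critically samples the multiresolution subspaces:

```python
import numpy as np

def diffusion_operator(W):
    """Random-walk diffusion operator T = D^{-1} W built from a symmetric
    affinity/adjacency matrix W (D is the diagonal degree matrix)."""
    d = W.sum(axis=1)
    return W / d[:, None]

def smooth_at_scale(W, f, j):
    """Smooth a function f on the graph at dyadic scale j by applying
    T^(2^j): larger j averages f over larger neighbourhoods, playing the
    role of a dilation in the multiresolution analysis."""
    T = diffusion_operator(W)
    return np.linalg.matrix_power(T, 2 ** j) @ f
```

Because T is row-stochastic, constant functions are preserved at every scale, while localized functions spread out as j grows, which is the sense in which the powers of T probe the graph at coarser and coarser scales.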

  19. Regime shift from phytoplankton to macrophyte dominance in a large river: Top-down versus bottom-up effects

    Energy Technology Data Exchange (ETDEWEB)

    Ibanez, Carles, E-mail: carles.ibanez@irta.cat [IRTA Aquatic Ecosystems, Carretera Poble Nou, Km 5.5, 43540 St. Carles de la Rapita, Catalonia (Spain); Alcaraz, Carles; Caiola, Nuno; Rovira, Albert; Trobajo, Rosa [IRTA Aquatic Ecosystems, Carretera Poble Nou, Km 5.5, 43540 St. Carles de la Rapita, Catalonia (Spain); Alonso, Miguel [United Research Services S.L., Urgell 143, 08036 Barcelona, Catalonia (Spain); Duran, Concha [Confederacion Hidrografica del Ebro, Sagasta 24-26, 50071 Zaragoza, Aragon (Spain); Jimenez, Pere J. [Grup Natura Freixe, Major 56, 43750 Flix, Catalonia (Spain); Munne, Antoni [Agencia Catalana de l'Aigua, Provenca 204-208, 08036 Barcelona, Catalonia (Spain); Prat, Narcis [Departament d'Ecologia, Universitat de Barcelona, Diagonal 645, 08028 Barcelona, Catalonia (Spain)

    2012-02-01

    The lower Ebro River (Catalonia, Spain) has recently undergone a regime shift from a phytoplankton-dominated to a macrophyte-dominated system. This shift is well known in shallow lakes but apparently it has never been documented in rivers. Two initial hypotheses to explain the collapse of the phytoplankton were considered: a) the diminution of nutrients (bottom-up); b) the filtering effect due to the colonization of the zebra mussel (top-down). Data on water quality, hydrology and biological communities (phytoplankton, macrophytes and zebra mussel) were obtained both from existing data sets and new surveys. Results clearly indicate that the decrease in phosphorus is the main cause of a dramatic decrease in chlorophyll and a large increase in water transparency, triggering the subsequent colonization of macrophytes in the river bed. A Generalized Linear Model analysis showed that the decrease in dissolved phosphorus had a relative importance 14 times higher than the increase in zebra mussel density in explaining the variation of total chlorophyll. We suggest that the described changes in the lower Ebro River can be considered a novel ecosystem shift. This shift is triggering remarkable changes in the biological communities beyond the decrease of phytoplankton and the proliferation of macrophytes, such as the massive colonization of Simulidae (black fly) and other changes in the benthic invertebrate communities that are currently under investigation. - Highlights: ► We show a regime shift in a large river from phytoplankton to macrophyte dominance. ► Two main hypotheses are considered: nutrient decrease and zebra mussel grazing. ► Phosphorus depletion is found to be the main cause of the phytoplankton decline. ► We conclude that oligotrophication triggered the colonization of macrophytes. ► This new regime shift in a river is similar to that described
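The kind of relative-importance comparison reported above can be sketched with a toy regression. Everything below is illustrative: the data are simulated, and only the qualitative setup (chlorophyll modelled from standardized dissolved phosphorus and zebra mussel density) mirrors the abstract.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical standardized predictors: dissolved phosphorus and
# zebra-mussel density (simulated, not the Ebro measurements).
phosphorus = rng.normal(size=n)
mussels = rng.normal(size=n)

# Simulate chlorophyll driven mainly by phosphorus, as the study reports.
chlorophyll = 1.4 * phosphorus - 0.1 * mussels + rng.normal(scale=0.2, size=n)

# Ordinary least squares on standardized predictors; the ratio of
# absolute coefficients gives a crude relative-importance comparison.
X = np.column_stack([np.ones(n), phosphorus, mussels])
beta, *_ = np.linalg.lstsq(X, chlorophyll, rcond=None)
relative_importance = abs(beta[1]) / abs(beta[2])
```

With standardized predictors the coefficient ratio is directly comparable, which is the sense in which phosphorus can be "14 times" more important than mussel density.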

  20. Using the Hestia bottom-up FFCO2 emissions estimation to identify drivers and hotspots in urban areas

    Science.gov (United States)

    Rao, P.; Patarasuk, R.; Gurney, K. R.; o'Keefe, D.; Song, Y.; Huang, J.; Buchert, M.; Lin, J. C.; Mendoza, D. L.; Ehleringer, J. R.; Eldering, A.; Miller, C. E.; Duren, R. M.

    2015-12-01

    Urban areas occupy 3% of the Earth's land surface and generate 75% of fossil fuel carbon dioxide (FFCO2) emissions. We report on the application of the Hestia Project to the Salt Lake County (SLC) and Los Angeles (LA) domains. Hestia quantifies FFCO2 in fine space-time detail across urban domains using a scientific "bottom-up" approach. We explore the utility of Hestia to inform both urbanization science and greenhouse gas (GHG) mitigation policy. We focus on the residential sector in SLC and the onroad sector in LA, as these sectors are large emissions contributors in each locale and local governments have some authority and policy levers to mitigate these emissions. Multiple regression using sociodemographic data across SLC census block-groups shows that per capita income exhibits a positive relationship with FFCO2 emissions while household size exhibits a negative relationship, after controlling for total population. Housing units per area (i.e., compact development) has little effect on FFCO2 emissions. Rising income in the high-income group has twice as much impact on emissions as in the low-income group. Household size in the low-income group has four times the impact on emissions as in the high-income group. In LA, onroad FFCO2 emissions account for 49% of total emissions, of which 41% is from arterials (the intermediate road class). Arterials also have the largest carbon emissions intensity - FFCO2 per vehicle distance travelled (VKT) - possibly due to high traffic congestion and fleet composition. Non-interstate hotspot emissions (> 419 tC ln-km⁻¹) are equally dominated by particular arterials and collectors (the lowest road class), though collectors have a higher VKT. These hotspots occur largely in LA (67%) and Orange (18%) counties and provide targeted information for onroad emissions reduction. Using Hestia to identify FFCO2 emissions drivers and hotspots can aid state and local policy makers in planning the most effective GHG reductions.
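As a minimal illustration of the hotspot screening described above, the sketch below flags road segments exceeding the quoted threshold. The segment names and emission values are invented; only the 419 tC per ln-km cutoff comes from the abstract.

```python
# Threshold quoted in the abstract (tC per ln-km); segments are hypothetical.
HOTSPOT_THRESHOLD = 419.0

segments = {
    "arterial_A":   520.0,
    "arterial_B":   445.0,
    "collector_C":  430.0,
    "collector_D":  150.0,
    "interstate_E": 310.0,
}

# Flag every non-interstate-style segment above the threshold.
hotspots = {name: e for name, e in segments.items() if e > HOTSPOT_THRESHOLD}

# Carbon emissions intensity of a segment: FFCO2 divided by vehicle
# distance travelled (VKT), as used to compare road classes.
def intensity(ffco2_tc, vkt_km):
    return ffco2_tc / vkt_km
```

Screening like this is what turns a fine-grained bottom-up inventory into a short, targeted list of segments for mitigation policy.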

  1. Assessment of Historic Trend in Mobility and Energy Use in India Transportation Sector Using Bottom-up Approach

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Nan; McNeil, Michael A.

    2009-05-01

    Transportation mobility in India has increased significantly in the past decades. From 1970 to 2000, motorized mobility (passenger-km) rose by 888%, compared with 88% population growth (Singh, 2006). This has contributed to many energy and environmental issues, and an energy strategy incorporating efficiency improvements and other measures needs to be designed. Unfortunately, existing energy data do not provide information on the driving forces behind energy use and sometimes show large inconsistencies. Many previous studies address only a single transportation mode, such as passenger road travel; lack comprehensive data collection or analysis; or lack detail on energy demand by mode and fuel mix. The current study fills a considerable gap in current efforts by developing a database covering all transport modes, including passenger air and water travel as well as freight, in order to facilitate the development of energy scenarios and to assess the significance of technology potential in a global climate change model. An extensive literature review and data collection were carried out to establish the database, with breakdowns of mobility, intensity, distance, and fuel mix for all transportation modes. Energy consumption was estimated and compared with the aggregated transport consumption reported in the IEA India transportation energy data. Different scenarios were estimated based on different assumptions about freight road mobility. Based on the bottom-up analysis, we estimated that energy consumption from 1990 to 2000 increased at an annual growth rate of 7% for the mid-range road freight growth case and 12% for the high road freight growth case, corresponding to the mobility scenarios, while the IEA data show only a 1.7% growth rate in those years.
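The bottom-up accounting behind such estimates multiplies each mode's activity by its energy intensity and sums over modes. The sketch below uses invented mode activities and intensities, not the study's Indian data.

```python
# Bottom-up transport energy accounting: total energy is the sum over
# modes of activity (passenger-km) times intensity (MJ per passenger-km).
# All numbers below are invented for illustration.
modes = {
    "road":  {"pkm": 2.0e12, "mj_per_pkm": 1.2},
    "rail":  {"pkm": 4.0e11, "mj_per_pkm": 0.3},
    "air":   {"pkm": 5.0e10, "mj_per_pkm": 2.5},
    "water": {"pkm": 1.0e10, "mj_per_pkm": 0.5},
}

total_mj = sum(m["pkm"] * m["mj_per_pkm"] for m in modes.values())
total_pj = total_mj / 1.0e9  # petajoules

# Implied annual growth rate between two years, as used to compare
# scenario estimates against the IEA aggregate (e.g., 7% vs 1.7%).
def annual_growth(e_start, e_end, years):
    return (e_end / e_start) ** (1.0 / years) - 1.0
```

Freight is handled the same way with tonne-km in place of passenger-km, which is why assumptions about road freight mobility dominate the scenario spread.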

  2. The benefits of China's efforts on gaseous pollutant control indicated by the bottom-up emissions and satellite observation

    Science.gov (United States)

    Xia, Y.; Zhao, Y.

    2015-12-01

    To evaluate the effectiveness of national policies on air pollution control, the emissions of SO2, NOX, CO and CO2 in China are estimated with a bottom-up method from 2000 to 2014, and vertical column densities (VCD) from satellite observation are used to evaluate the inter-annual trends and spatial distribution of emissions and the temporal and spatial patterns of ambient levels of gaseous pollutants across the country. In particular, an additional emission case named the STD case, which incorporates the most recently issued emission standards for specific industrial sources, is developed for 2012-2014. The inter-annual trends in emissions and VCDs match well except for SO2, and the revised emissions in the STD case improve the comparison, implying the benefits of emission control in the most recent years. Satellite retrieval error, underestimation of emission reductions and enhanced atmospheric oxidization explain the differences between the SO2 emission and VCD trends. Coal-fired power plants play a key role in SO2 and NOX emission reduction. As suggested by the VCDs and the emission inventory, the control of CO in the 11th five-year plan (FYP) period was more effective than in the 12th FYP period, while the opposite holds for SO2. With NOX newly added as a control target in the 12th FYP, NOX emissions clearly decreased by 4.3 Mt from 2011 to 2014, in contrast to the fast growth before 2011. The inter-annual trend in NO2 VCDs has the poorest correlation with vehicle ownership (R=0.796), due to the staged emission standards for vehicles. In developed regions, transportation has become the main source of pollutant emissions, which we show by comparing the VCDs of NO2 to those of SO2. Moreover, air quality in mega cities has been evaluated based on satellite observations and emissions, and the results indicate that Beijing suffered heavily from emissions from Hebei and Tianjin, while local emissions tend to dominate in Shanghai.

  3. Motivation and drives in bottom-up developments in natural hazards management: multiple-use of adaptation strategies in Austria

    Science.gov (United States)

    Thaler, Thomas; Fuchs, Sven

    2015-04-01

    Losses from extreme hydrological events, such as those recently experienced in Europe, have focused the attention of policymakers as well as researchers on vulnerability to natural hazards. In parallel, the context of changing flood risks under climate and societal change is driving a transformation in the role of the state with respect to responsibility sharing and individual responsibilities for risk management and precaution. The new policy agenda enhances the responsibilities of local authorities and private citizens in hazard management and reduces the role of central governments. The objective is to place added responsibility on local organisations and citizens to determine locally-based strategies for risk reduction. A major challenge of modelling adaptation is to represent the complexity of coupled human-environmental systems, and particularly the feedback loops between environmental dynamics and human decision-making processes on different scales. This paper focuses on bottom-up initiatives in flood risk management which are, by definition, different from the mainstream. These initiatives are clearly influenced (positively or negatively) by a number of factors, where the combination of these interdependences can create specific conditions that alter the opportunity for effective governance arrangements in a local scheme approach. In total, this study identified six general drivers which encourage the implementation of flood storages, such as a direct relation to recent major flood frequency and history, the initiative of individual stakeholders (promoters), political pressure from outside (e.g. business companies, private households) and a strong solidarity attitude among the municipalities and stakeholders involved. Although a partnership approach may be seen as an 'optimal' solution for flood risk management, in practice there are many limitations and barriers to establishing these collaborations and making them effective (especially in the long term) with the consequences

  4. Trophic cascades of bottom-up and top-down forcing on nutrients and plankton in the Kattegat, evaluated by modelling

    DEFF Research Database (Denmark)

    Petersen, Marcell Elo; Maar, Marie; Larsen, Janus;

    2017-01-01

    The aim of the study was to investigate the relative importance of bottom-up and top-down forcing on trophic cascades in the pelagic food-web and the implications for water quality indicators (summer phytoplankton biomass and winter nutrients) in relation to management. The 3D ecological model ER...

  5. Applying bottom-up material flow analysis to identify the system boundaries of non-energy use data in international energy statistics

    NARCIS (Netherlands)

    Weiss, M.; Neelis, M.L.; Zuidberg, M.C.; Patel, M.K.

    2008-01-01

    Data on the non-energy use of fossil fuels in energy statistics are subject to major uncertainties. We apply a simple bottom-up methodology to recalculate non-energy use for the entire world and for the 50 countries with the highest consumption of fossil fuels for non-energy purposes. We quantify wo

  6. Research on ethics in two large Human Biomonitoring projects ECNIS and NewGeneris: a bottom up approach

    Directory of Open Access Journals (Sweden)

    Casteleyn Ludwine

    2008-01-01

    Full Text Available Abstract Assessment of ethical aspects and authorization by ethics committees have become a major constraint for health research including human subjects. Ethical reference values are often extrapolated from clinical settings, where the emphasis lies on decisional autonomy and protection of the individual's privacy. The question arises whether this set of values used in clinical research can be considered a relevant reference for HBM research, which is at the basis of public health surveillance. Current and future research activities using human biomarkers are facing new challenges and expectations on sensitive socio-ethical issues. Reflection is needed on the necessity to balance individual rights against the public interest. In addition, many HBM research programs require international collaboration. Domestic legislation is not always easily applicable in international projects. Also, there seem to be considerable inconsistencies in the ethical assessment of similar research activities between different countries and even within one country. All this causes delay and puts the researcher in situations in which it is unclear how to act in accordance with the necessary legal requirements. Therefore, an analysis of ethical practices and their consequences for HBM research is needed. This analysis will be performed by a bottom-up approach, based on a methodology for comparative analysis of determinants in ethical reasoning, allowing different social, cultural, political and historical traditions to be taken into account, in view of safeguarding common EU values. Based on information collected in real-life complexity, paradigm cases and virtual case scenarios will be developed and discussed with relevant stakeholders to openly explore possible obstacles and to identify options for improvement in regulation. The material collected will allow the development of an ethical framework which may constitute the basis for a more harmonized and consistent socio-ethical and legal approach

  7. Assisted editing of SensorML with EDI. A bottom-up scenario towards the definition of sensor profiles.

    Science.gov (United States)

    Oggioni, Alessandro; Tagliolato, Paolo; Fugazza, Cristiano; Bastianini, Mauro; Pavesi, Fabio; Pepe, Monica; Menegon, Stefano; Basoni, Anna; Carrara, Paola

    2015-04-01

    A by-product of this ongoing work is a growing archive of predefined sensor descriptions. This information is being collected in order to further ease metadata creation in the next phase of the project. Users will be able to choose among a number of sensor and sensor platform prototypes; these will be specific instances on which it will be possible to define, in a bottom-up approach, "sensor profiles". We report on the outcome of this activity.

  8. Assessing the construct validity of aberrant salience.

    Science.gov (United States)

    Schmidt, Kristin; Roiser, Jonathan P

    2009-01-01

    We sought to validate the psychometric properties of a recently developed paradigm that aims to measure salience attribution processes proposed to contribute to positive psychotic symptoms, the Salience Attribution Test (SAT). The "aberrant salience" measure from the SAT showed good face validity in previous results, with elevated scores both in high-schizotypy individuals, and in patients with schizophrenia suffering from delusions. Exploring the construct validity of salience attribution variables derived from the SAT is important, since other factors, including latent inhibition/learned irrelevance (LIrr), attention, probabilistic reward learning, sensitivity to probability, general cognitive ability and working memory could influence these measures. Fifty healthy participants completed schizotypy scales, the SAT, a LIrr task, and a number of other cognitive tasks tapping into potentially confounding processes. Behavioural measures of interest from each task were entered into a principal components analysis, which yielded a five-factor structure accounting for approximately 75% of the variance in behaviour. Implicit aberrant salience was found to load onto its own factor, which was associated with elevated "Introvertive Anhedonia" schizotypy, replicating our previous finding. LIrr loaded onto a separate factor, which also included implicit adaptive salience, but was not associated with schizotypy. Explicit adaptive and aberrant salience, along with a measure of probabilistic learning, loaded onto a further factor, though this also did not correlate with schizotypy. These results suggest that the measures of LIrr and implicit adaptive salience might be based on similar underlying processes, which are dissociable both from implicit aberrant salience and explicit measures of salience.
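The principal components step described above can be sketched as follows. The data here are random placeholders (50 "participants", 8 behavioural measures), not the SAT measures; the sketch only shows the mechanics of extracting components that account for roughly 75% of the variance.

```python
import numpy as np

rng = np.random.default_rng(1)
n_subj, n_measures = 50, 8

# Placeholder behavioural measures (columns) for 50 participants.
X = rng.normal(size=(n_subj, n_measures))

# Standardize each measure, then extract principal components via SVD.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
U, s, Vt = np.linalg.svd(Z, full_matrices=False)
explained = s**2 / np.sum(s**2)  # variance explained per component

# Keep enough components to account for ~75% of the variance,
# analogous to the five-factor solution reported in the abstract.
k = int(np.searchsorted(np.cumsum(explained), 0.75)) + 1
loadings = Vt[:k]  # how each measure loads on the retained factors
```

Inspecting `loadings` row by row is the analogue of asking which measures (e.g., implicit aberrant salience vs. latent inhibition) load on the same factor.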

  9. Novelty seeking, incentive salience and acquisition of cocaine self-administration in the rat.

    Science.gov (United States)

    Beckmann, Joshua S; Marusich, Julie A; Gipson, Cassandra D; Bardo, Michael T

    2011-01-01

    It has been suggested that incentive salience plays a major role in drug abuse and the development of addiction. Additionally, novelty seeking has been identified as a significant risk factor for drug abuse. However, how differences in the readiness to attribute incentive salience relate to novelty seeking and drug abuse vulnerability has not been explored. The present experiments examined how individual differences in incentive salience attribution relate to novelty seeking and acquisition of cocaine self-administration in a preclinical model. Rats were first assessed in an inescapable novelty task and a novelty place preference task (measures of novelty seeking), followed by a Pavlovian conditioned approach task for food (a measure of incentive salience attribution). Rats then were trained to self-administer cocaine (0.3 or 1.0 mg/kg/infusion) using an autoshaping procedure. The results demonstrate that animals that attributed incentive salience to a food-associated cue were higher novelty seekers and acquired cocaine self-administration more quickly at the lower dose. The results suggest that novelty-seeking behavior may be a mediator of incentive salience attribution and that incentive salience magnitude may be an indicator of drug reward.

  10. Salience-Affected Neural Networks

    CERN Document Server

    Remmelzwaal, Leendert A; Ellis, George F R

    2010-01-01

    We present a simple neural network model which combines a locally-connected feedforward structure, as is traditionally used to model inter-neuron connectivity, with a layer of undifferentiated connections which model the diffuse projections from the human limbic system to the cortex. This new layer makes it possible to model global effects such as salience, at the same time as the local network processes task-specific or local information. This simple combination network displays interactions between salience and regular processing which correspond to known effects in the developing brain, such as enhanced learning as a result of heightened affect. The cortex biases neuronal responses to affect both learning and memory, through the use of diffuse projections from the limbic system to the cortex. Standard ANNs do not model this non-local flow of information represented by the ascending systems, which are a significant feature of the structure of the brain, and although they do allow associational learning with...
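A minimal formalization of this idea (our sketch, not the authors' exact equations): a single feedforward layer whose Hebbian weight update is multiplicatively gated by a global scalar "salience" signal, mimicking diffuse limbic projections so that heightened affect strengthens learning.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 8, 4
W = rng.normal(scale=0.1, size=(n_out, n_in))

def step(W, x, salience, lr=0.01):
    """One Hebbian update, globally gated by a scalar salience signal."""
    y = np.tanh(W @ x)
    # Salience multiplies the whole update: a non-local, undifferentiated
    # effect, in contrast to the locally-connected feedforward weights.
    return W + lr * salience * np.outer(y, x), y

x = rng.normal(size=n_in)
W_low, _ = step(W, x, salience=0.1)   # low-affect presentation
W_high, _ = step(W, x, salience=2.0)  # high-affect presentation

change_low = np.abs(W_low - W).sum()
change_high = np.abs(W_high - W).sum()
```

Because the gate is multiplicative, the same stimulus produces a proportionally larger weight change under high salience, which is the "enhanced learning under heightened affect" effect the abstract describes.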

  11. Top-Down and Bottom-Up Approaches in Engineering 1T Phase Molybdenum Disulfide (MoS2): Towards Highly Catalytically Active Materials.

    Science.gov (United States)

    Chua, Chun Kiang; Loo, Adeline Huiling; Pumera, Martin

    2016-09-26

    The metallic 1T phase of MoS2 has been widely identified as responsible for the improved performance of MoS2 in applications including hydrogen evolution reactions and electrochemical supercapacitors. To this end, various synthetic methods have been reported to obtain 1T phase-rich MoS2. Here, the aim is to evaluate the efficiencies of the bottom-up (hydrothermal reaction) and top-down (chemical exfoliation) approaches in producing 1T phase MoS2. It is established in this study that the 1T phase MoS2 produced through the bottom-up approach contains a high proportion of the 1T phase and demonstrates excellent electrochemical and electrical properties. Its performance in the hydrogen evolution reaction and in electrochemical supercapacitors also surpassed that of 1T phase MoS2 produced through the top-down approach.

  12. The Study of Randomized Visual Saliency Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yuantao Chen

    2013-01-01

    Full Text Available Image segmentation that relies on a visual saliency map depends strongly on the quality of the existing visual saliency metrics: most yield only a sketchy saliency map, and a coarse map degrades the image segmentation results. This paper presents a randomized visual saliency detection algorithm. The randomized method quickly generates a detailed saliency map of the same size as the original input image, and can meet the real-time requirements of content-based image scaling. The randomization also enables fast detection of salient regions in video. The algorithm requires only a small amount of memory to produce a detailed visual saliency map, and the presented results show that using this saliency map in the subsequent segmentation process yields good segmentation results.
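For contrast, a well-known simple saliency detector that also produces a full-resolution map is the spectral-residual approach of Hou and Zhang; this is a baseline sketch, not the paper's randomized algorithm.

```python
import numpy as np

def spectral_residual_saliency(img):
    """Spectral-residual saliency (after Hou & Zhang, 2007), shown as a
    simple baseline, not the randomized algorithm of the abstract."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # Smooth the log-amplitude spectrum with a 3x3 box filter (edge pad).
    pad = np.pad(log_amp, 1, mode="edge")
    h, w = log_amp.shape
    smooth = sum(pad[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0
    # The spectral residual keeps what deviates from the smooth spectrum.
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

# A dark image with one bright square: the square region stands out.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
sal = spectral_residual_saliency(img)
```

The output is the same size as the input, which is the property the abstract emphasizes for content-based image scaling.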

  13. Top-down and bottom-up lipidomic analysis of rabbit lipoproteins under different metabolic conditions using flow field-flow fractionation, nanoflow liquid chromatography and mass spectrometry.

    Science.gov (United States)

    Byeon, Seul Kee; Kim, Jin Yong; Lee, Ju Yong; Chung, Bong Chul; Seo, Hong Seog; Moon, Myeong Hee

    2015-07-31

    This study demonstrated the performances of top-down and bottom-up approaches in lipidomic analysis of lipoproteins from rabbits raised under different metabolic conditions: healthy controls, carrageenan-induced inflammation, dehydration, high cholesterol (HC) diet, and highest cholesterol diet with inflammation (HCI). In the bottom-up approach, the high density lipoproteins (HDL) and the low density lipoproteins (LDL) were size-sorted and collected on a semi-preparative scale using a multiplexed hollow fiber flow field-flow fractionation (MxHF5), followed by nanoflow liquid chromatography-ESI-MS/MS (nLC-ESI-MS/MS) analysis of the lipids extracted from each lipoprotein fraction. In the top-down method, size-fractionated lipoproteins were directly infused to MS for quantitative analysis of targeted lipids using chip-type asymmetrical flow field-flow fractionation-electrospray ionization-tandem mass spectrometry (cAF4-ESI-MS/MS) in selected reaction monitoring (SRM) mode. The comprehensive bottom-up analysis yielded 122 and 104 lipids from HDL and LDL, respectively. Rabbits within the HC and HCI groups had lipid patterns that contrasted most substantially from those of controls, suggesting that HC diet significantly alters the lipid composition of lipoproteins. Among the identified lipids, 20 lipid species that exhibited large differences (>10-fold) were selected as targets for the top-down quantitative analysis in order to compare the results with those from the bottom-up method. Statistical comparison of the results from the two methods revealed that the results were not significantly different for most of the selected species, except for those species with only small differences in concentration between groups. The current study demonstrated that top-down lipid analysis using cAF4-ESI-MS/MS is a powerful high-speed analytical platform for targeted lipidomic analysis that does not require the extraction of lipids from blood samples.

  14. Modeling Technical Change in Energy System Analysis: Analyzing the Introduction of Learning-by-Doing in Bottom-up Energy Models

    Energy Technology Data Exchange (ETDEWEB)

    Berglund, Christer; Soederholm, Patrik [Luleaa Univ. of Technology (Sweden). Div. of Economics

    2005-02-01

    The main objective of this paper is to provide an overview and a critical analysis of the recent literature on incorporating induced technical change in energy systems models. Special emphasis is put on surveying recent studies aiming at integrating learning-by-doing into bottom-up energy systems models through so-called learning curves, and on analyzing the relevance of learning curve analysis for understanding the process of innovation and technology diffusion in the energy sector. The survey indicates that this model work represents a major advance in energy research, and carries important policy implications, not least concerning the cost and the timing of environmental policies (including carbon emission constraints). However, bottom-up energy models with endogenous learning are also limited in their characterization of technology diffusion and innovation. While they provide a detailed account of technical options - which is absent in many top-down models - they also lack important aspects of diffusion behavior that are captured in top-down representations. For instance, they fail to capture strategic technology diffusion behavior in the energy sector, and they neglect important general equilibrium impacts (such as the opportunity cost of redirecting R&D support to the energy sector). For these reasons bottom-up and top-down models with induced technical change should not be viewed as substitutes but rather as complements.
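The learning curves referred to above are usually specified as a power law in cumulative capacity. The sketch below uses an assumed 20% learning rate; the numbers are illustrative, not from the surveyed models.

```python
# One-factor learning curve as used in bottom-up models with endogenous
# learning-by-doing: unit cost falls with cumulative capacity,
#     C(x) = C0 * (x / x0) ** (-b)
# The progress ratio is PR = 2 ** (-b) and the learning rate LR = 1 - PR,
# i.e. the fractional cost reduction per doubling of cumulative capacity.
B = 0.321928  # log2(1 / 0.8), corresponding to an assumed 20% learning rate

def unit_cost(x, c0=1000.0, x0=1.0, b=B):
    return c0 * (x / x0) ** (-b)

progress_ratio = 2.0 ** (-B)
learning_rate = 1.0 - progress_ratio
```

Embedding `unit_cost` in an optimization model makes technology costs endogenous: deploying a technology earlier buys cost reductions later, which is exactly what shapes the timing results for carbon constraints.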

  15. Tax Salience, Voting, and Deliberation

    DEFF Research Database (Denmark)

    Sausgruber, Rupert; Tyran, Jean-Robert

    Tax incentives can be more or less salient, i.e. noticeable or cognitively easy to process. Our hypothesis is that taxes on consumers are more salient to consumers than equivalent taxes on sellers because consumers underestimate the extent of tax shifting in the market. We show that tax salience biases consumers' voting on tax regimes, and that experience is an effective de-biasing mechanism in the experimental laboratory. Pre-vote deliberation makes initially held opinions more extreme rather than correct and does not eliminate the bias in the typical committee. Yet, if voters can discuss their experience with the tax regimes they are less likely to be biased.

  16. Tax Salience, Voting, and Deliberation

    DEFF Research Database (Denmark)

    Sausgruber, Rupert; Tyran, Jean-Robert

    Tax incentives can be more or less salient, i.e. noticeable or cognitively easy to process. Our hypothesis is that taxes on consumers are more salient to consumers than equivalent taxes on sellers because consumers underestimate the extent of tax shifting in the market. We show that tax salience biases consumers' voting on tax regimes, and that experience is an effective de-biasing mechanism in the experimental laboratory. Pre-vote deliberation makes initially held opinions more extreme rather than correct and does not eliminate the bias in the typical committee. Yet, if voters can discuss their experience with the tax regimes they are less likely to be biased.

  17. Salience of Alcohol Expectancies and Drinking Outcomes.

    Science.gov (United States)

    Reese, Finetta L.

    1997-01-01

    Investigated whether the prediction of drinking might be enhanced by considering salience of alcohol expectancies rather than mere endorsement. Hierarchical regression analyses demonstrated that expectancy salience significantly improved the prediction of total alcohol consumption above and beyond the effects of expectancy endorsement. Expectancy…

  18. Regional principal color based saliency detection.

    Directory of Open Access Journals (Sweden)

    Jing Lou

    Full Text Available Saliency detection is widely used in many visual applications like image segmentation, object recognition and classification. In this paper, we will introduce a new method to detect salient objects in natural images. The approach is based on a regional principal color contrast model, which incorporates low-level and medium-level visual cues. The method combines a simple computation of color features with two categories of spatial relationships into a saliency map, achieving higher F-measure rates. At the same time, we present an interpolation approach to evaluate the resulting curves, and analyze parameter selection. Our method enables the effective computation of saliency for images of arbitrary resolution. Experimental results on a saliency database show that our approach produces high quality saliency maps and performs favorably against ten saliency detection algorithms.
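A toy version of regional colour contrast, in the spirit of (but much simpler than) the model described: each region's saliency is its colour distance to the other regions, weighted by their size and spatial proximity. Region names, colours, and positions are invented.

```python
import numpy as np

# Hypothetical pre-segmented regions: mean Lab colour, normalized
# centroid position, and pixel count (all values invented).
regions = {
    "object":     {"color": np.array([60.0, 40.0, 30.0]), "pos": np.array([0.5, 0.5]), "size": 400},
    "background": {"color": np.array([55.0, 2.0, 2.0]),   "pos": np.array([0.3, 0.4]), "size": 3000},
    "sky":        {"color": np.array([70.0, 0.0, -10.0]), "pos": np.array([0.5, 0.1]), "size": 2000},
}

def region_saliency(name, regions, sigma=0.4):
    """Size- and proximity-weighted colour contrast against other regions."""
    r = regions[name]
    s = 0.0
    for other, o in regions.items():
        if other == name:
            continue
        spatial = np.exp(-np.linalg.norm(r["pos"] - o["pos"]) ** 2 / sigma ** 2)
        s += o["size"] * spatial * np.linalg.norm(r["color"] - o["color"])
    return s

sal = {name: region_saliency(name, regions) for name in regions}
```

A small, strongly coloured region surrounded by large, dull regions scores highest, which is the basic intuition behind regional colour-contrast saliency.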

  19. Age-Related Inter-region EEG Coupling Changes during the Control of Bottom-up and Top-down Attention

    Directory of Open Access Journals (Sweden)

    Ling eLi

    2015-12-01

    Full Text Available We investigated age-related changes in electroencephalographic (EEG) coupling of theta-, alpha-, and beta-frequency bands during bottom-up and top-down attention. Arrays eliciting either automatic pop-out (bottom-up) or effortful search (top-down) behavior were presented to younger and older participants. The phase-locking value (PLV) was used to estimate coupling strength between scalp recordings. Behavioral performance decreased with age, with a greater age-related decline in accuracy for the search than for the pop-out condition. Aging was associated with a decline in coupling strength in the theta and alpha frequency bands, with a greater age-related decline in whole-brain coupling values for the search than for the pop-out condition. Specifically, prefronto-frontal coupling in the theta- and alpha-bands and fronto-parietal and parieto-occipital couplings in the beta-band showed a right hemispheric dominance in the younger group, which was reduced with aging to compensate for inhibitory dysfunction. Pop-out target detection was mainly associated with greater parieto-occipital beta-coupling strength compared to the search condition, regardless of age. Furthermore, prefronto-frontal coupling in the theta-, alpha- and beta-bands, and parieto-occipital coupling in the beta-band, functioned as predictors of behavior for both groups. Taken together these findings provide evidence that prefronto-frontal coupling of theta-, alpha-, and beta-bands may serve as a possible basis of aging effects during visual attention, while parieto-occipital coupling in the beta-band could serve a bottom-up function and be vulnerable to top-down attention control in younger and older groups.
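The phase-locking value used in the study is the magnitude of the trial-averaged phase-difference vector: PLV = |⟨exp(i(φ1 - φ2))⟩|, near 1 for a consistent phase relation and near 0 for independent phases. A minimal sketch with synthetic phases (not EEG data):

```python
import numpy as np

rng = np.random.default_rng(3)
n_trials = 200

def plv(phi1, phi2):
    """Phase-locking value: |mean over trials of exp(i*(phi1 - phi2))|."""
    return np.abs(np.mean(np.exp(1j * (phi1 - phi2))))

# Coupled channels: a constant phase lag plus small jitter across trials.
phi1 = rng.uniform(0, 2 * np.pi, n_trials)
phi2_coupled = phi1 - 0.5 + rng.normal(scale=0.1, size=n_trials)

# Uncoupled channels: independent random phases.
phi2_random = rng.uniform(0, 2 * np.pi, n_trials)

plv_coupled = plv(phi1, phi2_coupled)
plv_random = plv(phi1, phi2_random)
```

In practice the instantaneous phases φ1, φ2 come from band-pass filtering each channel and applying the Hilbert transform before computing the PLV per electrode pair.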

  20. Exploring the Life Expectancy Increase in Poland in the Context of CVD Mortality Fall: The Risk Assessment Bottom-Up Approach, From Health Outcome to Policies.

    Science.gov (United States)

    Kobza, Joanna; Geremek, Mariusz

    2015-01-01

    Life expectancy at birth is considered the best mortality-based summary indicator of the health status of the population and is useful for measuring long-term health changes. The objective of this article was to present the concept of the bottom-up policy risk assessment approach, developed to identify challenges involved in analyzing risk factor reduction policies and in assessing how the related health indicators have changed over time. This article focuses on the reasons for the significant life expectancy prolongation in Poland over the past 2 decades, and thus includes the policy context. The methodology details a bottom-up risk assessment approach - a chain of relations between the health outcome, risk factors, and health policy - based on the guidance of the Risk Assessment From Policy to Impact Dimension project. A decline in cardiovascular disease mortality was the key factor behind the life expectancy prolongation. Among the underlying factors, tobacco and alcohol consumption, diet, physical activity, and new treatment technologies were identified. The poor health outcomes of the Polish population at the beginning of the 1990s highlighted the need for the implementation of various health promotion programs, legal acts, and more effective public health policies. Evidence-based public health policy requires translating scientific research into policy and practice. The bottom-up case study template can be one of the focal tools in this process. Accountability for the health impact of policies and programs and the legitimization of policy makers' decisions have become key questions in European countries' decision-making processes and in EU public health strategy.

  1. Towards nano-organic chemistry: perspectives for a bottom-up approach to the synthesis of low-dimensional carbon nanostructures

    Science.gov (United States)

    Mercuri, Francesco; Baldoni, Matteo; Sgamellotti, Antonio

    2012-01-01

    Low-dimensional carbon nanostructures, such as nanotubes and graphenes, represent one of the most promising classes of materials in view of their potential use in nanotechnology. However, their exploitation in applications is often hindered by difficulties in their synthesis and purification. Despite huge efforts by the research community, the production of nanostructured carbon materials with controlled properties is still beyond reach. Nonetheless, this step is now mandatory for significant progress in the realization of advanced applications and devices based on low-dimensional carbon nanostructures. Although promising alternative routes for the fabrication of nanostructured carbon materials have recently been proposed, a comprehensive understanding of the key factors governing the bottom-up assembly of simple precursors into complex systems with tailored properties is still at an early stage. In this paper, following a survey of recent experimental efforts in the bottom-up synthesis of carbon nanostructures, we attempt to clarify generalized criteria for the design of suitable precursors that can be used as building blocks in the production of complex systems based on sp2 carbon atoms, and we discuss potential synthetic strategies. In particular, the approaches presented in this feature article are based on the application of concepts borrowed from traditional organic chemistry, such as valence-bond theory and Clar sextet theory, and on their extension to the case of complex carbon nanomaterials. We also present and discuss a validation of these approaches through first-principles calculations on prototypical systems. Detailed studies of the processes involved in the bottom-up fabrication of low-dimensional carbon nanostructures are expected to pave the way for the design and optimization of precursors and efficient synthetic routes, thus allowing the development of novel materials with controlled morphology and properties that can be used in

  2. Closing the gap? Top-down versus bottom-up projections of China's regional energy use and CO2 emissions

    DEFF Research Database (Denmark)

    Dai, Hancheng; Mischke, Peggy; Xie, Xuxuan;

    2016-01-01

    As the world's largest CO2 emitter, China is a prominent case study for scenario analysis. This study uses two newly developed global top-down and bottom-up models with a regional China focus to compare China's future energy and CO2 emission pathways toward 2050. By harmonizing the economic and demographic trends as well as a carbon tax pathway, we explore how both models respond to these identical exogenous inputs. Then a soft-linking methodology is applied to "narrow the gap" between the results computed by these models. We find, for example, that without soft-linking, China's baseline CO2 emissions...

  3. Application and comparison of the Top Down and Bottom Up design methodologies

    OpenAIRE

    Restrepo Muñóz, Verónica Pauline

    2009-01-01

    This project studies and compares the Bottom Up and Top Down methodologies used in product development within a manufacturing department in a collaborative environment. A product was developed using both methodologies, and their impact on the behavior of management indicators that measure an organization's performance was then analyzed. Also highlighted are the benefits of Top Down in manufactu...

  4. Assessing the construct validity of aberrant salience

    Directory of Open Access Journals (Sweden)

    Kristin Schmidt

    2009-12-01

    Full Text Available We sought to validate the psychometric properties of a recently developed paradigm that aims to measure the salience attribution processes proposed to contribute to positive psychotic symptoms, the Salience Attribution Test (SAT). The "aberrant salience" measure from the SAT showed good face validity in previous work, with elevated scores both in high-schizotypy individuals and in patients with schizophrenia suffering from delusions. Exploring the construct validity of salience attribution variables derived from the SAT is important, since other factors, including latent inhibition/learned irrelevance, attention, probabilistic reward learning, sensitivity to probability, general cognitive ability and working memory, could influence these measures. Fifty healthy participants completed schizotypy scales, the SAT, a learned irrelevance task, and a number of other cognitive tasks tapping into potentially confounding processes. Behavioural measures of interest from each task were entered into a principal components analysis, which yielded a five-factor structure accounting for approximately 75% of the variance in behaviour. Implicit aberrant salience was found to load onto its own factor, which was associated with elevated "Introvertive Anhedonia" schizotypy, replicating our previous finding. Learned irrelevance loaded onto a separate factor, which also included implicit adaptive salience but was not associated with schizotypy. Explicit adaptive and aberrant salience, along with a measure of probabilistic learning, loaded onto a further factor, though this also did not correlate with schizotypy. These results suggest that the measures of learned irrelevance and implicit adaptive salience might be based on similar underlying processes, which are dissociable both from implicit aberrant salience and from explicit measures of salience.
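The analysis pipeline described (standardizing behavioural measures, then extracting principal components until roughly 75% of the variance is explained) can be sketched in a few lines of NumPy. This is a generic PCA illustration under those assumptions, not the authors' code:

```python
import numpy as np

def pca_variance_explained(data):
    """Rows = participants, columns = behavioural measures.

    Returns the fraction of total variance explained by each principal
    component, sorted in descending order.
    """
    z = (data - data.mean(axis=0)) / data.std(axis=0)  # z-score each measure
    eigvals = np.linalg.eigvalsh(np.cov(z, rowvar=False))[::-1]  # descending
    return eigvals / eigvals.sum()

def components_for(ratios, target=0.75):
    """Smallest number of leading components explaining >= target variance."""
    return int(np.searchsorted(np.cumsum(ratios), target) + 1)
```

With `target=0.75` this reproduces the "keep components until ~75% of variance" criterion mentioned in the abstract.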

  5. Latitudinal variation in top-down and bottom-up control of a salt marsh food web.

    Science.gov (United States)

    Marczak, L B; Ho, C K; Wieski, K; Vu, H; Denno, R F; Pennings, S C

    2011-02-01

    The shrub Iva frutescens, which occupies the terrestrial border of U.S. Atlantic Coast salt marshes, supports a food web that varies strongly across latitude. We tested whether latitudinal variation in plant quality (higher at high latitudes), consumption by omnivores (a crab, present only at low latitudes), consumption by mesopredators (ladybugs, present at all latitudes), or the life history stage of an herbivorous beetle could explain continental-scale field patterns of herbivore density. In a mesocosm experiment, crabs exerted strong top-down control on herbivorous beetles, ladybugs exerted strong top-down control on aphids, and both predators benefited plants through trophic cascades. Latitude of plant origin had no effect on consumers. Herbivorous beetle density was greater if mesocosms were stocked with beetle adults rather than larvae, and aphid densities were reduced in the "adult beetle" treatment. Treatment combinations representing high and low latitudes produced patterns of herbivore density similar to those in the field. We conclude that latitudinal variation in plant quality is less important than latitudinal variation in top consumers and competition in mediating food web structure. Climate may also play a strong role in structuring high-latitude salt marshes by limiting the number of herbivore generations per growing season and causing high overwintering mortality.

  6. Comparing translational population-PBPK modelling of brain microdialysis with bottom-up prediction of brain-to-plasma distribution in rat and human.

    Science.gov (United States)

    Ball, Kathryn; Bouzom, François; Scherrmann, Jean-Michel; Walther, Bernard; Declèves, Xavier

    2014-11-01

    The prediction of brain extracellular fluid (ECF) concentrations in human is a potentially valuable asset during drug development as it can provide the pharmacokinetic input for pharmacokinetic-pharmacodynamic models. This study aimed to compare two translational modelling approaches that can be applied at the preclinical stage of development in order to simulate human brain ECF concentrations. A population-PBPK model of the central nervous system was developed based on brain microdialysis data, and the model parameters were translated to their corresponding human values to simulate ECF and brain tissue concentration profiles. In parallel, the PBPK modelling software Simcyp was used to simulate human brain tissue concentrations, via the bottom-up prediction of brain tissue distribution using two different sets of mechanistic tissue composition-based equations. The population-PBPK and bottom-up approaches gave similar predictions of total brain concentrations in both rat and human, while only the population-PBPK model was capable of accurately simulating the rat ECF concentrations. The choice of PBPK model must therefore depend on the purpose of the modelling exercise, the in vitro and in vivo data available and knowledge of the mechanisms governing the membrane permeability and distribution of the drug.

  7. Chitosan microspheres with an extracellular matrix-mimicking nanofibrous structure as cell-carrier building blocks for bottom-up cartilage tissue engineering.

    Science.gov (United States)

    Zhou, Yong; Gao, Huai-Ling; Shen, Li-Li; Pan, Zhao; Mao, Li-Bo; Wu, Tao; He, Jia-Cai; Zou, Duo-Hong; Zhang, Zhi-Yuan; Yu, Shu-Hong

    2016-01-07

    Scaffolds for tissue engineering (TE) which closely mimic the physicochemical properties of the natural extracellular matrix (ECM) have been proven to favor cell attachment, proliferation, migration and new tissue formation. Recently, as a valuable alternative, a bottom-up TE approach utilizing cell-loaded micrometer-scale modular components as building blocks to reconstruct a new tissue in vitro or in vivo has been shown to offer a number of desirable advantages over the traditional bulk-scaffold-based top-down TE approach. Nevertheless, micro-components with an ECM-mimicking nanofibrous structure are still very scarce and highly desirable. Chitosan (CS), an accessible natural polymer, has demonstrated appealing intrinsic properties and promising application potential for TE, especially cartilage tissue regeneration. Against this background, we report here the fabrication of chitosan microspheres with an ECM-mimicking nanofibrous structure for the first time, based on a physical gelation process. By combining this physical fabrication procedure with microfluidic technology, uniform CS microspheres (CMS) with controlled nanofibrous microstructure and tunable sizes can be facilely obtained. Notably, no potentially toxic or denaturing chemical crosslinking agent was introduced into the products. In vitro chondrocyte culture tests revealed enhanced cell attachment and proliferation, and a macroscopic 3D geometrically shaped cartilage-like composite could easily be constructed from the nanofibrous CMS (NCMS) and chondrocytes, demonstrating the significant application potential of NCMS as bottom-up cell-carrier components for cartilage tissue engineering.

  8. A Bottom-Up Building Stock Model for Tracking Regional Energy Targets—A Case Study of Kočevje

    Directory of Open Access Journals (Sweden)

    Marjana Šijanec Zavrl

    2016-10-01

    Full Text Available The paper addresses the development of a bottom-up building stock energy model (BuilS) for identifying the renovation potential of the building stock by considering the energy performance of individual buildings through cross-linked data from various publicly available databases. The model enables the integration of various energy efficiency (EE) and renewable energy source (RES) measures across the building stock to demonstrate the long-term economic and environmental effects of different refurbishment strategies. In the presented case study, the BuilS model was applied to the Kočevje city area and validated using the measured energy consumption of the buildings connected to the city district heating system. Three strategies for improving the building stock in Kočevje towards a more sustainable one are presented, with their impact on energy use and CO2 emission projections up to 2030. It is demonstrated that the BuilS bottom-up model enables setting a correct baseline for the energy use of the existing building stock and that such a model is a powerful tool for the design and validation of building stock renovation strategies. It is also shown that the accuracy of the model depends on the available information on local resources and local needs; monitoring the building stock at the level of each building and continually upgrading the databases with building renovation information are therefore of the utmost importance.
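The core bottom-up idea, aggregating the stock total from individual buildings and re-evaluating it after renovation measures, can be reduced to a toy sketch. The real BuilS model links many databases and far more building attributes; the class fields and saving fractions below are invented purely for illustration:

```python
from dataclasses import dataclass

@dataclass
class Building:
    floor_area_m2: float
    heat_demand_kwh_per_m2: float  # specific annual heating demand

def stock_energy_kwh(stock):
    """Bottom-up aggregation: the stock total is the sum over buildings."""
    return sum(b.floor_area_m2 * b.heat_demand_kwh_per_m2 for b in stock)

def renovate(b, saving_fraction):
    """Apply an efficiency measure cutting specific demand by a fraction."""
    return Building(b.floor_area_m2,
                    b.heat_demand_kwh_per_m2 * (1.0 - saving_fraction))
```

Renovating one building immediately updates the stock-level baseline, which is what makes a bottom-up model suitable for validating renovation strategies against measured consumption.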

  9. Diagnostic study, design and implementation of an integrated model of care in France: a bottom-up process with continuous leadership

    Directory of Open Access Journals (Sweden)

    Matthieu de Stampa

    2010-02-01

    Full Text Available Background: Sustaining integrated care is difficult, in large part because of problems encountered securing the participation of health care and social service professionals and, in particular, general practitioners (GPs). Purpose: To present an innovative bottom-up and pragmatic strategy used to implement a new integrated care model in France for community-dwelling elderly people with complex needs. Results: In the first step, a diagnostic study was conducted with face-to-face interviews to gather data on current practices from a sample of health and social stakeholders working with elderly people. In the second step, an integrated care model called Coordination Personnes Agées (COPA) was designed by the same major stakeholders in order to define its detailed characteristics based on the local context. In the third step, the model was implemented in two phases: adoption and maintenance. This strategy was carried out through continuous and flexible leadership throughout the process, initially a mixed leadership (clinician and researcher) followed by a double one (clinician and managers of services) in the implementation phase. Conclusion: The implementation of this bottom-up and pragmatic strategy relied on establishing a collaborative dynamic among health and social stakeholders. This enhanced their involvement throughout the implementation phase, particularly among the GPs, and allowed them to support the changes in practices and service arrangements.

  11. A layered abduction model of perception: Integrating bottom-up and top-down processing in a multi-sense agent

    Science.gov (United States)

    Josephson, John R.

    1989-01-01

    A layered-abduction model of perception is presented which unifies bottom-up and top-down processing in a single logical and information-processing framework. The process of interpreting the input from each sense is broken down into discrete layers of interpretation, where at each layer a best explanation hypothesis is formed of the data presented by the layer or layers below, with the help of information available laterally and from above. The formation of this hypothesis is treated as a problem of abductive inference, similar to diagnosis and theory formation. Thus this model brings a knowledge-based problem-solving approach to the analysis of perception, treating perception as a kind of compiled cognition. The bottom-up passing of information from layer to layer defines channels of information flow, which separate and converge in a specific way for any specific sense modality. Multi-modal perception occurs where channels converge from more than one sense. This model has not yet been implemented, though it is based on systems which have been successful in medical and mechanical diagnosis and medical test interpretation.

  12. Objective correlates of pitch salience using pupillometry

    DEFF Research Database (Denmark)

    Bianchi, Federica; Santurette, Sébastien; Wendt, Dorothea;

    2014-01-01

    with increasing effort in performing the task and thus with decreasing pitch salience. A group of normal-hearing listeners first performed a behavioral pitch-discrimination experiment, where fundamental frequency difference limens ( F 0 DLs ) were measured as a function of F 0 . Results showed that pitch salience...... the frequency region and F 0 , were considered. Pupil size was measured for each condition, while the subjects’ task was to detect the deviants by pressing a response button. The expected trend was that pupil size would increase with decreasing salience. Results for musically trained listeners showed...... the expected trend, whereby pupil size significantly increased with decreasing salience of the stimuli. Non-musically trained listeners showed, however, a smaller pupil size for the least salient condition as compared to a medium salient condition, probably due to a too demanding task...

  13. Effects of Communication Mode and Salience on Recasts: A First Exposure Study

    Science.gov (United States)

    Yilmaz, Yucel; Yuksel, Dogan

    2011-01-01

    This article reports on a study that investigated whether the extent to which learners benefit from recasts on two Turkish morphemes differ depending on communication mode--i.e. Face-to-Face Communication (F2FC) and text-based Synchronous Computer-Mediated Communication (SCMC)--and/or the salience of the target structure (i.e. salient and…

  14. Object recognition with hierarchical discriminant saliency networks.

    Science.gov (United States)

    Han, Sunhyoung; Vasconcelos, Nuno

    2014-01-01

    The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice-versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. As a model of neural computation, the HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a convolutional neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The performance of the network in saliency and object recognition tasks is compared to those of models from the biological and

  15. A Local Texture-Based Superpixel Feature Coding for Saliency Detection Combined with Global Saliency

    Directory of Open Access Journals (Sweden)

    Bingfei Nan

    2015-12-01

    Full Text Available Because saliency can serve as prior knowledge of image content, saliency detection has been an active research area in image segmentation, object detection, image semantic understanding and other image-based applications. When detecting saliency in cluttered scenes, the detected salient object/region needs not only to be clearly distinguished from the background but, preferably, also to be informative in terms of complete contour and local texture details, to facilitate subsequent processing. In this paper, a Local Texture-based Region Sparse Histogram (LTRSH) model is proposed for saliency detection in cluttered scenes. This model uses a combination of local texture patterns, color distribution and contour information to encode the superpixels, characterizing the local features of the image for region-contrast computation. Combining this region contrast with the global saliency probability, a full-resolution saliency map, in which the detected salient object/region adheres more closely to its inherent features, is obtained on the basis of the corresponding high-level saliency spatial distribution and pixel-level saliency enhancement. Quantitative comparisons with five state-of-the-art saliency detection methods on benchmark datasets are carried out, and the results show that the proposed method improves detection performance on the corresponding measurements.
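The abstract gives no equations for LTRSH, but the generic region-contrast step such methods build on (a region is salient when it is far, in feature space, from many large regions) is easy to sketch. The plain feature vectors below stand in for the paper's texture/color histogram codes; this is an illustrative baseline, not the LTRSH model itself:

```python
import numpy as np

def region_contrast_saliency(features, sizes):
    """Generic region-contrast saliency.

    features: (n_regions, d) mean feature vector per superpixel/region.
    sizes:    (n_regions,) pixel counts.
    Saliency of region i is the size-weighted feature distance to all
    other regions, normalised to [0, 1].
    """
    n = len(sizes)
    sal = np.zeros(n)
    for i in range(n):
        dist = np.linalg.norm(features - features[i], axis=1)
        sal[i] = np.sum(sizes * dist)  # distance to itself is 0
    return sal / sal.max()
```

A small region whose features differ strongly from the large background regions receives the maximum score, matching the intuition that contrast against the background drives saliency.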

  16. 3-Substituted-7-(diethylamino)coumarins as molecular scaffolds for the bottom-up self-assembly of solids with extensive π-stacking

    Science.gov (United States)

    Arcos-Ramos, Rafael; Maldonado-Domínguez, Mauricio; Ordóñez-Hernández, Javier; Romero-Ávila, Margarita; Farfán, Norberto; Carreón-Castro, María del Pilar

    2017-02-01

    In this study, a set of molecular crystals derived from 3-substituted-7-(diethylamino)-2H-chromen-2-ones 1-8 was studied to sample the aggregation of coumarins into ordered solids. Crystals of the parent compound 1a and its brominated derivative 2 were obtained and solved in the P-1 and C2/c space groups, respectively. All the crystalline coumarins studied display extensive π-stacking in the solid state. Theoretical valence-conduction band gaps for derivatives 3b and 5 are close to that of crystalline rubrene, highlighting the importance of the cooperativity and periodicity of π-stacking in organic semiconductors. Given their synthetic accessibility, electronic tunability and self-assembly via stacking, dipolar and H-bonding interactions, these systems arise as candidates for the bottom-up construction of organic crystals with extensive π-stacking and high polarizability.

  17. Bottom-up processing of thermoelectric nanocomposites from colloidal nanocrystal building blocks: the case of Ag₂Te-PbTe

    Energy Technology Data Exchange (ETDEWEB)

    Cadavid, Doris [Catalonia Institute for Energy Research, IREC (Spain); Ibanez, Maria [Universitat de Barcelona, Departament d' Electronica (Spain); Gorsse, Stephane [Universite de Bordeaux, ICMCB, CNRS (France); Lopez, Antonio M. [Universitat Politecnica de Catalunya, Departament d' Enginyeria Electronica (Spain); Cirera, Albert [Universitat de Barcelona, Departament d' Electronica (Spain); Morante, Joan Ramon; Cabot, Andreu, E-mail: acabot@irec.cat [Catalonia Institute for Energy Research, IREC (Spain)

    2012-12-15

    Nanocomposites are highly promising materials to enhance the efficiency of current thermoelectric devices. A straightforward and at the same time highly versatile and controllable approach to produce nanocomposites is the assembly of solution-processed nanocrystal building blocks. The convenience of this bottom-up approach for producing nanocomposites with homogeneous phase distributions and adjustable composition is demonstrated here by blending Ag₂Te and PbTe colloidal nanocrystals to form Ag₂Te-PbTe bulk nanocomposites. The thermoelectric properties of these nanocomposites are analyzed in the temperature range from 300 to 700 K. The evolution of their electrical conductivity and Seebeck coefficient is discussed in terms of the blend composition and the characteristics of the constituent materials.

  18. Parallel- and serial-contact electrochemical metallization of monolayer nanopatterns: A versatile synthetic tool en route to bottom-up assembly of electric nanocircuits

    Directory of Open Access Journals (Sweden)

    Jonathan Berson

    2012-02-01

    Full Text Available Contact electrochemical transfer of silver from a metal-film stamp (parallel process) or a metal-coated scanning probe (serial process) is demonstrated to allow site-selective metallization of monolayer template patterns of any desired shape and size created by constructive nanolithography. The precise nanoscale control of metal delivery to predefined surface sites, achieved as a result of the selective affinity of the monolayer template for electrochemically generated metal ions, provides a versatile synthetic tool en route to the bottom-up assembly of electric nanocircuits. These findings offer direct experimental support to the view that, in electrochemical metal deposition, charge is carried across the electrode–solution interface by ion migration to the electrode rather than by electron transfer to hydrated ions in solution.

  19. Chitosan microspheres with an extracellular matrix-mimicking nanofibrous structure as cell-carrier building blocks for bottom-up cartilage tissue engineering

    Science.gov (United States)

    Zhou, Yong; Gao, Huai-Ling; Shen, Li-Li; Pan, Zhao; Mao, Li-Bo; Wu, Tao; He, Jia-Cai; Zou, Duo-Hong; Zhang, Zhi-Yuan; Yu, Shu-Hong

    2015-12-01

    Scaffolds for tissue engineering (TE) which closely mimic the physicochemical properties of the natural extracellular matrix (ECM) have been proven to favor cell attachment, proliferation, migration and new tissue formation. Recently, as a valuable alternative, a bottom-up TE approach utilizing cell-loaded micrometer-scale modular components as building blocks to reconstruct a new tissue in vitro or in vivo has been shown to offer a number of desirable advantages over the traditional bulk-scaffold-based top-down TE approach. Nevertheless, micro-components with an ECM-mimicking nanofibrous structure are still very scarce and highly desirable. Chitosan (CS), an accessible natural polymer, has demonstrated appealing intrinsic properties and promising application potential for TE, especially cartilage tissue regeneration. Against this background, we report here the fabrication of chitosan microspheres with an ECM-mimicking nanofibrous structure for the first time, based on a physical gelation process. By combining this physical fabrication procedure with microfluidic technology, uniform CS microspheres (CMS) with controlled nanofibrous microstructure and tunable sizes can be facilely obtained. Notably, no potentially toxic or denaturing chemical crosslinking agent was introduced into the products. In vitro chondrocyte culture tests revealed enhanced cell attachment and proliferation, and a macroscopic 3D geometrically shaped cartilage-like composite could easily be constructed from the nanofibrous CMS (NCMS) and chondrocytes, demonstrating the significant application potential of NCMS as bottom-up cell-carrier components for cartilage tissue engineering.

  20. Integration scheme of nanoscale resistive switching memory using bottom-up processes at room temperature for high-density memory applications

    Science.gov (United States)

    Han, Un-Bin; Lee, Jang-Sik

    2016-07-01

    A facile and versatile scheme is demonstrated to fabricate nanoscale resistive switching memory devices that exhibit reliable bipolar switching behavior. A solution process is used to synthesize the copper oxide layer into 250-nm via-holes that had been patterned in Si wafers. Direct bottom-up filling of copper oxide can facilitate fabrication of nanoscale memory devices without using vacuum deposition and etching processes. In addition, all materials and processes are CMOS compatible, and especially, the devices can be fabricated at room temperature. Nanoscale memory devices synthesized on wafers having 250-nm via-holes showed reproducible resistive switching programmable memory characteristics with reasonable endurance and data retention properties. This integration strategy provides a solution to overcome the scaling limit of current memory device fabrication methods.

  1. Integration scheme of nanoscale resistive switching memory using bottom-up processes at room temperature for high-density memory applications

    Science.gov (United States)

    Han, Un-Bin; Lee, Jang-Sik

    2016-01-01

    A facile and versatile scheme is demonstrated to fabricate nanoscale resistive switching memory devices that exhibit reliable bipolar switching behavior. A solution process is used to synthesize the copper oxide layer into 250-nm via-holes that had been patterned in Si wafers. Direct bottom-up filling of copper oxide can facilitate fabrication of nanoscale memory devices without using vacuum deposition and etching processes. In addition, all materials and processes are CMOS compatible, and especially, the devices can be fabricated at room temperature. Nanoscale memory devices synthesized on wafers having 250-nm via-holes showed reproducible resistive switching programmable memory characteristics with reasonable endurance and data retention properties. This integration strategy provides a solution to overcome the scaling limit of current memory device fabrication methods. PMID:27364856

  2. A neural computational model of incentive salience.

    Science.gov (United States)

    Zhang, Jun; Berridge, Kent C; Tindell, Amy J; Smith, Kyle S; Aldridge, J Wayne

    2009-07-01

    Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by incorporating
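The model's central idea, a cached value learned by temporal-difference updates that is then scaled on the fly by the current physiological state, can be caricatured in a few lines. This is a toy sketch of the concept, not the authors' equations; the gain parameter `kappa` and all names are illustrative:

```python
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Standard temporal-difference update of a cached value table V."""
    delta = r + gamma * V[s_next] - V[s]  # prediction error
    V[s] += alpha * delta
    return V[s]

def incentive_salience(cached_value, kappa):
    """State-modulated 'wanting': the learned cue value is amplified by a
    physiological gain kappa (e.g. kappa > 1 under salt appetite or drug
    sensitisation) without any relearning of the cue itself."""
    return kappa * cached_value
```

After learning, raising `kappa` boosts cue-triggered 'wanting' instantly, mimicking the dynamic elevations without relearning that the abstract describes.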

  3. A neural computational model of incentive salience.

    Directory of Open Access Journals (Sweden)

    Jun Zhang

    2009-07-01

    Full Text Available Incentive salience is a motivational property with 'magnet-like' qualities. When attributed to reward-predicting stimuli (cues), incentive salience triggers a pulse of 'wanting' and an individual is pulled toward the cues and reward. A key computational question is how incentive salience is generated during a cue re-encounter, which combines both learning and the state of limbic brain mechanisms. Learning processes, such as temporal-difference models, provide one way for stimuli to acquire cached predictive values of rewards. However, empirical data show that subsequent incentive values are also modulated on the fly by dynamic fluctuation in physiological states, altering cached values in ways requiring additional motivation mechanisms. Dynamic modulation of incentive salience for a Pavlovian conditioned stimulus (CS or cue) occurs during certain states, without necessarily requiring (re)learning about the cue. In some cases, dynamic modulation of cue value occurs during states that are quite novel, never having been experienced before, and even prior to experience of the associated unconditioned reward in the new state. Such cases can include novel drug-induced mesolimbic activation and addictive incentive-sensitization, as well as natural appetite states such as salt appetite. Dynamic enhancement specifically raises the incentive salience of an appropriate CS, without necessarily changing that of other CSs. Here we suggest a new computational model that modulates incentive salience by integrating changing physiological states with prior learning. We support the model with behavioral and neurobiological data from empirical tests that demonstrate dynamic elevations in cue-triggered motivation (involving natural salt appetite, and drug-induced intoxication and sensitization). Our data call for a dynamic model of incentive salience, such as presented here. Computational models can adequately capture fluctuations in cue-triggered 'wanting' only by
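The dynamic-modulation idea above lends itself to a compact sketch: a cached value is first acquired by ordinary temporal-difference learning, and at cue re-encounter a physiological gain factor rescales it on the fly, with no relearning. The multiplicative gain `kappa`, the three-state chain, and all function names below are illustrative assumptions, not the published model's exact equations:

```python
import numpy as np

def td_learn(rewards, alpha=0.1, gamma=0.9, episodes=500):
    """Acquire cached predictive values with simple TD(0) over a
    linear chain of states ending in reward."""
    V = np.zeros(len(rewards) + 1)          # extra terminal state
    for _ in range(episodes):
        for s in range(len(rewards)):
            V[s] += alpha * (rewards[s] + gamma * V[s + 1] - V[s])
    return V[:-1]

def incentive_salience(cached_value, kappa=1.0):
    """Cue-triggered 'wanting': the physiological gain kappa rescales
    the cached value at re-encounter (kappa > 1 mimics, e.g., salt
    appetite or mesolimbic sensitization) without any relearning."""
    return kappa * cached_value

rewards = [0.0, 0.0, 1.0]                       # CS ... reward at chain end
V = td_learn(rewards)
baseline = incentive_salience(V[0], kappa=1.0)  # neutral state
appetite = incentive_salience(V[0], kappa=4.0)  # novel appetite state
```

Because the gain multiplies the cached value rather than updating it, the enhanced 'wanting' appears on the very first re-encounter in the new state, matching the behavioral pattern described above.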

  4. Visual scanning and recognition of Chinese, Caucasian, and racially ambiguous faces: contributions from bottom-up facial physiognomic information and top-down knowledge of racial categories.

    Science.gov (United States)

    Wang, Qiandong; Xiao, Naiqi G; Quinn, Paul C; Hu, Chao S; Qian, Miao; Fu, Genyue; Lee, Kang

    2015-02-01

    Recent studies have shown that participants use different eye movement strategies when scanning own- and other-race faces. However, it is unclear (1) whether this effect is related to face recognition performance, and (2) to what extent this effect is influenced by top-down or bottom-up facial information. In the present study, Chinese participants performed a face recognition task with Chinese, Caucasian, and racially ambiguous faces. For the racially ambiguous faces, we led participants to believe that they were viewing either own-race Chinese faces or other-race Caucasian faces. Results showed that (1) Chinese participants scanned the nose of the true Chinese faces more than that of the true Caucasian faces, whereas they scanned the eyes of the Caucasian faces more than those of the Chinese faces; (2) they scanned the eyes, nose, and mouth equally for the ambiguous faces in the Chinese condition compared with those in the Caucasian condition; (3) when recognizing the true Chinese target faces, but not the true target Caucasian faces, the greater the fixation proportion on the nose, the faster the participants correctly recognized these faces. The same was true when racially ambiguous face stimuli were thought to be Chinese faces. These results provide the first evidence to show that (1) visual scanning patterns of faces are related to own-race face recognition response time, and (2) it is bottom-up facial physiognomic information that mainly contributes to face scanning. However, top-down knowledge of racial categories can influence the relationship between face scanning patterns and recognition response time.

  5. Top-Down and Bottom-Up Identification of Proteins by Liquid Extraction Surface Analysis Mass Spectrometry of Healthy and Diseased Human Liver Tissue

    Science.gov (United States)

    Sarsby, Joscelyn; Martin, Nicholas J.; Lalor, Patricia F.; Bunch, Josephine; Cooper, Helen J.

    2014-09-01

    Liquid extraction surface analysis mass spectrometry (LESA MS) has the potential to become a useful tool in the spatially-resolved profiling of proteins in substrates. Here, the approach has been applied to the analysis of thin tissue sections from human liver. The aim was to determine whether LESA MS was a suitable approach for the detection of protein biomarkers of nonalcoholic liver disease (nonalcoholic steatohepatitis, NASH), with a view to the eventual development of LESA MS for imaging NASH pathology. Two approaches were considered. In the first, endogenous proteins were extracted from liver tissue sections by LESA, subjected to automated trypsin digestion, and the resulting peptide mixture was analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS) (bottom-up approach). In the second (top-down approach), endogenous proteins were extracted by LESA, and analyzed intact. Selected protein ions were subjected to collision-induced dissociation (CID) and/or electron transfer dissociation (ETD) mass spectrometry. The bottom-up approach resulted in the identification of over 500 proteins; however identification of key protein biomarkers, liver fatty acid binding protein (FABP1), and its variant (Thr→Ala, position 94), was unreliable and irreproducible. Top-down LESA MS analysis of healthy and diseased liver tissue revealed peaks corresponding to multiple (~15-25) proteins. MS/MS of four of these proteins identified them as FABP1, its variant, α-hemoglobin, and 10 kDa heat shock protein. The reliable identification of FABP1 and its variant by top-down LESA MS suggests that the approach may be suitable for imaging NASH pathology in sections from liver biopsies.

  6. Bottom-up effects of nutrient availability on flower production, pollinator visitation, and seed output in a high-Andean shrub.

    Science.gov (United States)

    Muñoz, Alejandro A; Celedon-Neghme, Constanza; Cavieres, Lohengrin A; Arroyo, Mary T K

    2005-03-01

    Soil nutrient availability directly enhances vegetative growth, flowering, and fruiting in alpine ecosystems. However, the impacts of nutrient addition on pollinator visitation, which could affect seed output indirectly, are unknown. In a nutrient addition experiment, we tested the hypothesis that seed output in the insect-pollinated, self-incompatible shrub Chuquiraga oppositifolia (Asteraceae) of the Andes of central Chile is enhanced by soil nitrogen (N) availability. We monitored total shrub floral display, size of flower heads (capitula), pollinator visitation patterns, and seed output during three growing seasons on control and N addition shrubs. N addition did not augment floral display, size of capitula, pollinator visitation, or seed output during the first growing season, and seed mass and viability were 25-40% lower in fertilised shrubs. During the second growing season only 33% of the N addition shrubs flowered compared to 71% of controls, and a significant (50%) enhancement in vegetative growth occurred in fertilised shrubs. During the third growing season, N addition shrubs produced floral displays more than double those of controls, received more than twice the number of insect pollinator visits, and had seed output three- to four-fold higher than controls. A significant (50%) enhancement in vegetative growth again occurred in N addition shrubs. The results of this study strongly suggest that soil N availability produces strong positive bottom-up effects on the reproductive output of the alpine shrub C. oppositifolia. Although these bottom-up effects took considerably longer to become manifest than the previously reported top-down indirect negative effects of lizard predators in the same study system, our results suggest that both bottom-up and top-down forces are important in controlling the reproductive output of an alpine shrub.

  7. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Directory of Open Access Journals (Sweden)

    Sebastian McBride

    Full Text Available Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.

  8. Top-down and bottom-up identification of proteins by liquid extraction surface analysis mass spectrometry of healthy and diseased human liver tissue.

    Science.gov (United States)

    Sarsby, Joscelyn; Martin, Nicholas J; Lalor, Patricia F; Bunch, Josephine; Cooper, Helen J

    2014-11-01

    Liquid extraction surface analysis mass spectrometry (LESA MS) has the potential to become a useful tool in the spatially-resolved profiling of proteins in substrates. Here, the approach has been applied to the analysis of thin tissue sections from human liver. The aim was to determine whether LESA MS was a suitable approach for the detection of protein biomarkers of nonalcoholic liver disease (nonalcoholic steatohepatitis, NASH), with a view to the eventual development of LESA MS for imaging NASH pathology. Two approaches were considered. In the first, endogenous proteins were extracted from liver tissue sections by LESA, subjected to automated trypsin digestion, and the resulting peptide mixture was analyzed by liquid chromatography tandem mass spectrometry (LC-MS/MS) (bottom-up approach). In the second (top-down approach), endogenous proteins were extracted by LESA, and analyzed intact. Selected protein ions were subjected to collision-induced dissociation (CID) and/or electron transfer dissociation (ETD) mass spectrometry. The bottom-up approach resulted in the identification of over 500 proteins; however identification of key protein biomarkers, liver fatty acid binding protein (FABP1), and its variant (Thr→Ala, position 94), was unreliable and irreproducible. Top-down LESA MS analysis of healthy and diseased liver tissue revealed peaks corresponding to multiple (~15-25) proteins. MS/MS of four of these proteins identified them as FABP1, its variant, α-hemoglobin, and 10 kDa heat shock protein. The reliable identification of FABP1 and its variant by top-down LESA MS suggests that the approach may be suitable for imaging NASH pathology in sections from liver biopsies.

  9. Identifying the computational requirements of an integrated top-down-bottom-up model for overt visual attention within an active vision system.

    Science.gov (United States)

    McBride, Sebastian; Huelse, Martin; Lee, Mark

    2013-01-01

    Computational visual attention systems have been constructed in order for robots and other devices to detect and locate regions of interest in their visual world. Such systems often attempt to take account of what is known of the human visual system and employ concepts, such as 'active vision', to gain various perceived advantages. However, despite the potential for gaining insights from such experiments, the computational requirements for visual attention processing are often not clearly presented from a biological perspective. This was the primary objective of this study, attained through two specific phases of investigation: 1) conceptual modeling of a top-down-bottom-up framework through critical analysis of the psychophysical and neurophysiological literature, 2) implementation and validation of the model into robotic hardware (as a representative of an active vision system). Seven computational requirements were identified: 1) transformation of retinotopic to egocentric mappings, 2) spatial memory for the purposes of medium-term inhibition of return, 3) synchronization of 'where' and 'what' information from the two visual streams, 4) convergence of top-down and bottom-up information to a centralized point of information processing, 5) a threshold function to elicit saccade action, 6) a function to represent task relevance as a ratio of excitation and inhibition, and 7) derivation of excitation and inhibition values from object-associated feature classes. The model provides further insight into the nature of data representation and transfer between brain regions associated with the vertebrate 'active' visual attention system. In particular, the model lends strong support to the functional role of the lateral intraparietal region of the brain as a primary area of information consolidation that directs putative action through the use of a 'priority map'.
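Requirements 2, 4, 5 and 6 in particular admit a compact sketch: task relevance enters as a ratio of excitation to inhibition, the combined priority map is thresholded before a saccade is fired, and previously visited locations are suppressed for inhibition of return. The array shapes, weighting, and function names are illustrative assumptions, not the authors' robotic implementation:

```python
import numpy as np

def priority_map(bottom_up, excitation, inhibition, w_td=1.0):
    """Requirement 6: task relevance as a ratio of excitation to
    inhibition, converged with bottom-up saliency (requirement 4)."""
    relevance = excitation / (inhibition + 1e-6)
    return bottom_up + w_td * relevance

def select_saccade(pmap, threshold, visited=None):
    """Requirement 5: elicit a saccade only if the peak priority
    exceeds the threshold; requirement 2: previously visited
    locations are suppressed (inhibition of return)."""
    p = pmap.copy()
    if visited is not None:
        p[visited] = -np.inf
    peak = np.unravel_index(np.argmax(p), p.shape)
    return peak if p[peak] >= threshold else None

rng = np.random.default_rng(0)
bottom_up = rng.random((8, 8))               # raw saliency in [0, 1)
excitation = np.zeros((8, 8))
excitation[2, 5] = 2.0                       # task-relevant object here
inhibition = np.ones((8, 8))
pmap = priority_map(bottom_up, excitation, inhibition)
target = select_saccade(pmap, threshold=1.5)  # -> (2, 5)
```

Once the target has been fixated, passing it back as `visited` suppresses it, and no further location clears the threshold in this toy map.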

  10. Canopy-scale flux measurements and bottom-up emission estimates of volatile organic compounds from a mixed oak and hornbeam forest in northern Italy

    Directory of Open Access Journals (Sweden)

    W. J. F. Acton

    2015-10-01

    Full Text Available This paper reports the fluxes and mixing ratios of biogenically emitted volatile organic compounds (BVOCs) 4 m above a mixed oak and hornbeam forest in northern Italy. Fluxes of methanol, acetaldehyde, isoprene, methyl vinyl ketone + methacrolein, methyl ethyl ketone and monoterpenes were obtained using both a proton transfer reaction-mass spectrometer (PTR-MS) and a proton transfer reaction-time of flight-mass spectrometer (PTR-ToF-MS) together with the methods of virtual disjunct eddy covariance (PTR-MS) and eddy covariance (PTR-ToF-MS). Isoprene was the dominant emitted compound with a mean daytime flux of 1.9 mg m-2 h-1. Mixing ratios, recorded 4 m above the canopy, were dominated by methanol with a mean value of 6.2 ppbv over the 28-day measurement period. Comparison of isoprene fluxes calculated using the PTR-MS and PTR-ToF-MS showed very good agreement while comparison of the monoterpene fluxes suggested a slight overestimation of the flux by the PTR-MS. A basal isoprene emission rate for the forest of 1.7 mg m-2 h-1 was calculated using the MEGAN isoprene emission algorithms (Guenther et al., 2006). A detailed tree species distribution map for the site enabled the leaf-level emissions of isoprene and monoterpenes recorded using GC-MS to be scaled up to produce a "bottom-up" canopy-scale flux. This was compared with the "top-down" canopy-scale flux obtained by measurements. For monoterpenes, the two estimates were closely correlated and this correlation improved when the plant species composition in the individual flux footprint was taken into account. However, the bottom-up approach significantly underestimated the isoprene flux, compared with the top-down measurements, suggesting that the leaf-level measurements were not representative of actual emission rates.
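The eddy-covariance method named above reduces, at its core, to the covariance between vertical wind speed and scalar concentration. A minimal sketch with synthetic data follows; the numbers and variable names are illustrative, not the campaign's actual processing chain, which also involves despiking, coordinate rotation, and density corrections:

```python
import numpy as np

def ec_flux(w, c):
    """Eddy-covariance flux: mean product of the fluctuations of
    vertical wind w (m s-1) and scalar concentration c (mg m-3),
    giving a flux in mg m-2 s-1."""
    return np.mean((w - w.mean()) * (c - c.mean()))

# synthetic series: updrafts carry slightly isoprene-richer air,
# so w and c are positively correlated and the flux is upward
rng = np.random.default_rng(1)
w = rng.normal(0.0, 0.3, 20000)                    # vertical wind
c = 5.0 + 0.02 * w + rng.normal(0.0, 0.05, 20000)  # scalar + noise
flux = ec_flux(w, c)          # close to 0.02 * var(w)
```

A positive covariance means the scalar is, on average, transported upward; a flux near zero despite nonzero mean concentration is exactly why mixing ratios alone (like the 6.2 ppbv methanol above) do not determine emission.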

  11. Visual Saliency Models for Text Detection in Real World.

    Science.gov (United States)

    Gao, Renwu; Uchida, Seiichi; Shahab, Asif; Shafait, Faisal; Frinken, Volkmar

    2014-01-01

    This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large scale scene image database with pixel level ground truth is created for this purpose. Using this scene image database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed in order to evaluate the saliency of scene texts, which is calculated by visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, which are calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization of distribution indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named hierarchical visual saliency model, is proposed. Hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region that we are interested in. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.
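The two-stage procedure described above (saliency map, Otsu threshold, saliency recomputed inside the salient region) can be sketched with a single intensity channel standing in for Itti's full intensity/color/orientation model; the toy image and helper names are assumptions for illustration only:

```python
import numpy as np

def otsu_threshold(values, bins=256):
    """Otsu's global threshold: pick the cut that maximizes the
    between-class variance of the histogram."""
    hist, edges = np.histogram(values, bins=bins)
    p = hist / hist.sum()
    mids = (edges[:-1] + edges[1:]) / 2
    omega = np.cumsum(p)                 # class-0 probability
    mu = np.cumsum(p * mids)             # class-0 cumulative mean
    mu_t = mu[-1]
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu_t * omega - mu) ** 2 / (omega * (1 - omega))
    return mids[np.nanargmax(sigma_b)]

def intensity_saliency(img):
    """Crude single-channel stand-in for Itti's intensity map:
    absolute deviation from the global mean luminance."""
    return np.abs(img - img.mean())

img = np.zeros((64, 64))
img[20:30, 10:50] = 1.0                  # bright, text-like bar
# stage 1: saliency map + Otsu extracts the salient region
stage1 = intensity_saliency(img)
mask = stage1 > otsu_threshold(stage1)
# stage 2: saliency recomputed only within the extracted region
stage2 = np.where(mask, intensity_saliency(img), 0.0)
```

The second pass matters in real scenes because recomputing saliency within the extracted region re-normalizes contrast locally, sharpening the text against its immediate surroundings rather than the whole image.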

  12. Visual Saliency Models for Text Detection in Real World.

    Directory of Open Access Journals (Sweden)

    Renwu Gao

    Full Text Available This paper evaluates the degree of saliency of texts in natural scenes using visual saliency models. A large scale scene image database with pixel level ground truth is created for this purpose. Using this scene image database and five state-of-the-art models, visual saliency maps that represent the degree of saliency of the objects are calculated. The receiver operating characteristic curve is employed in order to evaluate the saliency of scene texts, which is calculated by visual saliency models. A visualization of the distribution of scene texts and non-texts in the space constructed by three kinds of saliency maps, which are calculated using Itti's visual saliency model with intensity, color and orientation features, is given. This visualization of distribution indicates that text characters are more salient than their non-text neighbors, and can be captured from the background. Therefore, scene texts can be extracted from the scene images. With this in mind, a new visual saliency architecture, named hierarchical visual saliency model, is proposed. Hierarchical visual saliency model is based on Itti's model and consists of two stages. In the first stage, Itti's model is used to calculate the saliency map, and Otsu's global thresholding algorithm is applied to extract the salient region that we are interested in. In the second stage, Itti's model is applied to the salient region to calculate the final saliency map. An experimental evaluation demonstrates that the proposed model outperforms Itti's model in terms of captured scene texts.

  13. A comparison of top-down and bottom-up approaches to benthic habitat mapping to inform offshore wind energy development

    Science.gov (United States)

    LaFrance, Monique; King, John W.; Oakley, Bryan A.; Pratt, Sheldon

    2014-07-01

    Recent interest in offshore renewable energy within the United States has amplified the need for marine spatial planning to direct management strategies and address competing user demands. To assist this effort in Rhode Island, benthic habitat classification maps were developed for two sites in offshore waters being considered for wind turbine installation. Maps characterizing and representing the distribution and extent of benthic habitats are valuable tools for improving understanding of ecosystem patterns and processes, and promoting scientifically sound management decisions. This project presented the opportunity to compare the methodologies and resulting map outputs of two classification approaches, "top-down" and "bottom-up", in the two study areas. This comparison was undertaken to improve understanding of mapping methodologies and their applicability, including that of the bottom-up approach in offshore environments where data density tends to be lower, as well as to provide case studies for scientists and managers to consider for their own areas of interest. Such case studies can offer guidance for future work assessing methodologies and translating them to other areas. The traditional top-down mapping approach identifies biological community patterns based on communities occurring within geologically defined habitat map units, under the concept that geologic environments contain distinct biological assemblages. Alternatively, the bottom-up approach aims to establish habitat map units centered on biological similarity and then uses statistics to identify relationships with associated environmental parameters and determine habitat boundaries. When applied to the two study areas, both mapping approaches produced habitat classes with distinct macrofaunal assemblages and each established statistically strong and significant biotic-abiotic relationships with geologic features, sediment characteristics, water depth, and/or habitat
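The bottom-up workflow summarized above — cluster stations on biological similarity first, then test for biotic-abiotic relationships — can be sketched with hierarchical clustering on Bray-Curtis dissimilarity. The station counts, taxa abundances, and depths below are synthetic and purely illustrative, not data from the Rhode Island sites:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

# rows = sampling stations, columns = macrofaunal taxa abundances
abundance = np.array([
    [12, 0, 3, 40], [10, 1, 2, 38], [11, 0, 4, 42],   # shallow group
    [0, 25, 18, 1], [1, 22, 20, 0], [0, 27, 17, 2],   # deep group
])
depth_m = np.array([12.0, 13.5, 12.8, 31.0, 29.5, 30.2])

# bottom-up step 1: define map units from biological similarity alone
dist = pdist(abundance, metric="braycurtis")
groups = fcluster(linkage(dist, method="average"), t=2, criterion="maxclust")

# bottom-up step 2: relate the biological clusters to environmental
# parameters (here just depth) to delineate habitat boundaries
mean_depth = {g: depth_m[groups == g].mean() for g in np.unique(groups)}
```

The top-down approach would invert these steps: partition by the geologic/depth layers first, then ask whether the assemblages within each unit are distinct.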

  14. Object recognition with hierarchical discriminant saliency networks

    Directory of Open Access Journals (Sweden)

    Sunhyoung eHan

    2014-09-01

    Full Text Available The benefits of integrating attention and object recognition are investigated. While attention is frequently modeled as a pre-processor for recognition, we investigate the hypothesis that attention is an intrinsic component of recognition and vice versa. This hypothesis is tested with a recognition model, the hierarchical discriminant saliency network (HDSN), whose layers are top-down saliency detectors, tuned for a visual class according to the principles of discriminant saliency. The HDSN has two possible implementations. In a biologically plausible implementation, all layers comply with the standard neurophysiological model of visual cortex, with sub-layers of simple and complex units that implement a combination of filtering, divisive normalization, pooling, and non-linearities. In a neural network implementation, all layers are convolutional and implement a combination of filtering, rectification, and pooling. The rectification is performed with a parametric extension of the now popular rectified linear units (ReLUs), whose parameters can be tuned for the detection of target object classes. This enables a number of functional enhancements over neural network models that lack a connection to saliency, including optimal feature denoising mechanisms for recognition, modulation of saliency responses by the discriminant power of the underlying features, and the ability to detect both feature presence and absence. In either implementation, each layer has a precise statistical interpretation, and all parameters are tuned by statistical learning. Each saliency detection layer learns more discriminant saliency templates than its predecessors and higher layers have larger pooling fields. This enables the HDSN to simultaneously achieve high selectivity to target object classes and invariance. The resulting performance demonstrates benefits for all the functional enhancements of the HDSN.
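The parametric rectification mentioned above belongs to the PReLU family, a minimal sketch of which follows; the HDSN tunes these parameters statistically per target class, which is not reproduced here:

```python
import numpy as np

def prelu(x, a):
    """Parametric ReLU: identity for positive inputs, slope a for
    negative ones; a = 0 recovers the standard ReLU."""
    return np.where(x > 0, x, a * x)

x = np.array([-2.0, -0.5, 0.0, 1.5])
standard = prelu(x, a=0.0)     # equals [0, 0, 0, 1.5]
parametric = prelu(x, a=0.25)  # negative side retained, scaled
```

A nonzero negative slope is what lets a unit signal feature *absence* as well as presence, which is one of the enhancements the abstract claims for saliency-connected rectification.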

  15. Top-down model estimates, bottom-up inventories, and future projections of global natural and anthropogenic emissions of nitrous oxide

    Science.gov (United States)

    Davidson, E. A.; Kanter, D.

    2013-12-01

    Nitrous oxide (N2O) is the third most abundantly emitted greenhouse gas and the largest remaining emitted ozone-depleting substance. It is a product of nitrifying and denitrifying bacteria in soils, sediments and water bodies. Humans began to disrupt the N cycle in the preindustrial era as they expanded agricultural land, used fire for land clearing and management, and cultivated leguminous crops that carry out biological N fixation. This disruption accelerated after the industrial revolution, especially as the use of synthetic N fertilizers became common after 1950. Here we present findings from a new United Nations Environment Programme report, in which we constrain estimates of the anthropogenic and natural emissions of N2O and consider scenarios for future emissions. Inventory-based estimates of natural emissions from terrestrial, marine and atmospheric sources range from 10 to 12 Tg N2O-N/yr. Similar values can be derived for global N2O emissions that were predominantly natural before the industrial revolution. While there was inter-decadal variability, there was little or no consistent trend in atmospheric N2O concentrations between 1730 and 1850, allowing us to assume near steady state. Assuming an atmospheric lifetime of 120 years, the 'top-down' estimate of pre-industrial emissions of 11 Tg N2O-N/yr is consistent with the bottom-up inventories for natural emissions, although the former includes some modest pre-industrial anthropogenic effects. Despite the large inherent uncertainties in both approaches, it is encouraging that the bottom-up (6.0) and top-down (5.3) estimates are within 12% of each other and their uncertainty ranges overlap. N2O is inescapably linked to food production and food security. Future agricultural emissions will be determined by population, dietary habits, and agricultural N use efficiency. Without deliberate and effective mitigation policies, anthropogenic N2O emissions will likely double by 2050 and continue to increase thereafter.
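The steady-state 'top-down' figure quoted above follows from simple burden/lifetime arithmetic; the constants below (total moles of air, nitrogen mass per mole of N2O) are standard approximations, not values taken from the report itself:

```python
# Steady state: emissions ≈ atmospheric burden / atmospheric lifetime
MOLES_AIR = 1.77e20      # total moles of air in the atmosphere (approx.)
X_N2O = 270e-9           # pre-industrial N2O mole fraction, ~270 ppb
G_N_PER_MOL = 28.0       # grams of N per mole of N2O (two N atoms)
LIFETIME_YR = 120.0      # atmospheric lifetime assumed in the text

burden_tg_n = MOLES_AIR * X_N2O * G_N_PER_MOL / 1e12   # Tg N2O-N
emission_tg_n_yr = burden_tg_n / LIFETIME_YR           # ~11 Tg N2O-N/yr
```

Reassuringly, the result lands on the 11 Tg N2O-N/yr pre-industrial estimate cited in the abstract, which is why the near-steady-state assumption over 1730-1850 is the crux of the argument.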

  16. Linking top-down and bottom-up approaches for assessing the vulnerability of a 100 % renewable energy system in Northern-Italy

    Science.gov (United States)

    Borga, Marco; Francois, Baptiste; Hingray, Benoit; Zoccatelli, Davide; Creutin, Jean-Dominique; Brown, Casey

    2016-04-01

    Due to their variable and non-controllable features, the integration of Variable Renewable Energies (e.g. solar power, wind power and hydropower, denoted VRE) into the electricity network implies higher production variability and an increased risk of not meeting demand. Two approaches are commonly used for assessing this risk, and especially its evolution in a global-change context (i.e. climate and societal changes): top-down and bottom-up approaches. The general idea of a top-down approach is to assess the effects of global change, or of some key aspects of it (e.g., the effects of COP 21, of the deployment of smart grids, or of climate change), on the system of interest with chains of loosely linked simulation models within a predictive framework. The bottom-up approach aims to improve understanding of the dependencies between the vulnerability of regional systems and large-scale phenomena from knowledge gained through detailed exploration of the response to change of the system of interest, which may reveal vulnerability thresholds and tipping points as well as potential opportunities. Brown et al. (2012) defined an analytical framework to merge these two approaches. The objective is to build a set of Climate Response Functions (CRFs) combining 1) indicators of desired states ("success") and undesired states ("failure") of a system, defined in collaboration with stakeholders, 2) exhaustive exploration of the effects of uncertain forcings and imperfect system understanding on the response of the system to a plausible set of possible changes, implemented with a multi-dimensionally consistent "stress test" algorithm, and 3) a set of "ex post" hydroclimatic and socioeconomic scenarios that provide insight into the differential effectiveness of alternative policies and serve as entry points for the provision of climate information to inform policy evaluation and choice. We adapted this approach for analyzing a 100 % renewable energy system within a region

  17. A two-step combination of top-down and bottom-up fire emission estimates at regional and global scales: strengths and main uncertainties

    Science.gov (United States)

    Sofiev, Mikhail; Soares, Joana; Kouznetsov, Rostislav; Vira, Julius; Prank, Marje

    2016-04-01

    Top-down emission estimation via inverse dispersion modelling is used for various problems where bottom-up approaches are difficult or highly uncertain. One such area is the estimation of emissions from wild-land fires. In combination with dispersion modelling, satellite and/or in-situ observations can, in principle, be used to efficiently constrain the emission values. This is the main strength of the approach: the a priori values of the emission factors (based on laboratory studies) are refined for real-life situations using the inverse-modelling technique. However, the approach also has major uncertainties, which are illustrated here with a few examples from the Integrated System for wild-land Fires (IS4FIRES). IS4FIRES generates the smoke emission and injection profile from MODIS and SEVIRI active-fire radiative energy observations. The emission calculation includes two steps: (i) initial top-down calibration of emission factors via inverse dispersion problem solution, made once using a training dataset from the past, and (ii) application of the obtained emission coefficients to individual-fire radiative energy observations, thus leading to a bottom-up emission compilation. For such a procedure, the major classes of uncertainties include: (i) imperfect information on fires, (ii) simplifications in the fire description, (iii) inaccuracies in the smoke observations and modelling, and (iv) inaccuracies of the inverse problem solution. Using examples from the fire seasons of 2010 in Russia, 2012 in Eurasia, and 2007 in Australia, among others, it is pointed out that a top-down system calibration performed for a limited number of comparatively moderate cases (often the best-observed ones) may lead to errors when applied to extreme events. For instance, the total emission of the 2010 Russian fires is likely to be over-estimated by up to 50% if the calibration is based on the season 2006 and the fire description is simplified. A longer calibration period and more sophisticated parameterization
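The two-step procedure admits a stripped-down sketch: step (i) calibrates a single scaling factor between fire radiative energy and emission against inversely modelled emissions from a training period, and step (ii) applies it to new fires. The linear form, the noise level, and the factor value are illustrative assumptions; IS4FIRES itself uses land-use-dependent factors and a full dispersion-model inversion:

```python
import numpy as np

# step (i): top-down calibration of an emission factor alpha in
# E = alpha * FRE against emissions E_inv obtained by inverse
# dispersion modelling over a training season
rng = np.random.default_rng(2)
fre = rng.uniform(10.0, 200.0, 50)            # fire radiative energy
alpha_true = 0.3                              # illustrative value
e_inv = alpha_true * fre + rng.normal(0.0, 2.0, 50)

alpha = np.sum(fre * e_inv) / np.sum(fre ** 2)  # least squares, no offset

# step (ii): bottom-up application of the calibrated factor to
# individual-fire radiative energy observations
new_fre = np.array([50.0, 120.0])
emissions = alpha * new_fre
```

The abstract's warning maps directly onto this sketch: if the training set contains only moderate `fre` values, the fitted `alpha` extrapolates poorly to extreme fires whose burning regime differs.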

  18. Canopy-scale flux measurements and bottom-up emission estimates of volatile organic compounds from a mixed oak and hornbeam forest in northern Italy

    Science.gov (United States)

    Acton, W. Joe F.; Schallhart, Simon; Langford, Ben; Valach, Amy; Rantala, Pekka; Fares, Silvano; Carriero, Giulia; Tillmann, Ralf; Tomlinson, Sam J.; Dragosits, Ulrike; Gianelle, Damiano; Hewitt, C. Nicholas; Nemitz, Eiko

    2016-06-01

    This paper reports the fluxes and mixing ratios of biogenically emitted volatile organic compounds (BVOCs) 4 m above a mixed oak and hornbeam forest in northern Italy. Fluxes of methanol, acetaldehyde, isoprene, methyl vinyl ketone + methacrolein, methyl ethyl ketone and monoterpenes were obtained using both a proton-transfer-reaction mass spectrometer (PTR-MS) and a proton-transfer-reaction time-of-flight mass spectrometer (PTR-ToF-MS) together with the methods of virtual disjunct eddy covariance (using PTR-MS) and eddy covariance (using PTR-ToF-MS). Isoprene was the dominant emitted compound with a mean daytime flux of 1.9 mg m-2 h-1. Mixing ratios, recorded 4 m above the canopy, were dominated by methanol with a mean value of 6.2 ppbv over the 28-day measurement period. Comparison of isoprene fluxes calculated using the PTR-MS and PTR-ToF-MS showed very good agreement while comparison of the monoterpene fluxes suggested a slight overestimation of the flux by the PTR-MS. A basal isoprene emission rate for the forest of 1.7 mg m-2 h-1 was calculated using the Model of Emissions of Gases and Aerosols from Nature (MEGAN) isoprene emission algorithms (Guenther et al., 2006). A detailed tree-species distribution map for the site enabled the leaf-level emission of isoprene and monoterpenes recorded using gas-chromatography mass spectrometry (GC-MS) to be scaled up to produce a bottom-up canopy-scale flux. This was compared with the top-down canopy-scale flux obtained by measurements. For monoterpenes, the two estimates were closely correlated and this correlation improved when the plant-species composition in the individual flux footprint was taken into account. However, the bottom-up approach significantly underestimated the isoprene flux, compared with the top-down measurements, suggesting that the leaf-level measurements were not representative of actual emission rates.
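The basal-rate calculation referenced above rests on the classic Guenther et al. (1993) light and temperature activity factors that MEGAN builds on: observed flux = basal rate x light activity x temperature activity, with both activities near 1 at standard conditions. The sketch below uses the standard published constants but omits MEGAN's canopy-environment, leaf-age, and soil-moisture factors:

```python
import numpy as np

R = 8.314                       # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066     # Guenther et al. (1993) light constants
C_T1, C_T2 = 95000.0, 230000.0  # temperature response constants, J mol-1
T_S, T_M = 303.0, 314.0         # standard and optimum temperatures, K

def gamma_light(ppfd):
    """Light activity factor; ppfd in umol m-2 s-1."""
    return ALPHA * C_L1 * ppfd / np.sqrt(1.0 + (ALPHA * ppfd) ** 2)

def gamma_temp(t_k):
    """Temperature activity factor; t_k in kelvin."""
    num = np.exp(C_T1 * (t_k - T_S) / (R * T_S * t_k))
    den = 1.0 + np.exp(C_T2 * (t_k - T_M) / (R * T_S * t_k))
    return num / den

def isoprene_flux(basal_rate, ppfd, t_k):
    """Emission = basal rate x light x temperature activity."""
    return basal_rate * gamma_light(ppfd) * gamma_temp(t_k)

# near standard conditions (1000 umol m-2 s-1, 303 K) both activity
# factors are ~1, so the flux stays close to the basal rate of
# 1.7 mg m-2 h-1 reported above
flux = isoprene_flux(1.7, ppfd=1000.0, t_k=303.0)
```

Inverting the same relationship, i.e. dividing measured fluxes by the activity factors for the observed light and temperature, is how a campaign-mean basal emission rate is derived from canopy-scale data.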

  19. Bottom-up preparation of MgH2 nanoparticles with enhanced cycle life stability during electrochemical conversion in Li-ion batteries

    Science.gov (United States)

    Oumellal, Yassine; Zlotea, Claudia; Bastide, Stéphane; Cachet-Vivier, Christine; Léonel, Eric; Sengmany, Stéphane; Leroy, Eric; Aymard, Luc; Bonnet, Jean-Pierre; Latroche, Michel

    2014-11-01

    A promising anode material for Li-ion batteries based on MgH2 with around 5 nm average particle size was synthesized by a bottom-up method. A series of several composites containing MgH2 nanoparticles well dispersed into a porous carbon host has been prepared with different metal content up to 70 wt%. A narrow particle size distribution (1-10 nm) of the MgH2 nanospecies with around 5.5 nm average size can be controlled up to 50 wt% Mg. After a ball milling treatment under Ar, the composite containing 50 wt% Mg shows an impressive cycle life stability with a good electrochemical capacity of around 500 mA h g-1. Moreover, the nanoparticles' size distribution is stable during cycling. Electronic supplementary information (ESI) available: (a) Dark field TEM image and the corresponding SAED electron diffraction pattern of the as-synthesized 15MgH2@HSAG-500, (b) N2 sorption isotherms at 77 K of all as-synthesized xMgH2@HSAG-500 composites, (c) N2 sorption isotherms at 77 K of the 50MgH2@HSAG-500 composite before and after ball milling, (d) electrochemical characterization of all as-synthesized xMgH2@HSAG-500 composites for the first cycle, where x is 15, 25, 50 and 70 wt% Mg. (e) Comparison between the capacities of two ball milled xMgH2@HSAG-500 composites with x = 50 and 70 wt% Mg. (f

  20. On the advantages of spring magnets compared to pure FePt: Strategy for rare-earth free permanent magnets following a bottom-up approach

    Science.gov (United States)

    Pousthomis, M.; Garnero, C.; Marcelot, C. G.; Blon, T.; Cayez, S.; Cassignol, C.; Du, V. A.; Krispin, M.; Arenal, R.; Soulantica, K.; Viau, G.; Lacroix, L.-M.

    2017-02-01

    Nanostructured magnets benefiting from efficient exchange-coupling between hard and soft grains represent an appealing approach for integrated miniaturized magnetic power sources. Using a bottom-up approach, nanostructured materials were prepared from binary assemblies of bcc FeCo and fcc FePt nanoparticles and compared with pure L10-FePt materials. The use of a bifunctional mercaptobenzoic acid yields homogeneous assemblies of the two types of particles while reducing the amount of organic matter. The 650 °C thermal annealing, mandatory to allow the L10-FePt phase transition, led to substantial interdiffusion and thus drastically decreased the amount of soft phase present in the final composites. The analysis of recoil curves, however, evidenced efficient interphase exchange coupling, which yields better magnetic performance than pure L10-FePt materials, with an energy product above 100 kJ m-3 estimated for a Pt content of only 33%. These results clearly demonstrate the value of chemically grown nanoparticles for the preparation of high-performance spring magnets, opening promising perspectives for integrated subcentimetric magnets with optimized properties.

  1. Middle-Out Approaches to Reform of University Teaching and Learning: Champions striding between the top-down and bottom-up approaches

    Directory of Open Access Journals (Sweden)

    Rick Cummings

    2005-03-01

    In recent years, Australian universities have been driven by a diversity of external forces, including funding cuts, massification of higher education, and changing student demographics, to reform their relationship with students and improve teaching and learning, particularly for those studying off-campus or part-time. Many universities have responded to these forces either through formal strategic plans developed top-down by executive staff or through organic developments arising from staff in a bottom-up approach. By contrast, much of Murdoch University’s response has been led by a small number of staff who have middle management responsibilities and who have championed the reform of key university functions, largely in spite of current policy or accepted practice. This paper argues that the ‘middle-out’ strategy has both a basis in change management theory and practice, and a number of strengths, including low risk, low cost, and high sustainability. Three linked examples of middle-out change management in teaching and learning at Murdoch University are described and the outcomes analyzed to demonstrate the benefits and pitfalls of this approach.

  2. When top-down becomes bottom up: behaviour of hyperdense howler monkeys (Alouatta seniculus) trapped on a 0.6 ha island.

    Science.gov (United States)

    Orihuela, Gabriela; Terborgh, John; Ceballos, Natalia; Glander, Kenneth

    2014-01-01

    Predators are a ubiquitous presence in most natural environments. Opportunities to contrast the behaviour of a species in the presence and absence of predators are thus rare. Here we report on the behaviour of howler monkey groups living under radically different conditions on two land-bridge islands in Lago Guri, Venezuela. One group of 6 adults inhabited a 190-ha island (Danto) where they were exposed to multiple potential predators. This group, the control, occupied a home range of 23 ha and contested access to food resources with neighbouring groups in typical fashion. The second group, containing 6 adults, was isolated on a remote, predator-free 0.6 ha islet (Iguana) offering limited food resources. Howlers living on the large island moved, fed and rested in a coherent group, frequently engaged in affiliative activities, rarely displayed agonistic behaviour and maintained intergroup spacing through howling. In contrast, the howlers on Iguana showed repulsion, as individuals spent most of their time spaced widely around the perimeter of the island. Iguana howlers rarely engaged in affiliative behaviour, often chased or fought with one another and were not observed to howl. These behaviors are interpreted as adjustments to the unrelenting deprivation associated with bottom-up limitation in a predator-free environment.

  3. Duemmler, Kerstin; Nagel, Alexander-Kenneth: governing religious diversity: top-down and bottom-up initiatives in Germany and Switzerland.

    Science.gov (United States)

    Duemmler, Kerstin; Nagel, Alexander-Kenneth

    2013-06-01

    In recent years religious pluralization has become a significant policy issue in Western societies as a result of a new awareness of religion and of religious minorities articulating themselves and becoming more visible. The article explores the variety of social and political reactions to religious diversity in urban areas and in doing so it brings together theoretical concepts of political and cultural sociology. The notion of diversity governance as a joint endeavour of state and societal actors managing societies is linked to the notion of boundary work as the interplay of state and/or societal actors maintaining or modifying boundaries between religious traditions. Based on two case studies the article illustrates two ideal-typical settings of diversity governance: the first case from the German Ruhr Area stands for a bottom-up approach based on civic self-organization of interreligious activities, whereas the second case from the Swiss canton of Lucerne exhibits a model of top-down governance based on state interventions in religious instruction at schools. Drawing on semi-structured interviews and participant observation the authors show how different governance settings shape the construction and blurring of boundaries in the religious field. Both approaches operate differently when incorporating religious diversity and rendering formerly homogeneous notions of we-groups more heterogeneous. Despite the approaches' initial aim of inclusion, patterns of exclusion are equally reproduced, since the idea of 'legitimate religion' rooted in Christian majority culture remains present.

  4. High-Throughput Top-Down and Bottom-Up Processes for Forming Single-Nanotube Based Architectures for 3D Electronics

    Science.gov (United States)

    Kaul, Anupama B.; Megerian, Krikor G.; von Allmen, Paul; Kowalczyk, Robert; Baron, Richard

    2009-01-01

    We have developed manufacturable approaches to form single, vertically aligned carbon nanotubes, where the tubes are centered precisely, and placed within a few hundred nm of 1-1.5 micron deep trenches. These wafer-scale approaches were enabled by chemically amplified resists and inductively coupled Cryo-etchers for forming the 3D nanoscale architectures. The tube growth was performed using dc plasma-enhanced chemical vapor deposition (PECVD), and the materials used for the pre-fabricated 3D architectures were chemically and structurally compatible with the high temperature (700 C) PECVD synthesis of our tubes, in an ammonia and acetylene ambient. Tube characteristics were also engineered to some extent, by adjusting growth parameters, such as Ni catalyst thickness, pressure and plasma power during growth. Such scalable, high throughput top-down fabrication techniques, combined with bottom-up tube synthesis, should accelerate the development of PECVD tubes for applications such as interconnects, nano-electromechanical (NEMS), sensors or 3D electronics in general.

  5. The bottom-up approach to defining life : deciphering the functional organization of biological cells via multi-objective representation of biological complexity from molecules to cells

    Directory of Open Access Journals (Sweden)

    Sathish ePeriyasamy

    2013-12-01

    In silico representation of cellular systems needs to represent the adaptive dynamics of biological cells, recognizing a cell’s multi-objective topology formed by spatially and temporally cohesive intracellular structures. The design of these models needs to address the hierarchical and concurrent nature of cellular functions and incorporate the ability to self-organise in response to transitions between healthy and pathological phases, and adapt accordingly. The functions of biological systems are constantly evolving, due to the ever changing demands of their environment. Biological systems meet these demands by pursuing objectives, aided by their constituents, giving rise to biological functions. A biological cell is organised into an objective/task hierarchy. This objective hierarchy corresponds to the nested nature of temporally cohesive structures, and representing it will facilitate the study of pleiotropy and polygeny by modeling causalities propagating across multiple interconnected intracellular processes. Although biological adaptations occur in physiological, developmental and reproductive timescales, the paper is focused on adaptations that occur within physiological timescales, where the biomolecular activities contributing to functional organisation play a key role in cellular physiology. The paper proposes a multi-scale and multi-objective modelling approach from the bottom-up, representing temporally cohesive structures for multi-tasking of intracellular processes. Further, the paper characterises the properties and constraints that are consequential to the organisational and adaptive dynamics in biological cells.

  6. Tailoring the morphology and luminescence of GaN/InGaN core-shell nanowires using bottom-up selective-area epitaxy

    Science.gov (United States)

    Nami, Mohsen; Eller, Rhett F.; Okur, Serdal; Rishinaramangalam, Ashwin K.; Liu, Sheng; Brener, Igal; Feezell, Daniel F.

    2017-01-01

    Controlled bottom-up selective-area epitaxy (SAE) is used to tailor the morphology and photoluminescence properties of GaN/InGaN core-shell nanowire arrays. The nanowires are grown on c-plane sapphire substrates using pulsed-mode metal organic chemical vapor deposition. By varying the dielectric mask configuration and growth conditions, we achieve GaN nanowire cores with diameters ranging from 80 to 700 nm that exhibit various degrees of polar, semipolar, and nonpolar faceting. A single InGaN quantum well (QW) and GaN barrier shell is also grown on the GaN nanowire cores and micro-photoluminescence is obtained and analyzed for a variety of nanowire dimensions, array pitch spacings, and aperture diameters. By increasing the nanowire pitch spacing on the same growth wafer, the emission wavelength redshifts from 440 to 520 nm, while increasing the aperture diameter results in a ~35 nm blueshift. The thickness of one QW/barrier period as a function of pitch and aperture diameter is inferred using scanning electron microscopy, with larger pitches showing significantly thicker QWs. Significant increases in indium composition were predicted for larger pitches and smaller aperture diameters. The results are interpreted in terms of local growth conditions and adatom capture radius around the nanowires. This work provides significant insight into the effects of mask configuration and growth conditions on the nanowire properties and is applicable to the engineering of monolithic multi-color nanowire LEDs on a single chip.

  7. Climate change, pink salmon, and the nexus between bottom-up and top-down forcing in the subarctic Pacific Ocean and Bering Sea.

    Science.gov (United States)

    Springer, Alan M; van Vliet, Gus B

    2014-05-06

    Climate change in the last century was associated with spectacular growth of many wild Pacific salmon stocks in the North Pacific Ocean and Bering Sea, apparently through bottom-up forcing linking meteorology to ocean physics, water temperature, and plankton production. One species in particular, pink salmon, became so numerous by the 1990s that they began to dominate other species of salmon for prey resources and to exert top-down control in the open ocean ecosystem. Information from long-term monitoring of seabirds in the Aleutian Islands and Bering Sea reveals that the sphere of influence of pink salmon is much larger than previously known. Seabirds, pink salmon, other species of salmon, and by extension other higher-order predators, are tightly linked ecologically and must be included in international management and conservation policies for sustaining all species that compete for common, finite resource pools. These data further emphasize that the unique 2-y cycle in abundance of pink salmon drives interannual shifts between two alternate states of a complex marine ecosystem.

  8. Evaluating vehicle re-entrained road dust and its potential to deposit to Lake Tahoe: a bottom-up inventory approach.

    Science.gov (United States)

    Zhu, Dongzi; Kuhns, Hampden D; Gillies, John A; Gertler, Alan W

    2014-01-01

    Identifying hotspot areas impacted by emissions of dust from roadways is an essential step for mitigation. This paper develops a detailed road dust PM₁₀ emission inventory using a bottom-up approach and evaluates the potential for the dust to deposit to Lake Tahoe, where it can affect water clarity. Building on previous estimates of atmospheric deposition of fine sediment particles ("FSP"), a combination of dust emission factors, five years of meteorological data, a traffic demand model and GIS analysis was used to estimate the near-field atmospheric deposition of airborne particulate matter to the lake. Approximately 20 Mg year(-1) of PM₁₀ and approximately 36 Mg year(-1) of Total Suspended Particulate (TSP) from roadway dust emissions are estimated to reach the lake. We estimate that the atmospheric dry deposition of particles to the lake attributable to vehicle travel on paved roads is approximately 0.6% of the Total Maximum Daily Load (TMDL) of FSP that the lake can receive and still meet water quality standards.
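    The bottom-up inventory logic described above (per-road-link emission factors times traffic activity, with a fraction of the emitted dust deposited to the lake) can be sketched as follows; every emission factor, traffic volume and deposition fraction below is an invented placeholder, not the study's data.

```python
# Annual PM10 emission of one road link, in Mg (metric tons), from an
# emission factor in g per vehicle-kilometre travelled (g/VKT), the
# annual-average daily traffic (AADT) and the link length.
def link_emission_mg_per_year(ef_g_vkt, aadt, length_km):
    g_per_year = ef_g_vkt * aadt * length_km * 365.0
    return g_per_year / 1e6

links = [
    # (emission factor g/VKT, AADT, length km, fraction deposited to lake)
    (1.2, 15000, 2.0, 0.05),   # hypothetical busy lakeside road
    (0.4,  4000, 5.0, 0.02),   # hypothetical inland road
]

total_emitted = sum(link_emission_mg_per_year(ef, aadt, l)
                    for ef, aadt, l, _ in links)
total_deposited = sum(link_emission_mg_per_year(ef, aadt, l) * f
                      for ef, aadt, l, f in links)
```

    Summing the deposited mass over all links in a GIS road network yields the lake-wide loading that is then compared against the TMDL.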

  9. Benchmarking Non-Hardware Balance-of-System (Soft) Costs for U.S. Photovoltaic Systems, Using a Bottom-Up Approach and Installer Survey - Second Edition

    Energy Technology Data Exchange (ETDEWEB)

    Friedman, B.; Ardani, K.; Feldman, D.; Citron, R.; Margolis, R.; Zuboy, J.

    2013-10-01

    This report presents results from the second U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs -- often referred to as 'business process' or 'soft' costs -- for U.S. residential and commercial photovoltaic (PV) systems. In service to DOE's SunShot Initiative, annual expenditure and labor-hour-productivity data are analyzed to benchmark 2012 soft costs related to (1) customer acquisition and system design and (2) permitting, inspection, and interconnection (PII). We also include an in-depth analysis of costs related to financing, overhead, and profit. Soft costs are both a major challenge and a major opportunity for reducing PV system prices and stimulating SunShot-level PV deployment in the United States. The data and analysis in this series of benchmarking reports are a step toward the more detailed understanding of PV soft costs required to track and accelerate these price reductions.

  10. When top-down becomes bottom up: behaviour of hyperdense howler monkeys (Alouatta seniculus) trapped on a 0.6 ha island.

    Directory of Open Access Journals (Sweden)

    Gabriela Orihuela

    Predators are a ubiquitous presence in most natural environments. Opportunities to contrast the behaviour of a species in the presence and absence of predators are thus rare. Here we report on the behaviour of howler monkey groups living under radically different conditions on two land-bridge islands in Lago Guri, Venezuela. One group of 6 adults inhabited a 190-ha island (Danto) where they were exposed to multiple potential predators. This group, the control, occupied a home range of 23 ha and contested access to food resources with neighbouring groups in typical fashion. The second group, containing 6 adults, was isolated on a remote, predator-free 0.6 ha islet (Iguana) offering limited food resources. Howlers living on the large island moved, fed and rested in a coherent group, frequently engaged in affiliative activities, rarely displayed agonistic behaviour and maintained intergroup spacing through howling. In contrast, the howlers on Iguana showed repulsion, as individuals spent most of their time spaced widely around the perimeter of the island. Iguana howlers rarely engaged in affiliative behaviour, often chased or fought with one another and were not observed to howl. These behaviors are interpreted as adjustments to the unrelenting deprivation associated with bottom-up limitation in a predator-free environment.

  11. Synthesis of a Cementitious Material Nanocement Using Bottom-Up Nanotechnology Concept: An Alternative Approach to Avoid CO2 Emission during Production of Cement

    Directory of Open Access Journals (Sweden)

    Byung Wan Jo

    2014-01-01

    The world’s increasing need is to develop smart and sustainable construction materials that generate minimal climate-changing gases during their production. Bottom-up nanotechnology has established itself as a promising alternative technique for the production of cementitious materials. The present investigation deals with the chemical synthesis of cementitious material using nanosilica, sodium aluminate, sodium hydroxide, and calcium nitrate as reacting phases. The characteristic properties of the chemically synthesized nanocement were verified by chemical composition analysis, setting time measurement, particle size distribution, fineness analysis, and SEM and XRD analyses. Finally, the performance of the nanocement was ensured by the fabrication and characterization of a nanocement-based mortar. Comparing the results with the commercially available cement product, it is demonstrated that the chemically synthesized nanocement not only shows better physical and mechanical performance, but also brings several encouraging impacts to society, including the reduction of CO2 emission and the development of sustainable construction material. A plausible reaction scheme has been proposed to explain the synthesis and the overall performance of the nanocement.

  12. Bottom-up derivation of conservative and dissipative interactions for coarse-grained molecular liquids with the conditional reversible work method

    Energy Technology Data Exchange (ETDEWEB)

    Deichmann, Gregor; Marcon, Valentina; Vegt, Nico F. A. van der, E-mail: vandervegt@csi.tu-darmstadt.de [Center of Smart Interfaces, Technische Universität Darmstadt, Alarich-Weiss-Straße 10, 64287 Darmstadt (Germany)

    2014-12-14

    Molecular simulations of soft matter systems have been performed in recent years using a variety of systematically coarse-grained models. With these models, structural or thermodynamic properties can be quite accurately represented while the prediction of dynamic properties remains difficult, especially for multi-component systems. In this work, we use constraint molecular dynamics simulations for calculating dissipative pair forces which are used together with conditional reversible work (CRW) conservative forces in dissipative particle dynamics (DPD) simulations. The combined CRW-DPD approach aims to extend the representability of CRW models to dynamic properties and uses a bottom-up approach. Dissipative pair forces are derived from fluctuations of the direct atomistic forces between mapped groups. The conservative CRW potential is obtained from a similar series of constraint dynamics simulations and represents the reversible work performed to couple the direct atomistic interactions between the mapped atom groups. Neopentane, tetrachloromethane, cyclohexane, and n-hexane have been considered as model systems. These molecular liquids are simulated with atomistic molecular dynamics, coarse-grained molecular dynamics, and DPD. We find that the CRW-DPD models reproduce the liquid structure and diffusive dynamics of the liquid systems in reasonable agreement with the atomistic models when using single-site mapping schemes with beads containing five or six heavy atoms. For a two-site representation of n-hexane (3 carbons per bead), time scale separation can no longer be assumed and the DPD approach consequently fails to reproduce the atomistic dynamics.
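    The pair-force structure underlying such DPD simulations can be sketched as below, using the standard Groot-Warren form: a soft conservative repulsion plus dissipative and random forces tied together by the fluctuation-dissipation relation sigma² = 2·gamma·kB·T. In the CRW-DPD approach of the paper, the conservative potential and the friction coefficient would instead be derived from constrained atomistic simulations; here they are illustrative constants in reduced units.

```python
import math
import random

def dpd_pair_force(r_vec, v_vec, a=25.0, gamma=4.5, kT=1.0, rc=1.0, dt=0.01):
    """Total DPD force on particle i from particle j (Groot-Warren form).
    r_vec, v_vec: relative position and velocity (3-tuples, reduced units)."""
    r = math.sqrt(sum(x * x for x in r_vec))
    if r >= rc:
        return (0.0, 0.0, 0.0)                 # forces vanish beyond cutoff
    e = [x / r for x in r_vec]                 # unit vector along the pair
    w = 1.0 - r / rc                           # weight w_R(r); w_D = w_R**2
    sigma = math.sqrt(2.0 * gamma * kT)        # fluctuation-dissipation link
    f_c = a * w                                # conservative (soft repulsion)
    f_d = -gamma * w * w * sum(ei * vi for ei, vi in zip(e, v_vec))
    f_r = sigma * w * random.gauss(0.0, 1.0) / math.sqrt(dt)
    mag = f_c + f_d + f_r
    return tuple(mag * ei for ei in e)
```

    Replacing the constant `a` with a tabulated CRW potential and `gamma` with a friction obtained from atomistic force fluctuations gives the bottom-up parameterisation the abstract describes.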

  13. Toward improved prediction of the bedrock depth underneath hillslopes: Bayesian inference of the bottom-up control hypothesis using high-resolution topographic data

    Science.gov (United States)

    Gomes, Guilherme J. C.; Vrugt, Jasper A.; Vargas, Eurípedes A.

    2016-04-01

    The depth to bedrock controls a myriad of processes by influencing subsurface flow paths, erosion rates, soil moisture, and water uptake by plant roots. As hillslope interiors are very difficult and costly to illuminate and access, the topography of the bedrock surface is largely unknown. This essay is concerned with the prediction of spatial patterns in the depth to bedrock (DTB) using high-resolution topographic data, numerical modeling, and Bayesian analysis. Our DTB model builds on the bottom-up control on fresh-bedrock topography hypothesis of Rempe and Dietrich (2014) and includes a mass movement and bedrock-valley morphology term to extend the usefulness and general applicability of the model. We reconcile the DTB model with field observations using Bayesian analysis with the DREAM algorithm. We investigate explicitly the benefits of using spatially distributed parameter values to account implicitly, and in a relatively simple way, for rock mass heterogeneities that are very difficult, if not impossible, to characterize adequately in the field. We illustrate our method using an artificial data set of bedrock depth observations and then evaluate our DTB model with real-world data collected at the Papagaio river basin in Rio de Janeiro, Brazil. Our results demonstrate that the DTB model accurately predicts the observed bedrock depth data. The posterior mean DTB simulation is shown to be in good agreement with the measured data. The posterior prediction uncertainty of the DTB model can be propagated forward through hydromechanical models to derive probabilistic estimates of factors of safety.
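    The Bayesian calibration step can be illustrated with a toy example: fitting one parameter of a linear stand-in model y = theta·x to noisy synthetic observations with a random-walk Metropolis sampler. The paper uses the multi-chain adaptive DREAM algorithm and a far richer DTB model; everything below is a simplified, self-contained stand-in.

```python
import math
import random

random.seed(1)

# Synthetic "observations" from a known parameter plus Gaussian noise.
x_obs = [0.2, 0.4, 0.6, 0.8, 1.0]
theta_true, noise = 3.0, 0.1
y_obs = [theta_true * x + random.gauss(0.0, noise) for x in x_obs]

def log_like(theta):
    """Gaussian log-likelihood (up to a constant) of the stand-in model."""
    return -sum((y - theta * x) ** 2 for x, y in zip(x_obs, y_obs)) / (2 * noise**2)

# Random-walk Metropolis: propose, accept with probability min(1, L'/L).
theta, samples = 1.0, []
for _ in range(5000):
    prop = theta + random.gauss(0.0, 0.2)
    if math.log(random.random()) < log_like(prop) - log_like(theta):
        theta = prop
    samples.append(theta)

posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

    The spread of the retained samples plays the role of the posterior prediction uncertainty that the paper propagates through hydromechanical models.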

  14. Emission trading and the role of learning-by-doing spillovers in the 'bottom-up' energy-system ERIS model

    Energy Technology Data Exchange (ETDEWEB)

    Barreto, L.; Klaassen, G. [International Institute for Applied Systems Analysis, Laxenburg (Austria). Environmentally Compatible Energy Strategies

    2004-07-01

    In this paper, using the 'bottom-up' energy-system optimisation model ERIS, we examine the effects of emission trading on technology deployment, emphasising the role of technology learning spillovers, that is, the possibility that the learning accumulated in a particular technology in a given region may spill over to other regions as well, leading to cost reductions there also. The effects of different configurations of inter-regional spillovers of learning in ERIS and the impact of the emission trading mechanism under those different circumstances are analysed. Including spatial spillovers of learning allows capturing the possibility that the imposition of greenhouse gas emission constraints in a given region may induce technological change in other regions, such as developing countries, even if the latter regions do not face emission constraints. Our stylised results point out the potential benefits of sound international cooperation between industrialised and developing regions on research, development, demonstration and deployment (RD3) of clean energy technologies and on the implementation of emission trading schemes. (author)
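    The spillover mechanism can be sketched with the standard one-factor learning curve used in bottom-up models of this kind: specific cost falls as a power law of cumulative capacity, and under full spillover the experience of all regions is pooled. The regions, capacities and cost figures below are illustrative placeholders, not ERIS data.

```python
import math

def specific_cost(c0, cum, cum0, progress_ratio):
    """One-factor learning curve: cost after cumulative capacity `cum`,
    where `progress_ratio` is the cost multiplier per doubling of
    experience (e.g. 0.8 means a 20% learning rate)."""
    b = math.log(progress_ratio) / math.log(2.0)   # learning exponent
    return c0 * (cum / cum0) ** b

regions = {"OECD": 40.0, "non-OECD": 10.0}   # cumulative capacity, GW
c0, cum0, pr = 1000.0, 10.0, 0.8             # $/kW at 10 GW; 20% learning rate

# No spillover: a region learns only from its own installed capacity.
cost_local = specific_cost(c0, regions["non-OECD"], cum0, pr)
# Full spillover: learning draws on the pooled global capacity, so
# constraints imposed in one region lower costs everywhere.
cost_global = specific_cost(c0, sum(regions.values()), cum0, pr)
```

    The gap between `cost_local` and `cost_global` is the stylised benefit of inter-regional spillovers that the paper analyses under emission trading.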

  15. Evolutionary Steps in the Emergence of Life Deduced from the Bottom-Up Approach and GADV Hypothesis (Top-Down Approach).

    Science.gov (United States)

    Ikehara, Kenji

    2016-01-26

    It is no doubt quite difficult to solve the riddle of the origin of life. So, firstly, I would like to point out the kinds of obstacles there are in solving this riddle and how we should tackle these difficult problems, reviewing the studies that have been conducted so far. After that, I will propose that the consecutive evolutionary steps in a timeline can be rationally deduced by using a common event as a juncture, which is obtained by two counter-directional approaches: one is the bottom-up approach through which many researchers have studied the origin of life, and the other is the top-down approach, through which I established the [GADV]-protein world hypothesis or GADV hypothesis on the origin of life starting from a study on the formation of entirely new genes in extant microorganisms. Last, I will describe the probable evolutionary process from the formation of Earth to the emergence of life, which was deduced by using a common event, the establishment of the first genetic code encoding [GADV]-amino acids, as a juncture for the results obtained from the two approaches.

  16. Structural and optical nanoscale analysis of GaN core-shell microrod arrays fabricated by combined top-down and bottom-up process on Si(111)

    Science.gov (United States)

    Müller, Marcus; Schmidt, Gordon; Metzner, Sebastian; Veit, Peter; Bertram, Frank; Krylyuk, Sergiy; Debnath, Ratan; Ha, Jong-Yoon; Wen, Baomei; Blanchard, Paul; Motayed, Abhishek; King, Matthew R.; Davydov, Albert V.; Christen, Jürgen

    2016-05-01

    Large arrays of GaN core-shell microrods were fabricated on Si(111) substrates applying a combined bottom-up and top-down approach which includes inductively coupled plasma (ICP) etching of patterned GaN films grown by metal-organic vapor phase epitaxy (MOVPE) and selective overgrowth of obtained GaN/Si pillars using hydride vapor phase epitaxy (HVPE). The structural and optical properties of individual core-shell microrods have been studied with a nanometer scale spatial resolution using low-temperature cathodoluminescence spectroscopy (CL) directly performed in a scanning electron microscope (SEM) and in a scanning transmission electron microscope (STEM). SEM, TEM, and CL measurements reveal the formation of distinct growth domains during the HVPE overgrowth. A high free-carrier concentration observed in the non-polar {1-100} HVPE shells is assigned to in-diffusion of silicon atoms from the substrate. In contrast, the HVPE shells directly grown on top of the c-plane of the GaN pillars reveal a lower free-carrier concentration.

  17. Fabrication of electrodes for transport control and alignment at micro- and nanoscales using bottom-up and top-down techniques

    Directory of Open Access Journals (Sweden)

    Darwin Rodríguez

    2014-12-01

    Continuing advances in applications for self-assembly, positioning, sensor, and actuator devices, and in the controlled manipulation of micro- and nanostructures, have generated broad interest in developing methodologies to optimize the fabrication of devices for control and manipulation at micro- and nanoscales. This project explores electrode fabrication techniques with the aim of finding an optimal and reproducible technique. The performance of each technique is compared, and cleaning and safety protocols are described. Three geometries are designed and implemented to mobilize and position iron micro- and nanoparticles in a natural oil solution. Finally, electric fields are generated by electrophoresis in order to find the curve describing particle displacement as a function of the applied potential. These results have a strong impact on current bottom-up fabrication efforts (controlling location and mobility in electronic devices with fields). Fabricating planar geometry with electrodes opens the possibility of integrating particle motion into the integrated circuits manufactured today.

  18. Evolutionary Steps in the Emergence of Life Deduced from the Bottom-Up Approach and GADV Hypothesis (Top-Down Approach

    Directory of Open Access Journals (Sweden)

    Kenji Ikehara

    2016-01-01

    It is no doubt quite difficult to solve the riddle of the origin of life. So, firstly, I would like to point out the kinds of obstacles there are in solving this riddle and how we should tackle these difficult problems, reviewing the studies that have been conducted so far. After that, I will propose that the consecutive evolutionary steps in a timeline can be rationally deduced by using a common event as a juncture, which is obtained by two counter-directional approaches: one is the bottom-up approach through which many researchers have studied the origin of life, and the other is the top-down approach, through which I established the [GADV]-protein world hypothesis or GADV hypothesis on the origin of life starting from a study on the formation of entirely new genes in extant microorganisms. Last, I will describe the probable evolutionary process from the formation of Earth to the emergence of life, which was deduced by using a common event—the establishment of the first genetic code encoding [GADV]-amino acids—as a juncture for the results obtained from the two approaches.

  19. Perceived Effects of Pornography on the Couple Relationship: Initial Findings of Open-Ended, Participant-Informed, "Bottom-Up" Research.

    Science.gov (United States)

    Kohut, Taylor; Fisher, William A; Campbell, Lorne

    2017-02-01

    The current study adopted a participant-informed, "bottom-up," qualitative approach to identifying perceived effects of pornography on the couple relationship. A large sample (N = 430) of men and women in heterosexual relationships in which pornography was used by at least one partner was recruited through online (e.g., Facebook, Twitter, etc.) and offline (e.g., newspapers, radio, etc.) sources. Participants responded to open-ended questions regarding perceived consequences of pornography use for each couple member and for their relationship in the context of an online survey. In the current sample of respondents, "no negative effects" was the most commonly reported impact of pornography use. Among remaining responses, positive perceived effects of pornography use on couple members and their relationship (e.g., improved sexual communication, more sexual experimentation, enhanced sexual comfort) were reported frequently; negative perceived effects of pornography (e.g., unrealistic expectations, decreased sexual interest in partner, increased insecurity) were also reported, albeit with considerably less frequency. The results of this work suggest new research directions that require more systematic attention.

  20. Referent Salience Affects Second Language Article Use

    Science.gov (United States)

    Trenkic, Danijela; Pongpairoj, Nattama

    2013-01-01

    The effect of referent salience on second language (L2) article production in real time was explored. Thai (-articles) and French (+articles) learners of English described dynamic events involving two referents, one visually cued to be more salient at the point of utterance formulation. Definiteness marking was made communicatively redundant with…

  1. Visualization of neural networks using saliency maps

    DEFF Research Database (Denmark)

    Mørch, Niels J.S.; Kjems, Ulrik; Hansen, Lars Kai

    1995-01-01

    The saliency map is proposed as a new method for understanding and visualizing the nonlinearities embedded in feedforward neural networks, with emphasis on the ill-posed case, where the dimensionality of the input-field by far exceeds the number of examples. Several levels of approximations...

  2. Unified Saliency Detection Model Using Color and Texture Features.

    Science.gov (United States)

    Zhang, Libo; Yang, Lin; Luo, Tiejian

    2016-01-01

    Saliency detection has attracted the attention of many researchers and has become a very active area of research. Recently, many saliency detection models have been proposed and have achieved excellent performance in various fields. However, most of these models consider only low-level features. This paper proposes a novel saliency detection model that uses both color and texture features and incorporates higher-level priors. The SLIC superpixel algorithm is applied to form an over-segmentation of the image. Color and texture saliency maps are calculated based on the region-contrast method and adaptive weights. Higher-level priors, including a location prior and a color prior, are incorporated into the model to achieve better performance, and a full-resolution saliency map is obtained by up-sampling. Experimental results on three datasets demonstrate that the proposed saliency detection model outperforms the state-of-the-art models.
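
    The region-contrast step can be sketched minimally in numpy (uniform grid cells stand in for SLIC superpixels, and the exponential spatial weighting is an illustrative choice, not the authors' code):

    ```python
    import numpy as np

    def region_contrast_saliency(image, grid=4):
        """Toy region-contrast saliency: split the image into grid x grid
        regions (a stand-in for SLIC superpixels) and score each region by
        its color contrast to all other regions, weighted by spatial
        proximity so that nearby regions contribute most."""
        h, w, _ = image.shape
        gh, gw = h // grid, w // grid
        means, centers = [], []
        for i in range(grid):
            for j in range(grid):
                patch = image[i * gh:(i + 1) * gh, j * gw:(j + 1) * gw]
                means.append(patch.reshape(-1, 3).mean(axis=0))
                centers.append(((i + 0.5) / grid, (j + 0.5) / grid))
        means, centers = np.array(means), np.array(centers)
        saliency = np.zeros(len(means))
        for r in range(len(means)):
            color_d = np.linalg.norm(means - means[r], axis=1)
            spatial_d = np.linalg.norm(centers - centers[r], axis=1)
            saliency[r] = np.sum(color_d * np.exp(-spatial_d / 0.25))
        # normalize to [0, 1]
        return (saliency - saliency.min()) / (np.ptp(saliency) + 1e-12)

    # a dark image with one bright square: its region should score highest
    img = np.zeros((64, 64, 3))
    img[16:32, 16:32] = 1.0
    s = region_contrast_saliency(img, grid=4)
    print(int(np.argmax(s)))  # index of the region holding the bright square
    ```

    In the real model this per-region score would be fused with a texture-contrast map and the location/color priors before up-sampling.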

  3. Role of zinc interstitials and oxygen vacancies of ZnO in photocatalysis: a bottom-up approach to control defect density.

    Science.gov (United States)

    Kayaci, Fatma; Vempati, Sesha; Donmez, Inci; Biyikli, Necmi; Uyar, Tamer

    2014-09-07

    Oxygen vacancies (V(O)s) in ZnO are well known to enhance photocatalytic activity (PCA) despite various other intrinsic crystal defects. In this study, we aim to elucidate the effect of zinc interstitials (Zn(i)s) and V(O)s on PCA, which is of applied as well as fundamental interest. To achieve this, the major hurdle of fabricating ZnO with controlled defect density must be overcome; defect-level control in ZnO is acknowledged to be significantly difficult. In the present context, we fabricated nanostructures and thoroughly characterized their morphological (SEM, TEM), structural (XRD, TEM), chemical (XPS), and optical (photoluminescence, PL) properties. To fabricate the nanostructures, we adopted atomic layer deposition (ALD), a powerful bottom-up approach. To control defects, however, we chose polysulfone electrospun nanofibers as a substrate, on which non-uniform adsorption of the ALD precursors is inevitable because of differences in the hydrophilicity of the functional groups. For the first 100 cycles, Zn(i)s were predominant in the ZnO quantum dots (QDs), while the presence of V(O)s was negligible. As the ALD cycle number increased, V(O)s were introduced, whereas the density of Zn(i)s remained unchanged. We employed PL spectra to identify and quantify the density of each defect for all samples. PCA was performed on all samples, and the percent change in the decay constant for each sample was juxtaposed with the relative densities of Zn(i)s and V(O)s. A comparison of the relative defect densities suggested that Zn(i)s are less efficient than V(O)s because of differences in the intrinsic nature and the physical accessibility of the defects. Other reasons for the efficiency differences are also discussed.

  4. Contribution of Oil and Gas Production to Atmospheric CH4 in the South-Central United States: Reconciling Bottom-up and Top-down Estimates

    Science.gov (United States)

    Liu, Z.; Pinto, J. P.; Turner, A. J.; Bruhwiler, L.; Henze, D. K.; Brioude, J. F.; Bousserez, N.; Sargsyan, K.; Safta, C.; Najm, H. N.; LaFranchi, B. W.; Bambha, R.; Michelsen, H. A.

    2014-12-01

    Estimates of anthropogenic CH4 emissions in the United States have been largely inconsistent, particularly for oil and gas production (OGP) in the South-Central United States. We have quantified the contribution of OGP to the South-Central US (TX/OK/KS) CH4 budget through atmospheric regional transport modeling with the Community Multi-scale Air Quality (CMAQ) model, driven by a new process-based, spatially resolved OGP CH4 emissions inventory. We employed Bayesian inference to calibrate CMAQ emissions inputs using continuous CH4 measurements at the DOE Southern Great Plains (SGP) central facility and evaluated model predictions against a subset of aircraft and surface flask measurements that are assimilated by NOAA's CarbonTracker-CH4. Our results suggest that OGP emissions are the largest source of CH4 observed at the DOE SGP site and the largest source of CH4 in TX/OK/KS, constituting ~45% of total CH4 emissions in the region. The next largest source in the region is livestock, with other sources being relatively less important. We estimate that OGP emissions in TX/OK/KS contribute about one half of national total OGP emissions. Using continuous CH4 measurements, we found evidence of rapid nocturnal transport by the Great Plains low-level jet (LLJ) and of sporadic oil and gas emissions. Our study demonstrates the importance of improved knowledge of the spatial and temporal features of oil and gas emissions in reconciling CH4 budgets derived using bottom-up and top-down approaches at regional and national scales.
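
    The Bayesian-inference step, calibrating an emissions input against continuous measurements, can be illustrated with a deliberately simplified grid approximation: a single unknown scale factor on the prior inventory, with synthetic observations. None of the numbers or names below come from the study.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Toy forward model: the observed enhancement is proportional to the
    # unknown true emission rate; the prior inventory guesses a scale of 1.0.
    true_scale = 1.6                                   # inventory low by 60%
    obs = true_scale + 0.1 * rng.standard_normal(50)   # noisy observations

    scales = np.linspace(0.5, 3.0, 501)                # candidate scale factors
    log_prior = -0.5 * ((scales - 1.0) / 0.5) ** 2     # prior centered on inventory
    log_like = np.array([-0.5 * np.sum(((obs - s) / 0.1) ** 2) for s in scales])
    log_post = log_prior + log_like
    post = np.exp(log_post - log_post.max())           # stabilize before exp
    post /= post.sum()

    posterior_mean = float(np.sum(scales * post))
    print(round(posterior_mean, 2))
    ```

    With enough observations the likelihood dominates the inventory-centered prior and the posterior mean recovers the underestimated scale; real calibrations of this kind work over many sectors and grid cells at once.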

  5. Top-down/bottom-up description of electricity sector for Switzerland using the GEM-E3 computable general equilibrium model

    Energy Technology Data Exchange (ETDEWEB)

    Krakowski, R. A

    2006-06-15

    Participation of the Paul Scherrer Institute (PSI) in the advancement and extension of the multi-region Computable General Equilibrium (CGE) model GEM-E3 (CES/KUL, 2002) focused primarily on two top-level facets: a) extension of the model database and model calibration, particularly as related to the second component of this study; and b) advancement of the dynamics of innovation and investment, primarily through the incorporation of Exogenous Technical Learning (ETL) into the Bottom-Up (BU, technology-based) part of the dynamic upgrade; this latter activity also included completing the dynamic coupling of the BU description of the electricity sector with the 'Top-Down' (TD, econometric) description of the economy inherent to the GEM-E3 CGE model. The results of this two-component study are described in two parts that have been combined in this single summary report: Part I describes the methodology and gives illustrative results from the BU-TD integration, as well as describing the approach to, and giving preliminary results from, incorporating an ETL description into the BU component of the overall model; Part II reports on the calibration component of the task in terms of: a) formulating a BU technology database for Switzerland based on previous work; b) incorporating that database into the GEM-E3 model; and c) calibrating the BU database against the TD database embodied in the (Swiss) Social Accounting Matrix (SAM). The BU-TD coupling and the ETL incorporation described in Part I represent the major effort of this investigation, but this effort could not have been completed without the calibration preamble reported herein as Part II. A brief summary of the scope of each of these key study components is given. (author)

  6. Energetic Bottomup in the Low Countries. Energy transition from the bottom-up. On Happy energetic civilians, Solar and wind cooperatives, New utility companies; Energieke BottomUp in Lage Landen. De Energietransitie van Onderaf. Over Vrolijke energieke burgers, Zon- en windcooperaties, Nieuwe nuts

    Energy Technology Data Exchange (ETDEWEB)

    Schwencke, A.M.

    2012-08-15

    This essay is an outline of the 'energy transition from the bottom-up'. Its leading questions are: (1) what are the actual initiatives; (2) who is involved; (3) how does one work (organization, business models); (4) why are people active in this field; (5) what good is it; (6) what is the aim? The essay is based on public information sources (websites, blogs, publications) and interviews with the people involved.

  7. Mesh saliency with adaptive local patches

    Science.gov (United States)

    Nouri, Anass; Charrier, Christophe; Lézoray, Olivier

    2015-03-01

    3D object shapes (represented by meshes) include areas that attract the visual attention of human observers and others that are less attractive or not attractive at all. This visual attention depends on the degree of saliency exhibited by these areas. In this paper, we propose a technique for detecting salient regions in meshes. To do so, we define a local surface descriptor based on local patches of adaptive size filled with a local height field. The saliency of a mesh vertex is then defined as its degree measure, with edge weights computed from adaptive patch similarities. Our approach is compared to the state-of-the-art and achieves competitive results. A study evaluating the influence of the parameters of the approach is also carried out, and the strength and stability of the approach with respect to noise and simplification are studied.
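
    One plausible reading of this degree-based definition can be sketched with a toy graph over patch descriptors. The descriptor values, the Gaussian similarity, and the inversion of the degree (so that vertices unlike their neighbours score high) are illustrative choices, not the authors' exact formulation.

    ```python
    import numpy as np

    def patch_saliency(descriptors, sigma=0.5):
        """Toy degree-based mesh saliency: each vertex carries a local-patch
        descriptor; edge weights are Gaussian similarities of descriptors,
        and a vertex whose patch differs most from all others (low weighted
        degree) is treated as most salient."""
        d = np.linalg.norm(descriptors[:, None, :] - descriptors[None, :, :], axis=2)
        w = np.exp(-(d ** 2) / (2 * sigma ** 2))
        np.fill_diagonal(w, 0.0)
        degree = w.sum(axis=1)            # high degree = similar to everything
        sal = degree.max() - degree       # invert: dissimilar vertices stand out
        return sal / (sal.max() + 1e-12)

    # nine flat patches and one 'bump' descriptor: the bump is most salient
    desc = np.zeros((10, 4))
    desc[7] = 2.0
    s = patch_saliency(desc)
    print(int(np.argmax(s)))
    ```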

  8. Enhancing the Wettability of High Aspect-Ratio Through-Silicon Vias Lined with LPCVD Silicon Nitride or PE-ALD Titanium Nitride for Void-Free Bottom-Up Copper Electroplating

    NARCIS (Netherlands)

    Saadaoui, M.; van Zeijl, H.; Wien, W. H. A.; Pham, H. T. M.; Kwakernaak, C.; Knoops, H. C. M.; Kessels, W. M. M.; R. van de Sanden,; Voogt, F. C.; Roozeboom, F.; Sarro, P. M.

    2011-01-01

    One of the critical steps toward producing void-free and uniform bottom-up copper electroplating in high aspect-ratio (AR) through-silicon vias (TSVs) is the ability of the copper electrolyte to spontaneously flow through the entire depth of the via. This can be accomplished by reducing the concentr

  9. On an elementary definition of visual saliency

    DEFF Research Database (Denmark)

    Loog, Marco

    2008-01-01

    on probabilistic and information or decision theoretic considerations have been proposed. These provide experimentally successful, appealing, low-level, operational, and elementary definitions of visual saliency (see eg, Bruce, 2005 Neurocomputing 65 125 - 133). Here, I demonstrate that, in fact, all......, surprisingly, without the need to refer back to previously observed data. Furthermore, it follows that it is actually not the statistics of the visual scene that would determine what is salient but the low-level features that probe the scene....

  10. A Statistical Method for Estimating Missing GHG Emissions in Bottom-Up Inventories: The Case of Fossil Fuel Combustion in Industry in the Bogota Region, Colombia

    Science.gov (United States)

    Jimenez-Pizarro, R.; Rojas, A. M.; Pulido-Guio, A. D.

    2012-12-01

    The development of environmentally, socially and financially suitable greenhouse gas (GHG) mitigation portfolios requires detailed disaggregation of emissions by activity sector, preferably at the regional level. Bottom-up (BU) emission inventories are intrinsically disaggregated but, although detailed, are frequently incomplete. Missing and erroneous activity data are common in emission inventories of GHG, criteria, and toxic pollutants, even in developed countries, and the fraction of such data can be substantially larger in developing-country inventories. In addition, the cost and time required to obtain or correct this information can be prohibitive or can delay inventory development. This is particularly true for regional BU inventories in the developing world. Moreover, a common practice is to disregard missing data or to arbitrarily impute low default activity or emission values, which typically leads to significant underestimation of total emissions. Our investigation focuses on GHG emissions from fossil fuel combustion in industry in the Bogota Region, composed of Bogota and its adjacent, semi-rural area of influence, the Province of Cundinamarca. We found that the BU inventories for this sub-category substantially underestimate emissions when compared to top-down (TD) estimations based on sub-sector-specific national fuel consumption data and regional energy intensities. Although both BU inventories have a substantial number of missing and evidently erroneous entries, i.e., information on fuel consumption per combustion unit per company, the validated energy use and emission data display clear and smooth frequency distributions, which can be adequately fitted to bimodal log-normal distributions. This is not unexpected, as industrial plant sizes are typically log-normally distributed.
Moreover, our statistical tests suggest that industrial sub-sectors, as classified by the International Standard Industrial Classification (ISIC
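
    The imputation idea, fitting a log-normal to the validated entries and using the fitted distribution to fill in the missing ones, can be sketched as follows. For brevity this uses a single log-normal rather than the bimodal mixture the study fits, and all data are synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Synthetic inventory: plant fuel use is log-normally distributed and
    # 20% of entries are missing. Treating missing entries as zero (a naive
    # but common practice) underestimates the regional total.
    true = rng.lognormal(mean=3.0, sigma=0.8, size=500)
    mask = rng.random(500) < 0.8          # True = entry was reported
    reported = true[mask]

    # Fit a log-normal to reported entries (method of moments in log space)
    mu, sigma = np.log(reported).mean(), np.log(reported).std()
    n_missing = int((~mask).sum())

    # Impute each missing entry with the fitted distribution's mean
    lognormal_mean = np.exp(mu + sigma ** 2 / 2)
    total_naive = reported.sum()                         # missing treated as zero
    total_imputed = reported.sum() + n_missing * lognormal_mean

    print(total_imputed > total_naive)
    ```

    The imputed total lands much closer to the (here known) true total than the naive one, which is the statistical point the paper makes against default-zero imputation.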

  11. Bottom-up workflow modeling approach for business changes

    Institute of Scientific and Technical Information of China (English)

    严志民; 徐玮

    2011-01-01

    To meet the adaptability requirements of workflow in a complicated and rapidly changing business environment, a new modeling method named Declarative ARTifact-centric workflow (DART) was proposed. The business process is analyzed in a bottom-up manner so that its building blocks, such as artifacts (work orders), activities, and business policies, are extracted, and the descriptions of business components and business changes are separated into different layers. For its execution semantics, DART uses Finite State Automata (FSA) to describe a single artifact's lifecycle and Labeled Transition Systems (LTS) to describe the workflow and the interactions among multiple artifacts. In addition, the path from the DART modeling method to a deployable workflow implementation is discussed. The method was tested at the Hangzhou real estate administration bureau, and its practical application is described.
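
    The FSA half of the execution semantics, one automaton per artifact lifecycle, can be sketched minimally. The states and events below are invented for illustration; they are not taken from the paper or the Hangzhou deployment.

    ```python
    # Minimal finite-state automaton for a single artifact's (work order's)
    # lifecycle, in the spirit of the DART approach: the transition table is
    # declarative data, so business changes edit the table, not control flow.
    LIFECYCLE = {
        ("created", "submit"):  "pending",
        ("pending", "approve"): "active",
        ("pending", "reject"):  "closed",
        ("active",  "finish"):  "closed",
    }

    def run(events, state="created"):
        """Drive the artifact through a sequence of events, rejecting any
        event the current state does not allow."""
        for e in events:
            key = (state, e)
            if key not in LIFECYCLE:
                raise ValueError(f"event {e!r} not allowed in state {state!r}")
            state = LIFECYCLE[key]
        return state

    print(run(["submit", "approve", "finish"]))  # closed
    ```

    An LTS over several such automata would add synchronization labels for the inter-artifact interactions; the single-artifact case is the building block.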

  12. Benchmarking Non-Hardware Balance-of-System (Soft) Costs for U.S. Photovoltaic Systems Using a Bottom-Up Approach and Installer Survey

    Energy Technology Data Exchange (ETDEWEB)

    Ardani, Kristen [National Renewable Energy Lab. (NREL), Golden, CO (United States); Margolis, Robert [National Renewable Energy Lab. (NREL), Golden, CO (United States); Feldman, David [National Renewable Energy Lab. (NREL), Golden, CO (United States); Ong, Sean [National Renewable Energy Lab. (NREL), Golden, CO (United States); Barbose, Galen [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Wiser, Ryan [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-11-01

    This report presents results from the first U.S. Department of Energy (DOE) sponsored, bottom-up data-collection and analysis of non-hardware balance-of-system costs—often referred to as “business process” or “soft” costs—for residential and commercial photovoltaic (PV) systems. Annual expenditure and labor-hour-productivity data are analyzed to benchmark 2010 soft costs related to the DOE priority areas of (1) customer acquisition; (2) permitting, inspection, and interconnection; (3) installation labor; and (4) installer labor for arranging third-party financing. Annual expenditure and labor-hour data were collected from 87 PV installers. After eliminating outliers, the survey sample consists of 75 installers, representing approximately 13% of all residential PV installations and 4% of all commercial installations added in 2010. Including assumed permitting fees, in 2010 the average soft costs benchmarked in this analysis total $1.50/W for residential systems (ranging from $0.66/W to $1.66/W between the 20th and 80th percentiles). For commercial systems, the median 2010 benchmarked soft costs (including assumed permitting fees) are $0.99/W for systems smaller than 250 kW (ranging from $0.51/W to $1.45/W between the 20th and 80th percentiles) and $0.25/W for systems larger than 250 kW (ranging from $0.17/W to $0.78/W between the 20th and 80th percentiles). Additional soft costs not benchmarked in the present analysis (e.g., installer profit, overhead, financing, and contracting) are significant and would add to these figures. The survey results provide a benchmark for measuring—and helping to accelerate—progress over the next decade toward achieving the DOE SunShot Initiative’s soft-cost-reduction targets. We conclude that the selected non-hardware business processes add considerable cost to U.S. PV systems, constituting 23% of residential PV system price, 17% of small commercial system price, and 5% of large commercial system price (in 2010

  13. Benefits of China's efforts in gaseous pollutant control indicated by the bottom-up emissions and satellite observations 2000-2014

    Science.gov (United States)

    Xia, Yinmin; Zhao, Yu; Nielsen, Chris P.

    2016-07-01

    To evaluate the effectiveness of national air pollution control policies, the emissions of SO2, NOX, CO and CO2 in China are estimated using bottom-up methods for the most recent 15-year period (2000-2014). Vertical column densities (VCDs) from satellite observations are used to test the temporal and spatial patterns of emissions and to explore the ambient levels of gaseous pollutants across the country. The inter-annual trends in emissions and VCDs match well except for SO2. Such comparison is improved with an optimistic assumption in emission estimation that the emission standards for given industrial sources issued after 2010 have been fully enforced. Underestimation of emission abatement and enhanced atmospheric oxidization likely contribute to the discrepancy between SO2 emissions and VCDs. As suggested by VCDs and emissions estimated under the assumption of full implementation of emission standards, the control of SO2 in the 12th Five-Year Plan period (12th FYP, 2011-2015) is estimated to be more effective than that in the 11th FYP period (2006-2010), attributed to improved use of flue gas desulfurization in the power sector and implementation of new emission standards in key industrial sources. The opposite was true for CO, as energy efficiency improved more significantly from 2005 to 2010 due to closures of small industrial plants. Iron & steel production is estimated to have had particularly strong influence on temporal and spatial patterns of CO. In contrast to fast growth before 2011 driven by increased coal consumption and limited controls, NOX emissions decreased from 2011 to 2014 due to the penetration of selective catalytic/non-catalytic reduction systems in the power sector. This led to reduced NO2 VCDs, particularly in relatively highly polluted areas such as the eastern China and Pearl River Delta regions. 
In developed areas, transportation is playing an increasingly important role in air pollution, as suggested by the increased ratio of NO2 to SO

  14. DISC: Deep Image Saliency Computing via Progressive Representation Learning.

    Science.gov (United States)

    Chen, Tianshui; Lin, Liang; Liu, Lingbo; Luo, Xiaonan; Li, Xuelong

    2016-06-01

    Salient object detection increasingly receives attention as an important component or step in several pattern recognition and image processing tasks. Although a variety of powerful saliency models have been proposed, they usually involve heavy feature (or model) engineering based on priors (or assumptions) about the properties of objects and backgrounds. Inspired by the effectiveness of recently developed feature learning, we present a novel deep image saliency computing (DISC) framework for fine-grained image saliency computing. We model image saliency from both coarse- and fine-level observations and utilize deep convolutional neural networks (CNNs) to learn the saliency representation in a progressive manner. Specifically, our saliency model is built upon two stacked CNNs. The first CNN generates a coarse-level saliency map by taking the overall image as input, roughly identifying salient regions in the global context; superpixel-based local context information is integrated into this first CNN to refine the coarse-level map. Guided by the coarse saliency map, the second CNN focuses on the local context to produce a fine-grained and accurate saliency map while preserving object details. For a test image, the two CNNs collaboratively conduct the saliency computation in one shot. Our DISC framework uniformly highlights the objects of interest against complex backgrounds while preserving object details well. Extensive experiments on several standard benchmarks suggest that DISC outperforms other state-of-the-art methods and generalizes well across datasets without additional training. The executable version of DISC is available online: http://vision.sysu.edu.cn/projects/DISC.
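
    The CNNs themselves are beyond a short sketch, but the coarse-to-fine composition, a global coarse map gating a full-resolution local map, can be illustrated with plain array operations. This is purely a structural analogy for the DISC pipeline, not the networks.

    ```python
    import numpy as np

    def coarse_map(image, factor=8):
        """Stand-in for the first 'coarse' stage: global contrast computed on
        a subsampled copy of the image."""
        small = image[::factor, ::factor]
        return np.abs(small - small.mean())

    def refine(image, coarse, factor=8):
        """Stand-in for the second stage: full-resolution local contrast,
        gated by the upsampled coarse map so only globally salient regions
        survive."""
        up = np.kron(coarse, np.ones((factor, factor)))[:image.shape[0], :image.shape[1]]
        local = np.abs(image - image.mean())
        return up * local

    # one bright square on a dark background: the refined map should peak
    # inside the square and stay low elsewhere
    img = np.zeros((64, 64))
    img[20:30, 20:30] = 1.0
    fine = refine(img, coarse_map(img))
    ys, xs = np.nonzero(fine > 0.5 * fine.max())
    print(bool(20 <= ys.min() and ys.max() < 30))
    ```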

  15. Saliency Detection via Combining Region-Level and Pixel-Level Predictions with CNNs

    OpenAIRE

    Tang, Youbao; Wu, Xiangqian

    2016-01-01

    This paper proposes a novel saliency detection method by combining region-level saliency estimation and pixel-level saliency prediction with CNNs (denoted as CRPSD). For pixel-level saliency prediction, a fully convolutional neural network (called pixel-level CNN) is constructed by modifying the VGGNet architecture to perform multi-scale feature learning, based on which an image-to-image prediction is conducted to accomplish the pixel-level saliency detection. For region-level saliency estima...

  16. Toward systematic integration between self-determination theory and motivational interviewing as examples of top-down and bottom-up intervention development: autonomy or volition as a fundamental theoretical principle.

    Science.gov (United States)

    Vansteenkiste, Maarten; Williams, Geoffrey C; Resnicow, Ken

    2012-03-02

    Clinical interventions can be developed through two distinct pathways. In the first, which we call top-down, a well-articulated theory drives the development of the intervention, whereas in a bottom-up approach, clinical experience, more so than a dedicated theoretical perspective, drives the intervention. Using this dialectic, this paper discusses Self-Determination Theory (SDT) and Motivational Interviewing (MI) as prototypical examples of top-down and bottom-up approaches, respectively. We sketch the different starting points, foci, and developmental processes of SDT and MI, but equally note their complementary character and the potential for systematic integration between the two approaches. Nevertheless, for a deeper integration to take place, we contend that MI researchers might want to embrace autonomy as a fundamental basic process underlying therapeutic change, and we discuss the advantages of doing so.

  17. The Aberrant Salience Inventory: A New Measure of Psychosis Proneness

    Science.gov (United States)

    Cicero, David C.; Kerns, John G.; McCarthy, Denis M.

    2010-01-01

    Aberrant salience is the unusual or incorrect assignment of salience, significance, or importance to otherwise innocuous stimuli and has been hypothesized to be important for psychosis and psychotic disorders such as schizophrenia. Despite the importance of this concept in psychosis research, no questionnaire measures are available to assess…

  18. Moving object detection in aerial video based on spatiotemporal saliency

    Institute of Scientific and Technical Information of China (English)

    Shen Hao; Li Shuxiao; Zhu Chengfei; Chang Hongxing; Zhang Jinglan

    2013-01-01

    In this paper, the problem of moving object detection in aerial video is addressed. While motion cues have been extensively exploited in the literature, how to use spatial information is still an open problem. To deal with this issue, we propose a novel hierarchical moving target detection method based on spatiotemporal saliency. Temporal saliency is used to get a coarse segmentation, and spatial saliency is extracted to obtain the object's appearance details in candidate motion regions. Finally, by combining temporal and spatial saliency information, we can get refined detection results. Additionally, in order to give a full description of the object distribution, spatial saliency is detected at both the pixel and region levels based on local contrast. Experiments conducted on the VIVID dataset show that the proposed method is efficient and accurate.
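
    A minimal sketch of the temporal/spatial combination: frame differencing as the temporal cue, local contrast as the spatial cue. The window size and the multiplicative fusion are illustrative assumptions, not the paper's exact method.

    ```python
    import numpy as np

    def temporal_saliency(prev, curr):
        """Coarse motion cue: absolute frame difference."""
        return np.abs(curr - prev)

    def spatial_saliency(frame, k=5):
        """Local-contrast cue: deviation from the mean of a k x k
        neighbourhood (naive mean filter, simple rather than fast)."""
        pad = k // 2
        padded = np.pad(frame, pad, mode="edge")
        local_mean = np.zeros_like(frame)
        for i in range(frame.shape[0]):
            for j in range(frame.shape[1]):
                local_mean[i, j] = padded[i:i + k, j:j + k].mean()
        return np.abs(frame - local_mean)

    prev = np.zeros((32, 32))
    curr = np.zeros((32, 32))
    curr[10:14, 10:14] = 1.0              # small object moved into view
    combined = temporal_saliency(prev, curr) * spatial_saliency(curr)
    ys, xs = np.nonzero(combined > 0.5 * combined.max())
    print(bool(ys.min() >= 10 and ys.max() < 14))
    ```

    The product keeps only locations that are both moving and locally contrastive, which is the intuition behind using temporal saliency for coarse segmentation and spatial saliency for refinement.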

  19. Improving Saliency Models by Predicting Human Fixation Patches

    KAUST Repository

    Dubey, Rachit

    2015-04-16

    There is growing interest in studying the Human Visual System (HVS) to supplement and improve the performance of computer vision tasks. A major challenge for current visual saliency models is predicting saliency in cluttered scenes (i.e. high false positive rate). In this paper, we propose a fixation patch detector that predicts image patches that contain human fixations with high probability. Our proposed model detects sparse fixation patches with an accuracy of 84 % and eliminates non-fixation patches with an accuracy of 84 % demonstrating that low-level image features can indeed be used to short-list and identify human fixation patches. We then show how these detected fixation patches can be used as saliency priors for popular saliency models, thus, reducing false positives while maintaining true positives. Extensive experimental results show that our proposed approach allows state-of-the-art saliency methods to achieve better prediction performance on benchmark datasets.

  20. Revealing Event Saliency in Unconstrained Video Collection.

    Science.gov (United States)

    Zhang, Dingwen; Han, Junwei; Jiang, Lu; Ye, Senmao; Chang, Xiaojun

    2017-04-01

    Recent progress in multimedia event detection has enabled us to find videos about a predefined event in a large-scale video collection. Research towards more intrinsic, unsupervised video understanding is an interesting but understudied field. Specifically, given a collection of videos sharing a common event of interest, the goal is to discover the salient fragments, i.e., the short video fragments that can concisely portray the underlying event of interest, from each video. To explore this novel direction, this paper proposes an unsupervised event saliency revealing framework. It first extracts features from multiple modalities to represent each shot in the given video collection. Then, these shots are clustered to build the cluster-level event saliency revealing framework, which explores useful information cues (i.e., the intra-cluster prior, inter-cluster discriminability, and inter-cluster smoothness) through a concise optimization model. Compared with existing methods, our approach highlights the intrinsic stimulus of the unseen event within a video in an unsupervised fashion. Thus, it could potentially benefit a wide range of multimedia tasks such as video browsing, understanding, and search. To quantitatively verify the proposed method, we systematically compare it to a number of baseline methods on the TRECVID benchmarks. Experimental results demonstrate its effectiveness and efficiency.

  1. The Social Salience Hypothesis of Oxytocin.

    Science.gov (United States)

    Shamay-Tsoory, Simone G; Abu-Akel, Ahmad

    2016-02-01

    Oxytocin is a nonapeptide that also serves as a neuromodulator in the human central nervous system. Over the last decade, a sizeable body of literature has examined its effects on social behavior in humans. These studies show that oxytocin modulates various aspects of social behavior such as empathy, trust, in-group preference, and memory of socially relevant cues. Several theoretical formulations have attempted to explain the effects of oxytocin. The prosocial account argues that oxytocin mainly enhances affiliative prosocial behaviors; the fear/stress theory suggests that oxytocin affects social performance by attenuating stress; and the in-/out-group approach proposes that oxytocin regulates cooperation and conflict among humans in the context of intergroup relations. Nonetheless, accumulating evidence reveals that the effects of oxytocin depend on a variety of contextual aspects and on the individual's characteristics, and can include antisocial effects such as aggression and envy. In an attempt to reconcile these accounts, we suggest a theoretical framework that focuses on the overarching role of oxytocin in regulating the salience of social cues through its interaction with the dopaminergic system. Crucially, the salience effect modulates attention-orienting responses to external contextual social cues (e.g., competitive vs. cooperative environment) but depends on baseline individual differences such as gender, personality traits, and degree of psychopathology. This view could have important implications for the therapeutic applications of oxytocin in conditions characterized by aberrant social behavior.

  2. Direct current stimulation (tDCS) reveals parietal asymmetry in local/global and salience-based selection.

    Science.gov (United States)

    Bardi, Lara; Kanai, Ryota; Mapelli, Daniela; Walsh, Vincent

    2013-03-01

Data from neuropsychology and neuroimaging studies indicate hemispheric asymmetries in processing an object's global form versus its local parts. However, the attentional mechanisms subtending visual selection of different levels of information are poorly understood. The classical left hemisphere/local-right hemisphere/global dichotomy has recently been challenged by studies linking the asymmetry of activation in the posterior parietal cortex (PPC) with the relative salience of the stimulus rather than with the local/global level. The present study aimed to assess hemispheric asymmetry in local/global and salience-based selection in hierarchical stimuli by using transcranial direct current stimulation (tDCS). To this end, tDCS was applied to the PPC of both hemispheres. Our data revealed that tDCS affected the selection of the target on the basis of its relative salience in a manner that depended on the tDCS polarity applied to the two hemispheres. This result is in line with previous findings that the left PPC is critically involved in attention for low-salience stimuli in the presence of high-salience distractor information, while the right PPC is involved in attending to more salient stimuli. Hemispheric asymmetries were also found in local/global selection. Overall, the results suggest that neural activation in the PPC is related to both the salience and the level of the stimulus representations mediating responses to hierarchical stimuli. The comparison of the results from Experiments 1 and 2 in local/global-based selection suggests that the effect of stimulation could be completely opposite depending on subtle differences in the demands of attentional control (sustained attention vs. task switching).

  3. Attention shift-based multiple saliency object segmentation

    Science.gov (United States)

    Wu, Chang-Wei; Zhao, Hou-Qiang; Cao, Song-Xiao; Xiang, Ke; Wang, Xuan-Yin

    2016-09-01

Object segmentation is an important but highly challenging problem in computer vision and image processing. An attention shift-based multiple saliency object segmentation model, called ASMSO, is introduced. The proposed ASMSO can produce a pool of potential object regions for each saliency object and is applicable to multiple saliency object segmentation. The potential object regions are produced by combining the methods of gPb-owt-ucm and min-cut graphs, whereas the saliency objects are detected by a visual attention model with an attention shift mechanism. In order to deal with various scenes, ASMSO incorporates different features, including not only traditional features such as color, uniformity, and texture, but also a new position feature originating from the proximity principle of Gestalt theory. Experiments on the training set of the PASCAL VOC2012 segmentation dataset not only show that the traditional color feature and the proposed position feature work much better than the texture and uniformity features, but also prove that ASMSO is suitable for multiple object segmentation. In addition, experiments on a traditional saliency dataset show that ASMSO can also be applied to traditional saliency object segmentation and performs much better than the state-of-the-art method.

  4. Salience Effects in the North-West of England

    Directory of Open Access Journals (Sweden)

    Sandra Jansen

    2014-06-01

The question of how we can define salience, what properties it includes, and how we can quantify it has been discussed widely over the past thirty years, but we still have more questions than answers about this phenomenon, e.g. not only how salience arises, but also how we can define it. However, despite the lack of a clear definition, salience is often taken into account as an explanatory factor in language change. The scientific discourse on salience has in most cases revolved around phonetic features, while hardly any variables on other linguistic levels have been investigated in terms of their salience. Hence, one goal of this paper is to argue for an expanded view of salience in the sociolinguistic context. This article investigates the variation and change of two groups of variables in Carlisle, an urban speech community in the north-west of England. I analyse the variable (th) and in particular the replacement of /θ/ with [f], which is widely known as th-fronting. The use of three discourse markers is also examined. Both groups of features will then be discussed in the light of sociolinguistic salience.

  5. A computational substrate for incentive salience.

    Science.gov (United States)

    McClure, Samuel M; Daw, Nathaniel D; Montague, P Read

    2003-08-01

    Theories of dopamine function are at a crossroads. Computational models derived from single-unit recordings capture changes in dopaminergic neuron firing rate as a prediction error signal. These models employ the prediction error signal in two roles: learning to predict future rewarding events and biasing action choice. Conversely, pharmacological inhibition or lesion of dopaminergic neuron function diminishes the ability of an animal to motivate behaviors directed at acquiring rewards. These lesion experiments have raised the possibility that dopamine release encodes a measure of the incentive value of a contemplated behavioral act. The most complete psychological idea that captures this notion frames the dopamine signal as carrying 'incentive salience'. On the surface, these two competing accounts of dopamine function seem incommensurate. To the contrary, we demonstrate that both of these functions can be captured in a single computational model of the involvement of dopamine in reward prediction for the purpose of reward seeking.
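The dual role described in this abstract — a single prediction-error signal that both trains value estimates and biases behavior toward reward-predicting cues — can be illustrated with a minimal temporal-difference sketch. The toy state chain, learning rate, and discount factor below are illustrative choices, not the authors' simulation.

```python
def td_update(V, s, s_next, r, alpha=0.1, gamma=0.95):
    """One temporal-difference update; delta doubles as the
    dopamine-like prediction-error signal."""
    delta = r + gamma * V[s_next] - V[s]   # prediction error
    V[s] = V[s] + alpha * delta            # learning role
    return delta                           # biasing role (incentive salience)

# Toy chain: state 0 -> 1 -> 2, with reward only on entering state 2.
V = [0.0, 0.0, 0.0]
deltas = []
for _ in range(200):
    d0 = td_update(V, 0, 1, 0.0)
    d1 = td_update(V, 1, 2, 1.0)
    deltas.append(abs(d0) + abs(d1))
# Value propagates back along the chain, and prediction errors shrink
# as the reward becomes predicted by its antecedent cues.
```

After learning, the same `delta` that trained `V` can be read out at decision time to bias action selection toward the higher-valued transition, which is the sense in which one signal serves both accounts.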

  6. Dynamic pupillary exchange engages brain regions encoding social salience.

    Science.gov (United States)

    Harrison, Neil A; Gray, Marcus A; Critchley, Hugo D

    2009-01-01

    Covert exchange of autonomic responses may shape social affective behavior, as observed in mirroring of pupillary responses during sadness processing. We examined how, independent of facial emotional expression, dynamic coherence between one's own and another's pupil size modulates regional brain activity. Fourteen subjects viewed pairs of eye stimuli while undergoing fMRI. Using continuous pupillometry biofeedback, the size of the observed pupils was varied, correlating positively or negatively with changes in participants' own pupils. Viewing both static and dynamic stimuli activated right fusiform gyrus. Observing dynamically changing pupils activated STS and amygdala, regions engaged by non-static and salient facial features. Discordance between observed and observer's pupillary changes enhanced activity within bilateral anterior insula, left amygdala and anterior cingulate. In contrast, processing positively correlated pupils enhanced activity within left frontal operculum. Our findings suggest pupillary signals are monitored continuously during social interactions and that incongruent changes activate brain regions involved in tracking motivational salience and attentionally meaningful information. Naturalistically, dynamic coherence in pupillary change follows fluctuations in ambient light. Correspondingly, in social contexts discordant pupil response is likely to reflect divergence of dispositional state. Our data provide empirical evidence for an autonomically mediated extension of forward models of motor control into social interaction.

  7. Development Of A Web Service And Android 'APP' For The Distribution Of Rainfall Data. A Bottom-Up Remote Sensing Data Mining And Redistribution Project In The Age Of The 'Web 2.0'

    Science.gov (United States)

    Mantas, Vasco M.; Pereira, A. J. S. C.; Liu, Zhong

    2013-12-01

A project was devised to develop a set of freely available applications and web services that can (1) simplify access from mobile devices to TOVAS data and (2) support the development of new datasets through data repackaging and mash-ups. The bottom-up approach enables the multiplication of new services, often of limited direct interest to the organizations that produce the original, global datasets, but significant to small, local users. Through this multiplication of services, the development cost is transferred to the intermediate or end users and the entire process is made more efficient, even allowing new players to use the data in innovative ways.

  8. Motion saliency detection using a temporal fourier transform

    Science.gov (United States)

    Chen, Zhe; Wang, Xin; Sun, Zhen; Wang, Zhijian

    2016-06-01

Motion saliency detection aims at detecting the dynamic semantic regions in a video sequence and is very important for many vision tasks. This paper proposes a new motion saliency detection method, the Temporal Fourier Transform, for fast motion saliency detection. Unlike conventional motion saliency detection methods that use complex mathematical models or features, variations in the phase spectrum of consecutive frames are identified and extracted as the key to locating salient motion. As all the calculation is made on the temporal frequency spectrum, the model is independent of features, background models, or other forms of prior knowledge about scenes. The benefits of the proposed approach are evaluated on various videos in which the number of moving objects, illumination, and background all differ. Compared with state-of-the-art methods, our method achieves both good accuracy and fast computation.
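The core intuition — motion shows up as temporal phase structure, while static content does not — can be sketched as follows. This is a loose illustration under our own simplifications (temporal mean removal and phase-only reconstruction per pixel), not the authors' exact algorithm, and the toy video is synthetic.

```python
import numpy as np

def temporal_phase_saliency(frames):
    """frames: (T, H, W) array. Remove the temporal mean so static pixels
    carry no signal, then reconstruct each pixel's time series from its
    phase spectrum alone; impulsive (moving) pixels survive, static ones
    vanish."""
    x = frames - frames.mean(axis=0)
    F = np.fft.fft(x, axis=0)
    mag = np.abs(F)
    phase_only = np.where(mag > 1e-9, F / np.maximum(mag, 1e-12), 0)
    recon = np.fft.ifft(phase_only, axis=0).real
    return np.abs(recon).max(axis=0)   # per-pixel motion saliency map

T, H, W = 8, 8, 8
frames = np.full((T, H, W), 0.5)       # static gray background
for t in range(T):
    frames[t, 2, t] = 1.0              # one bright dot sweeping along row 2
sal = temporal_phase_saliency(frames)
# Pixels visited by the dot score high; the static background scores ~0.
```

No background model or feature extraction is involved, which matches the abstract's claim that the computation depends only on the temporal frequency spectrum.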

  9. A critical evaluation of two approaches to defining perceptual salience

    Directory of Open Access Journals (Sweden)

    Bethany MacLeod

    2015-01-01

The notion of perceptual salience is frequently invoked as an explanatory factor in discussions of various linguistic phenomena, but the way salience is defined varies between studies. This paper provides a critical evaluation of two approaches to operationalizing perceptual salience that have been applied to studies of phonetic accommodation: the criteria-list approach and the experimental approach. The purpose is to provide a starting point for researchers interested in exploring the role of perceptual salience in linguistic patterns, such as phonetic accommodation. In addition, the paper aims to consider the nature of the information captured by the different approaches, to explore how these approaches might be best used, and to examine how they reflect changes in theorizing on linguistic variables more generally.

  10. Mortality salience increases personal relevance of the norm of reciprocity.

    Science.gov (United States)

    Schindler, Simon; Reinhard, Marc-André; Stahlberg, Dagmar

    2012-10-01

    Research on terror management theory found evidence that people under mortality salience strive to live up to salient cultural norms and values, like egalitarianism, pacifism, or helpfulness. A basic, strongly internalized norm in most human societies is the norm of reciprocity: people should support those who supported them (i.e., positive reciprocity), and people should injure those who injured them (i.e., negative reciprocity), respectively. In an experiment (N = 98; 47 women, 51 men), mortality salience overall significantly increased personal relevance of the norm of reciprocity (M = 4.45, SD = 0.65) compared to a control condition (M = 4.19, SD = 0.59). Specifically, under mortality salience there was higher motivation to punish those who treated them unfavourably (negative norm of reciprocity). Unexpectedly, relevance of the norm of positive reciprocity remained unaffected by mortality salience. Implications and limitations are discussed.

  11. Are persistent delusions in schizophrenia associated with aberrant salience?

    Directory of Open Access Journals (Sweden)

    Rafeef Abboud

    2016-06-01

    Conclusion: These findings do not support the hypothesis that persistent delusions are related to aberrant motivational salience processing in TRS patients. However, they do support the view that patients with schizophrenia have impaired reward learning.

  12. Selective target processing: perceptual load or distractor salience?

    Science.gov (United States)

    Eltiti, Stacy; Wallace, Denise; Fox, Elaine

    2005-07-01

Perceptual load theory (Lavie, 1995) states that participants cannot engage in focused attention when shown displays containing a low perceptual load, because attentional resources are not exhausted, whereas in high-load displays attention is always focused, because attentional resources are exhausted. An alternative "salience" hypothesis holds that the salience of distractors, and not perceptual load per se, determines selective attention. Three experiments were conducted to investigate the influence that target and distractor onsets and offsets have on selective processing in a standard interference task. Perceptual load theory predicts that, regardless of target or distractor presentation (onset or offset), interference from ignored distractors should occur in low-load displays only. In contrast, the salience hypothesis predicts that interference should occur when the distractor appears as an onset and would occur for distractor offsets only when the target was also an offset. Interference may even occur in high-load displays if the distractor is more salient. The results supported the salience hypothesis.

  13. Properties of V1 neurons tuned to conjunctions of visual features: application of the V1 saliency hypothesis to visual search behavior.

    Directory of Open Access Journals (Sweden)

    Li Zhaoping

From a computational theory of V1, we formulate an optimization problem to investigate neural properties in the primary visual cortex (V1) from human reaction times (RTs) in visual search. The theory is the V1 saliency hypothesis: the bottom-up saliency of any visual location is represented by the highest V1 response to it relative to the background responses. The neural properties probed are those associated with the less-known V1 neurons tuned simultaneously, or conjunctively, in two feature dimensions. The visual search task is to find a target bar unique in color (C), orientation (O), motion direction (M), or redundantly in combinations of these features (e.g., CO, MO, or CM) among uniform background bars. A feature-singleton target is salient because its evoked V1 response largely escapes the iso-feature suppression on responses to the background bars. The responses of the conjunctively tuned cells are manifested in the shortening of the RT for a redundant-feature target (e.g., a CO target) relative to that predicted by a race between the RTs for the two corresponding single-feature targets (e.g., the C and O targets). Our investigation enables the following testable predictions. Contextual suppression on the response of a CO-tuned or MO-tuned conjunctive cell is weaker when the contextual inputs differ from the direct inputs in both feature dimensions, rather than just one. Additionally, CO-tuned cells and MO-tuned cells are often more active than the single-feature-tuned cells in response to the redundant-feature targets, and this occurs more frequently for the MO-tuned cells, such that the MO-tuned cells are no less likely than either the M-tuned or O-tuned neurons to be the most responsive neuron dictating saliency for an MO target.
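The race benchmark used in this abstract — predicting redundant-target RTs from the faster of two independent single-feature processes — can be sketched with a small Monte Carlo simulation. The RT distributions below are hypothetical, chosen only to show the statistical facilitation that an observed redundant-target RT must beat before conjunctive cells are implicated.

```python
import random

random.seed(1)
N = 20000
# Hypothetical single-feature RT distributions (ms), e.g. for a color-only
# (C) target and an orientation-only (O) target.
rt_C = [random.gauss(550, 60) for _ in range(N)]
rt_O = [random.gauss(570, 70) for _ in range(N)]

# Race-model prediction for the redundant (CO) target: the faster of the
# two independent single-feature processes finishes first on each trial.
rt_race = [min(c, o) for c, o in zip(rt_C, rt_O)]
mean_race = sum(rt_race) / N
# The race mean beats both single-feature means (statistical facilitation).
# An observed CO mean faster than this prediction is the signature the
# authors attribute to CO-tuned conjunctive cells, beyond a race alone.
```

The simulated race mean falls below both single-feature means even though no conjunctive mechanism is present, which is why the race prediction, not the single-feature RTs, is the correct baseline.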

  14. Multi-scale saliency search in image analysis.

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander; Campisi, Anthony; Backer, Alejandro

    2005-10-01

Saliency detection in images is an important outstanding problem both in machine vision design and in the understanding of human vision mechanisms. Recently, seminal work by Itti and Koch resulted in an effective saliency-detection algorithm. We reproduce the original algorithm in a software application, Vision, and explore its limitations. We propose extensions to the algorithm that promise to improve performance in the case of difficult-to-detect objects.

  15. Feature saliency and feedback information interactively impact visual category learning.

    Science.gov (United States)

    Hammer, Rubi; Sloutsky, Vladimir; Grill-Spector, Kalanit

    2015-01-01

Visual category learning (VCL) involves detecting which features are most relevant for categorization. VCL relies on attentional learning, which enables effectively redirecting attention to an object's features most relevant for categorization, while 'filtering out' irrelevant features. When features relevant for categorization are not salient, VCL also relies on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks in which they learned to categorize novel stimuli by detecting the feature dimension relevant for categorization. Tasks varied both in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks), and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load, vs. tasks with high-information non-ambiguous feedback). We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that an increased attentional load, associated with the processing of moderately ambiguous feedback, has little effect on VCL when features are salient. In low-saliency tasks, VCL relied on slower perceptual learning; but when the feedback was highly informative participants were able to ultimately attain the same performance as during the high-saliency VCL tasks. However, VCL was significantly compromised in the low-saliency mid-information feedback task. We suggest that such low-saliency mid-information learning scenarios are characterized by a 'cognitive loop paradox' where two interdependent learning processes have to take place simultaneously.

  16. Multi-polarimetric textural distinctiveness for outdoor robotic saliency detection

    Science.gov (United States)

    Haider, S. A.; Scharfenberger, C.; Kazemzadeh, F.; Wong, A.; Clausi, D. A.

    2015-01-01

Mobile robots that rely on vision for navigation and object detection use saliency approaches to identify a set of potential candidates to recognize. The state of the art in saliency detection for mobile robotics often relies upon visible-light imaging, using conventional camera setups, to distinguish an object against its surroundings based on factors such as feature compactness, heterogeneity, and/or homogeneity. We demonstrate a novel multi-polarimetric saliency detection approach which uses multiple measured polarization states of a scene. We leverage the light-material interaction known as Fresnel reflection to extract rotationally invariant multi-polarimetric textural representations, which are used to train a high-dimensional sparse texture model. The multi-polarimetric textural distinctiveness is characterized using a conditional probability framework based on the sparse texture model, which is then used to determine the saliency at each pixel of the scene. We observed that through the inclusion of additional polarized states in the saliency analysis, we were able to compute noticeably improved saliency maps in scenes where objects are difficult to distinguish from their background due to color intensity similarities between the object and its surroundings.

  17. Removing label ambiguity in learning-based visual saliency estimation.

    Science.gov (United States)

    Li, Jia; Xu, Dong; Gao, Wen

    2012-04-01

    Visual saliency is a useful clue to depict visually important image/video contents in many multimedia applications. In visual saliency estimation, a feasible solution is to learn a "feature-saliency" mapping model from the user data obtained by manually labeling activities or eye-tracking devices. However, label ambiguities may also arise due to the inaccurate and inadequate user data. To process the noisy training data, we propose a multi-instance learning to rank approach for visual saliency estimation. In our approach, the correlations between various image patches are incorporated into an ordinal regression framework. By iteratively refining a ranking model and relabeling the image patches with respect to their mutual correlations, the label ambiguities can be effectively removed from the training data. Consequently, visual saliency can be effectively estimated by the ranking model, which can pop out real targets and suppress real distractors. Extensive experiments on two public image data sets show that our approach outperforms 11 state-of-the-art methods remarkably in visual saliency estimation.

  18. What drives farmers to make top-down or bottom-up adaptation to climate change and fluctuations? A comparative study on 3 cases of apple farming in Japan and South Africa.

    Science.gov (United States)

    Fujisawa, Mariko; Kobayashi, Kazuhiko; Johnston, Peter; New, Mark

    2015-01-01

Agriculture is one of the sectors most vulnerable to climate change. Farmers have been exposed to multiple stressors including climate change, and they have managed to adapt to those risks. The adaptation actions undertaken by farmers and their decision making are, however, only poorly understood. By studying adaptation practices undertaken by apple farmers in three regions: Nagano and Kazuno in Japan and Elgin in South Africa, we categorize the adaptation actions into two types: farmer-initiated bottom-up adaptation and institution-led top-down adaptation. We found that the driver which differentiates the type of adaptation likely adopted was strongly related to the farmers' characteristics, particularly their dependence on institutions, e.g. the farmers' cooperative, in selling their products. The farmers who rely on the farmers' cooperative for their sales are likely to adopt institution-led adaptation, whereas the farmers who have established their own sales channels tend to initiate innovative actions from the bottom up. We further argue that even though the two types have contrasting features, a combination of both types of adaptation could lead to more successful adaptation, particularly in agriculture. This study also emphasizes that more farm-level studies for various crops and regions are warranted to provide substantial feedback for adaptation policy.

  19. What drives farmers to make top-down or bottom-up adaptation to climate change and fluctuations? A comparative study on 3 cases of apple farming in Japan and South Africa.

    Directory of Open Access Journals (Sweden)

    Mariko Fujisawa

Agriculture is one of the sectors most vulnerable to climate change. Farmers have been exposed to multiple stressors including climate change, and they have managed to adapt to those risks. The adaptation actions undertaken by farmers and their decision making are, however, only poorly understood. By studying adaptation practices undertaken by apple farmers in three regions: Nagano and Kazuno in Japan and Elgin in South Africa, we categorize the adaptation actions into two types: farmer-initiated bottom-up adaptation and institution-led top-down adaptation. We found that the driver which differentiates the type of adaptation likely adopted was strongly related to the farmers' characteristics, particularly their dependence on institutions, e.g. the farmers' cooperative, in selling their products. The farmers who rely on the farmers' cooperative for their sales are likely to adopt institution-led adaptation, whereas the farmers who have established their own sales channels tend to initiate innovative actions from the bottom up. We further argue that even though the two types have contrasting features, a combination of both types of adaptation could lead to more successful adaptation, particularly in agriculture. This study also emphasizes that more farm-level studies for various crops and regions are warranted to provide substantial feedback for adaptation policy.

  20. A framework for assessing inter-individual variability in pharmacokinetics using virtual human populations and integrating general knowledge of physical chemistry, biology, anatomy, physiology and genetics: A tale of 'bottom-up' vs 'top-down' recognition of covariates.

    Science.gov (United States)

    Jamei, Masoud; Dickinson, Gemma L; Rostami-Hodjegan, Amin

    2009-01-01

    An increasing number of failures in clinical stages of drug development have been related to the effects of candidate drugs in a sub-group of patients rather than the 'average' person. Expectation of extreme effects or lack of therapeutic effects in some subgroups following administration of similar doses requires a full understanding of the issue of variability and the importance of identifying covariates that determine the exposure to the drug candidates in each individual. In any drug development program the earlier these covariates are known the better. An important component of the drive to decrease this failure rate in drug development involves attempts to use physiologically-based pharmacokinetics 'bottom-up' modeling and simulation to optimize molecular features with respect to the absorption, distribution, metabolism and elimination (ADME) processes. The key element of this approach is the separation of information on the system (i.e. human body) from that of the drug (e.g. physicochemical characteristics determining permeability through membranes, partitioning to tissues, binding to plasma proteins or affinities toward certain enzymes and transporter proteins) and the study design (e.g. dose, route and frequency of administration, concomitant drugs and food). In this review, the classical 'top-down' approach in covariate recognition is compared with the 'bottom-up' paradigm. The determinants and sources of inter-individual variability in different stages of drug absorption, distribution, metabolism and excretion are discussed in detail. Further, the commonly known tools for simulating ADME properties are introduced.

  1. Preparation of Au-Pt nanostructures by combining top-down with bottom-up strategies and application in label-free electrochemical immunosensor for detection of NMP22.

    Science.gov (United States)

    Jia, Hongying; Gao, Picheng; Ma, Hongmin; Wu, Dan; Du, Bin; Wei, Qin

    2015-02-01

A novel label-free amperometric immunosensor for sensitive detection of nuclear matrix protein 22 (NMP22) was developed based on Au-Pt bimetallic nanostructures, which were prepared by combining top-down with bottom-up strategies. Nanoporous gold (NPG) was prepared by "top-down" dealloying of a commercial Au/Ag alloy film. After deposition of NPG on an electrode, Pt nanoparticles (PtNPs) were further decorated on the NPG by "bottom-up" electrodeposition. The prepared bimetallic nanostructures combine the merits of both NPG and PtNPs, and show a high electrocatalytic activity towards the reduction of H2O2. The label-free immunosensor was constructed by directly immobilizing the antibody of NMP22 (anti-NMP22) on the surface of the bimetallic nanostructures. The immunoreaction-induced amperometric response could be detected and was negatively correlated with the concentration of NMP22. Bimetallic nanostructure morphologies and detection conditions were investigated to obtain the best sensing performance. Under the optimal conditions, a linear range from 0.01 ng/mL to 10 ng/mL and a detection limit of 3.33 pg/mL were obtained. The proposed immunosensor showed high sensitivity, good selectivity, stability, reproducibility, and regeneration for the detection of NMP22, and it was evaluated in urine samples with satisfactory results.

  2. Maximum saliency bias in binocular fusion

    Science.gov (United States)

    Lu, Yuhao; Stafford, Tom; Fox, Charles

    2016-07-01

    Subjective experience at any instant consists of a single ("unitary"), coherent interpretation of sense data rather than a "Bayesian blur" of alternatives. However, computation of Bayes-optimal actions has no role for unitary perception, instead being required to integrate over every possible action-percept pair to maximise expected utility. So what is the role of unitary coherent percepts, and how are they computed? Recent work provided objective evidence for non-Bayes-optimal, unitary coherent, perception and action in humans; and further suggested that the percept selected is not the maximum a posteriori percept but is instead affected by utility. The present study uses a binocular fusion task first to reproduce the same effect in a new domain, and second, to test multiple hypotheses about exactly how utility may affect the percept. After accounting for high experimental noise, it finds that both Bayes optimality (maximise expected utility) and the previously proposed maximum-utility hypothesis are outperformed in fitting the data by a modified maximum-salience hypothesis, using unsigned utility magnitudes in place of signed utilities in the bias function.

  3. Sources of Error in Remote Sensing-Based Bottom-Up Emission Estimates of Carbon and Air Quality Emissions from Crop Residue Burning in the Contiguous United States and the Russian Federation

    Science.gov (United States)

    McCarty, J. L.; Romanenkov, V.

    2010-12-01

Since its publication in 1980, the Seiler and Crutzen bottom-up method of estimating biomass burning emissions has become an accepted and standard approach cited in nearly 500 peer-reviewed scientific publications. As the science of biomass burning emissions advances, the need to quantify error in variable inputs has also grown. This research focuses on bottom-up emission estimates of black carbon (BC), CO2, CO, CH4, PM10, PM2.5, NO2, and SO2 from crop residue burning in the contiguous U.S. (CONUS) and the Russian Federation. Crop residue burning emissions for the CONUS were estimated for a five-year period, 2003 through 2007, using multispectral remote sensing-derived products, specifically multi-year crop type maps, an 8-day difference Normalized Burn Ratio product, and calibrated area estimates of cropland burning from 1 km MODIS Active Fire Points. An emission factor database was assembled from eleven published sources while fuel loads and combustion completeness were derived from expert knowledge and governmental reports. With the aim of transferring technique and knowledge to in-country collaborators, crop residue burning emissions in Russia were calculated from burned area estimates derived from the 1 km MODIS Active Fire Points. A second analysis of burned area estimates from both a regionally tuned 8-day difference Normalized Burn Ratio product and the standard MODIS Burned Area product focused on the European region of Russia. For these analyses, BC emission factors were estimated by multiplying published BC to PM2.5 ratios to PM2.5 emission factors for similar crops in the CONUS. Errors and uncertainties were quantified for emission factors, fuel loads, combustion completeness, and accuracies of remote sensing products for both burned area and land cover type for the analyses in the CONUS and Russia. The uncertainty for the non-remote sensing variables was difficult to quantify given the lack of observations available. The results from this uncertainty
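The Seiler and Crutzen bottom-up method referenced above multiplies burned area, fuel load, combustion completeness, and a species-specific emission factor. A one-function sketch of that calculation follows; the input values are hypothetical, chosen only to illustrate the units involved.

```python
def emissions_kg(burned_area_ha, fuel_load_kg_per_ha,
                 combustion_completeness, ef_g_per_kg):
    """Seiler & Crutzen (1980) bottom-up estimate:
    E = A * B * beta * EF, returned in kg of the emitted species."""
    dry_matter_burned_kg = (burned_area_ha * fuel_load_kg_per_ha
                            * combustion_completeness)
    return dry_matter_burned_kg * ef_g_per_kg / 1000.0  # g -> kg

# Hypothetical example: 1,000 ha of crop residue burned, 3,000 kg/ha fuel
# load, 85% combustion completeness, PM2.5 emission factor of 5 g/kg.
e_pm25 = emissions_kg(1000, 3000, 0.85, 5.0)   # kg of PM2.5
```

Because the estimate is a straight product, a fractional error in any one input (burned area accuracy, fuel load, completeness, or emission factor) propagates directly into the emission total, which is why the abstract stresses quantifying each input's uncertainty.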

  4. Visual salience guided feature-aware shape simplification

    Institute of Scientific and Technical Information of China (English)

    Yong-wei MIAO; Fei-xia HU; Min-yan CHEN; Zhen LIU; Hua-hao SHOU

    2014-01-01

In the area of 3D digital engineering and 3D digital geometry processing, shape simplification is an important task for reducing the large memory requirements and high time complexity of detailed models. By incorporating a content-aware visual salience measure of a polygonal mesh into the simplification operation, a novel feature-aware shape simplification approach is presented in this paper. Owing to the robust extraction of relief heights on 3D highly detailed meshes, our visual salience measure is defined by a center-surround operator on Gaussian-weighted relief heights in a scale-dependent manner. Guided by our visual salience map, the feature-aware shape simplification algorithm is performed by weighting the high-dimensional feature-space quadric error metric of vertex pair contractions with a weight map derived from the visual salience map. The weighted quadric error metric is calculated in a six-dimensional feature space by combining the position and normal information of mesh vertices. Experimental results demonstrate that our visual salience guided shape simplification scheme can adaptively and effectively re-sample the underlying models in a feature-aware manner, accounting for the visually salient features of complex shapes and thus yielding better visual fidelity.
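The salience-weighted quadric error metric can be illustrated in its classic 3D position-only form; the paper's metric is six-dimensional (position plus normal), which this sketch omits, and the triangle and salience weight below are hypothetical.

```python
import numpy as np

def plane_quadric(p0, p1, p2):
    """Fundamental error quadric K = p p^T for a triangle's supporting
    plane, p = [a, b, c, d] with ax + by + cz + d = 0 and unit normal."""
    n = np.cross(p1 - p0, p2 - p0)
    n = n / np.linalg.norm(n)
    p = np.append(n, -np.dot(n, p0))
    return np.outer(p, p)

def collapse_cost(Q, v):
    """Quadric error of placing a merged vertex at v (homogeneous form)."""
    vh = np.append(v, 1.0)
    return float(vh @ Q @ vh)

# One triangle in the z = 0 plane; a (hypothetical) salience weight scales
# its quadric so contractions near visually salient regions cost more and
# are therefore deferred by the simplifier.
tri = (np.array([0., 0., 0.]), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
w_salience = 2.0
Q = w_salience * plane_quadric(*tri)

on_plane = collapse_cost(Q, np.array([0.3, 0.3, 0.0]))   # stays on the plane
off_plane = collapse_cost(Q, np.array([0.3, 0.3, 0.5]))  # deviates: penalized
```

Summing such weighted quadrics over all faces incident to a vertex pair, and collapsing the cheapest pair first, is the standard contraction loop into which the salience weights slot.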

  5. Transient pupil response is modulated by contrast-based saliency.

    Science.gov (United States)

    Wang, Chin-An; Boehnke, Susan E; Itti, Laurent; Munoz, Douglas P

    2014-01-08

    The sudden appearance of a novel stimulus in the environment initiates a series of orienting responses that include coordinated shifts of gaze and attention, and also transient changes in pupil size. Although numerous studies have identified a significant effect of stimulus saliency on shifts of gaze and attention, saliency effects on pupil size are less understood. To examine salience-evoked pupil responses, we presented visual, auditory, or audiovisual stimuli while monkeys fixated a central visual spot. Transient pupil dilation was elicited after visual stimulus presentation regardless of target luminance relative to background, and auditory stimuli also evoked similar pupil responses. Importantly, the evoked pupil response was modulated by contrast-based saliency, with faster and larger pupil responses following the presentation of more salient stimuli. The initial transient component of pupil dilation was qualitatively similar to that evoked by weak microstimulation of the midbrain superior colliculus. The pupil responses elicited by audiovisual stimuli were well predicted by a linear summation of each modality response. Together, the results suggest that the transient pupil response, as one component of orienting, is modulated by contrast-based saliency, and the superior colliculus is likely involved in its coordination.

  6. Efficient Research Design: Using Value-of-Information Analysis to Estimate the Optimal Mix of Top-down and Bottom-up Costing Approaches in an Economic Evaluation alongside a Clinical Trial.

    Science.gov (United States)

    Wilson, Edward C F; Mugford, Miranda; Barton, Garry; Shepstone, Lee

    2016-04-01

    In designing economic evaluations alongside clinical trials, analysts are frequently faced with alternative methods of collecting the same data, the extremes being top-down ("gross costing") and bottom-up ("micro-costing") approaches. A priori, bottom-up approaches may be considered superior to top-down approaches but are also more expensive to collect and analyze. In this article, we use value-of-information analysis to estimate the efficient mix of observations on each method in a proposed clinical trial. By assigning a prior bivariate distribution to the 2 data collection processes, the predicted posterior (i.e., preposterior) mean and variance of the superior process can be calculated from proposed samples using either process. This is then used to calculate the preposterior mean and variance of incremental net benefit and hence the expected net gain of sampling. We apply this method to a previously collected data set to estimate the value of conducting a further trial and identifying the optimal mix of observations on drug costs at 2 levels: by individual item (process A) and by drug class (process B). We find that substituting a number of observations on process A for process B leads to a modest £ 35,000 increase in expected net gain of sampling. Drivers of the results are the correlation between the 2 processes and their relative cost. This method has potential use following a pilot study to inform efficient data collection approaches for a subsequent full-scale trial. It provides a formal quantitative approach to inform trialists whether it is efficient to collect resource use data on all patients in a trial or on a subset of patients only or to collect limited data on most and detailed data on a subset.
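The expected-net-gain-of-sampling logic can be sketched with a textbook normal-normal preposterior calculation for a single adopt/reject decision on incremental net benefit. This is a hedged simplification of the abstract's approach, not the paper's bivariate model of two costing processes; all function names and numbers are hypothetical.

```python
import math

def norm_pdf(z):
    return math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)

def norm_cdf(z):
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def preposterior_sd(prior_sd, sampling_sd, n):
    """Std. dev. of the posterior mean as predicted before sampling
    (normal prior, normal likelihood, known sampling variance)."""
    prior_var = prior_sd ** 2
    post_var = 1.0 / (1.0 / prior_var + n / sampling_sd ** 2)
    return math.sqrt(max(prior_var - post_var, 0.0))

def expected_net_gain(prior_mean, prior_sd, sampling_sd, n,
                      population, cost_per_obs):
    """ENGS = EVSI - sampling cost for a two-option decision on
    incremental net benefit, via the unit normal loss integral."""
    sd = preposterior_sd(prior_sd, sampling_sd, n)
    if sd == 0.0:
        return -n * cost_per_obs
    z = abs(prior_mean) / sd
    unit_normal_loss = norm_pdf(z) - z * (1.0 - norm_cdf(z))
    evsi = population * sd * unit_normal_loss
    return evsi - n * cost_per_obs

# Larger samples buy information with diminishing returns.
print(expected_net_gain(500.0, 1000.0, 5000.0, 100, 10000, 50.0))
```

The paper's contribution is to extend this kind of calculation to a bivariate prior over the two data-collection processes, so that cheap gross-costing observations can partially substitute for expensive micro-costing ones.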

  7. Salience and Strategy Choice in 2 × 2 Games

    Directory of Open Access Journals (Sweden)

    Jonathan W. Leland

    2015-10-01

    Full Text Available We present a model of boundedly rational play in single-shot 2 × 2 games. Players choose strategies based on the perceived salience of their own payoffs and, if own-payoff salience is uninformative, on the perceived salience of their opponent’s payoffs. When own payoffs are salient, the model’s predictions correspond to those of Level-1 players in a cognitive hierarchy model. When it is the other player’s payoffs that are salient, the predictions of the model correspond to those of traditional game theory. The model provides unique predictions for the entire class of 2 × 2 games. It identifies games where a Nash equilibrium will always occur, ones where it will never occur, and ones where it will occur only for certain payoff values. It also predicts the outcome of games for which there are no pure Nash equilibria. Experimental results supporting these predictions are presented.
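The Level-1 benchmark mentioned in the abstract above can be made concrete. This sketch shows only the standard cognitive-hierarchy Level-1 rule (best-respond to an opponent assumed to randomize uniformly), not the authors' salience model; the function name and payoffs are illustrative.

```python
def level1_choice(own_payoffs):
    """own_payoffs[i][j]: row player's payoff for row i against column j.
    A Level-1 player best-responds to a uniformly mixing opponent."""
    expected = [sum(row) / len(row) for row in own_payoffs]
    return max(range(len(expected)), key=expected.__getitem__)

# Row 0 pays (4, 0), row 1 pays (3, 3): uniform-opponent expectations 2 vs 3.
print(level1_choice([[4, 0], [3, 3]]))  # chooses row 1
```

In the salience model, this rule applies when own payoffs are salient; when the opponent's payoffs are salient instead, predictions revert to equilibrium play.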

  8. Dysregulated but not decreased salience network activity in schizophrenia

    Directory of Open Access Journals (Sweden)

    Thomas eWhite

    2013-03-01

    Full Text Available Effective estimation of the salience of environmental stimuli underlies adaptive behaviour, while related aberrance is believed to undermine rational thought processes in schizophrenia. A network including bilateral frontoinsular cortex (FIC) and dorsal anterior cingulate cortex (dACC) has been observed to respond to salient stimuli using functional magnetic resonance imaging (fMRI). To test the hypothesis that activity in this salience network (SN) is less discriminately modulated by contextually-relevant stimuli in schizophrenia than in healthy individuals, fMRI data were collected in 20 individuals with schizophrenia and 13 matched controls during performance of a modified monetary incentive delay task. After quantitatively identifying spatial components representative of the FIC and dACC features of the SN, two principal analyses were conducted. In the first, modulation of SN activity by salience was assessed by measuring response to trial outcome. First-level general linear models were applied to individual-specific time-courses of SN activity identified using spatial independent component analysis. This analysis revealed a significant salience-by-performance-by-group interaction on the best-fit FIC component’s activity at reward outcome, whereby healthy individuals but not individuals with schizophrenia exhibited significantly greater distinction between the response to hits and misses in high salience trials than in low salience trials. The second analysis aimed to ascertain whether SN component amplitude differed between the study groups over the duration of the experiment. Independent-samples T-tests on back-projected, percent-signal-change scaled SN component images importantly showed that the groups did not differ in the overall amplitude of SN expression over the entire dataset.
These findings of dysregulated but not decreased SN activity in schizophrenia provide physiological support for mechanistic conceptual frameworks of delusional

  9. Dysregulated but not decreased salience network activity in schizophrenia

    Science.gov (United States)

    White, Thomas P.; Gilleen, James; Shergill, Sukhwinder S.

    2013-01-01

    Effective estimation of the salience of environmental stimuli underlies adaptive behavior, while related aberrance is believed to undermine rational thought processes in schizophrenia. A network including bilateral frontoinsular cortex (FIC) and dorsal anterior cingulate cortex (dACC) has been observed to respond to salient stimuli using functional magnetic resonance imaging (fMRI). To test the hypothesis that activity in this salience network (SN) is less discriminately modulated by contextually-relevant stimuli in schizophrenia than in healthy individuals, fMRI data were collected in 20 individuals with schizophrenia and 13 matched controls during performance of a modified monetary incentive delay (MID) task. After quantitatively identifying spatial components representative of the FIC and dACC features of the SN, two principal analyses were conducted. In the first, modulation of SN activity by salience was assessed by measuring response to trial outcome. First-level general linear models were applied to individual-specific time-courses of SN activity identified using spatial independent component analysis (ICA). This analysis revealed a significant salience-by-performance-by-group interaction on the best-fit FIC component's activity at trial outcome, whereby healthy individuals but not individuals with schizophrenia exhibited greater distinction between the response to hits and misses in high salience trials than in low salience trials. The second analysis aimed to ascertain whether SN component amplitude differed between the study groups over the duration of the experiment. Independent-samples T-tests on back-projected, percent-signal-change scaled SN component images importantly showed that the groups did not differ in the overall amplitude of SN expression over the entire dataset. These findings of dysregulated but not decreased SN activity in schizophrenia provide physiological support for mechanistic conceptual frameworks of delusional thought formation

  10. Land Cover Change Detection Using Saliency Andwavelet Transformation

    Science.gov (United States)

    Zhang, Haopeng; Jiang, Zhiguo; Cheng, Yan

    2016-06-01

    Obtaining an accurate difference map remains an open challenge in change detection. To tackle this problem, we propose a change detection method based on saliency detection and wavelet transformation. We perform frequency-tuned saliency detection on the initial difference image (IDI), obtained by a logarithm ratio, to get a salient difference image (SDI). Then we calculate the local entropy of the SDI to obtain an entropic salient difference image (ESDI). The final difference image (FDI) is the wavelet fusion of the IDI and ESDI, and Otsu thresholding is used to extract the difference map from the FDI. Experimental results validate the effectiveness and feasibility of the proposed method.
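The first and last steps of this pipeline (log-ratio initial difference image, Otsu thresholding of the final image) can be sketched on 1-D toy "images". This is a hedged skeleton only: the frequency-tuned saliency, local entropy, and wavelet fusion stages are omitted, and all names are invented for illustration.

```python
import math

def log_ratio(img1, img2, eps=1e-6):
    """Initial difference image: |log((img2+eps)/(img1+eps))| per pixel."""
    return [abs(math.log((b + eps) / (a + eps))) for a, b in zip(img1, img2)]

def otsu_threshold(values, bins=64):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    best_t, best_var = lo, -1.0
    for k in range(1, bins):
        t = lo + (hi - lo) * k / bins
        fg = [v for v in values if v > t]
        bg = [v for v in values if v <= t]
        if not fg or not bg:
            continue
        w_fg, w_bg = len(fg) / len(values), len(bg) / len(values)
        mu_fg, mu_bg = sum(fg) / len(fg), sum(bg) / len(bg)
        var_between = w_fg * w_bg * (mu_fg - mu_bg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two co-registered "images": one pixel changed strongly.
before = [10.0, 10.0, 10.0, 10.0]
after  = [10.0, 10.0, 80.0, 10.0]
idi = log_ratio(before, after)
t = otsu_threshold(idi)
print([v > t for v in idi])  # change map
```

The log ratio is the customary difference operator for multiplicative (e.g. SAR speckle) noise, which is why it appears before any saliency processing.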

  11. Image based Monument Recognition using Graph based Visual Saliency

    DEFF Research Database (Denmark)

    Kalliatakis, Grigorios; Triantafyllidis, Georgios

    2013-01-01

    This article presents an image-based application aiming at simple image classification of well-known monuments in the area of Heraklion, Crete, Greece. This classification takes place by utilizing Graph Based Visual Saliency (GBVS) and employing Scale Invariant Feature Transform (SIFT) or Speeded......, the images have been previously processed according to the Graph Based Visual Saliency model in order to keep either SIFT or SURF features corresponding to the actual monuments while the background “noise” is minimized. The application is then able to classify these images, helping the user to better...

  12. Research and Development from the bottom up

    DEFF Research Database (Denmark)

    Brem, Alexander; Wolfram, P.

    2014-01-01

    is introduced consisting of three core dimensions: sophistication, sustainability, and emerging market orientation. On the basis of these dimensions, analogies and distinctions between the terms are identified and general tendencies are explored such as the increasing importance of sustainability in social...... and ecological context or the growing interest of developed market firms in approaches from emerging markets. Hence, the presented framework supports further research in new paradigms for research and development (R&D) in developed market firms (DMFs), particularly in relation to emerging markets. This framework...... enables scholars to compare concepts from developed and emerging markets, to address studies specifically by using consistent terms, and to advance research into the concepts according to their characterization....

  13. Horizontal Symmetry: Bottom Up and Top Down

    CERN Document Server

    Lam, C S

    2011-01-01

    A group-theoretical connection between horizontal symmetry $\mathcal{G}$ and fermion mixing is established, and applied to neutrino mixing. The group-theoretical approach is consistent with a dynamical theory based on $U(1)\times \mathcal{G}$, but the dynamical theory can be used to pick out the most stable mixing that purely group-theoretical considerations cannot. A symmetry common to leptons and quarks is also discussed. This higher symmetry picks $A_4$ over $S_4$ to be the preferred symmetry for leptons.

  14. Bottom Up Project Cost and Risk Modeling

    Data.gov (United States)

    National Aeronautics and Space Administration — Microcosm along with its partners HRP Systems, End-to-End Analytics, and ARES Corporation (unfunded in Phase I), propose to develop a new solution for detailed data...

  15. Milk bottom-up proteomics: method optimisation.

    Directory of Open Access Journals (Sweden)

    Delphine eVincent

    2016-01-01

    Full Text Available Milk is a complex fluid whose proteome displays a diverse set of proteins of high abundance such as caseins and medium to low abundance whey proteins such as β-lactoglobulin, lactoferrin, immunoglobulins, glycoproteins, peptide hormones and enzymes. A sample preparation method that enables high reproducibility and throughput is key in reliably identifying proteins present or proteins responding to conditions such as diet, health or genetics. Using skim milk samples from Jersey and Holstein-Friesian cows, we compared three extraction procedures which have not previously been applied to samples of cows’ milk. Method A (urea) involved a simple dilution of the milk in a urea-based buffer, method B (TCA/acetone) involved a trichloroacetic acid (TCA)/acetone precipitation and method C (methanol/chloroform) involved a tri-phasic partition method in chloroform/methanol solution. Protein assays, SDS-PAGE profiling, and trypsin digestion followed by nanoHPLC-electrospray ionisation-tandem mass spectrometry (nLC-ESI-MS/MS) analyses were performed to assess their efficiency. Replicates were used at each analytical step (extraction, digestion, injection) to assess reproducibility. Mass spectrometry (MS) data are available via ProteomeXchange with identifier PXD002529. Overall 186 unique accessions, major and minor proteins, were identified with a combination of methods. Method C (methanol/chloroform) yielded the best resolved SDS-patterns and highest protein recovery rates, method A (urea) yielded the greatest number of accessions, and, of the three procedures, method B (TCA/acetone) was the least compatible of all with a wide range of downstream analytical procedures. Our results also highlighted breed differences between the proteins in milk of Jersey and Holstein-Friesian cows.

  16. Mobile Handsets from the Bottom Up

    DEFF Research Database (Denmark)

    Wallis, Cara; Linchuan Qiu, Jack; Ling, Richard

    2013-01-01

    The setting could be a hole-in-the-wall that serves as a shop in a narrow alley in Guangzhou, a cart on a dusty street on the outskirts of Accra, a bustling marketplace in Mexico City, or a tiny storefront near downtown Los Angeles’ garment district. At such locales, men and women hawk an array o...

  17. Bottom Up Succession Planning Works Better.

    Science.gov (United States)

    Stevens, Paul

    Most succession planning practices are based on the premise that ambitious people have and want only one career direction--upwardly mobile. However, employees have 10 career direction options at any stage of their working lives. A minority want the career action requiring promotion. Employers with a comprehensive career planning support program…

  18. Teaching Listening Comprehension: Bottom-Up Approach

    Science.gov (United States)

    Khuziakhmetov, Anvar N.; Porchesku, Galina V.

    2016-01-01

    Improving listening comprehension skills is one of the urgent contemporary educational problems in the field of second language acquisition. Understanding how L2 listening comprehension works can have a serious influence on language pedagogy. The aim of the paper is to discuss the practical and methodological value of the notion of the perception…

  19. Bottom-up Experiments and Concrete Utopias

    DEFF Research Database (Denmark)

    Andersson, Lasse

    2010-01-01

    The article examines how user-driven experiments can challenge the standardized, business-oriented version of the Experience City and, through experimentation, stimulate locally anchored and democratic versions of an experience- and knowledge-based city.

  20. Thinning based Antialiasing Approach for Visual Saliency of Digital Images

    NARCIS (Netherlands)

    Rukundo, O.

    2015-01-01

    A thinning based approach for spatial antialiasing (TAA) has been proposed for visual saliency of digital images. This TAA approach is based on edge-matting and digital compositing strategies. Prior to edge-matting, the image edges are detected using an ant colony optimization (ACO) algorithm and then th

  1. Visual salience modulates structure choice in relative clause production.

    Science.gov (United States)

    Montag, Jessica L; MacDonald, Maryellen C

    2014-06-01

    The role of visual salience on utterance form was investigated in a picture description study. Participants heard spoken questions about animate or inanimate entities in a picture and produced a relative clause in response. Visual properties of the scenes affected production choices such that less salient inanimate entities tended to yield longer initiation latencies and to be described with passive relative clauses more than visually salient inanimates. We suggest that the participants' question-answering task can change as a function of visual salience of entities in the picture. Less salient entities require a longer visual search of the scene, which causes the speaker to notice or attend more to the non-target competitors in the picture. As a result, it becomes more important in answering the question for the speaker to contrast the target item with a salient competitor. This effect is different from other effects of visual salience, which tend to find that more salient entities take more prominent grammatical roles in the sentence. We interpret this discrepancy as evidence that visual salience does not have a single effect on sentence production, but rather its effect is modulated by task and linguistic context.

  2. Dopamine, Salience, and Response Set Shifting in Prefrontal Cortex.

    Science.gov (United States)

    Shiner, T; Symmonds, M; Guitart-Masip, M; Fleming, S M; Friston, K J; Dolan, R J

    2015-10-01

    Dopamine is implicated in multiple functions, including motor execution, action learning for hedonically salient outcomes, maintenance, and switching of behavioral response set. Here, we used a novel within-subject psychopharmacological and combined functional neuroimaging paradigm, investigating the interaction between hedonic salience, dopamine, and response set shifting, distinct from effects on action learning or motor execution. We asked whether behavioral performance in response set shifting depends on the hedonic salience of reversal cues, by presenting these as null (neutral) or salient (monetary loss) outcomes. We observed marked effects of reversal cue salience on set-switching, with more efficient reversals following salient loss outcomes. L-Dopa degraded this discrimination, leading to inappropriate perseveration. Generic activation in thalamus, insula, and striatum preceded response set switches, with an opposite pattern in ventromedial prefrontal cortex (vmPFC). However, the behavioral effect of hedonic salience was reflected in differential vmPFC deactivation following salient relative to null reversal cues. L-Dopa reversed this pattern in vmPFC, suggesting that its behavioral effects are due to disruption of the stability and switching of firing patterns in prefrontal cortex. Our findings provide a potential neurobiological explanation for paradoxical phenomena, including maintenance of behavioral set despite negative outcomes, seen in impulse control disorders in Parkinson's disease.

  3. Selective Attention-Based Saliency of Traffic Images and Characteristics of Eye Movement%基于选择性注意的交通环境显著性及眼动特征研究

    Institute of Scientific and Technical Information of China (English)

    邓涛; 罗恩晴; 张艳山; 颜红梅

    2014-01-01

    The human visual system is a complicated information processing system. Selective attention is an important mechanism which enables us to process relevant inputs from a large amount of visual information. The traffic environment is a complex and tridimensional scene of multiple information sources, which changes dynamically and requires instant processing. The driver’s attention is always controlled by two visual attention mechanisms, namely bottom-up and top-down attention. Bottom-up attention is driven by the environment and image features, while top-down attention is based on tasks and cognitive experiences. In this paper, a behavioral experiment was carried out to investigate the features of eye movements under these two selective attention mechanisms in the traffic environment, and also to acquire a dataset of saliency maps of real eye movements. Our results show that there are significant differences between bottom-up and top-down selective attention in saliency maps, characteristics of eye movement, and the viewing of traffic signals and traffic signs.%Through behavioral experiments, eye-movement data were collected from two groups of subjects viewing road-traffic scene images under different attentional states, and the eye-movement characteristics driven by the two attention mechanisms (bottom-up and top-down) in a traffic driving environment were studied. The results show that, in the traffic driving environment, the eye-movement characteristics driven by bottom-up attention differ significantly from those driven by top-down attention, and the corresponding visual saliency maps are also clearly distinct. In addition, the two attention mechanisms differ significantly in the attention paid to, and the recognition of, important traffic elements such as traffic signals and traffic signs.

  4. Design of the Bottom-up Innovation project - a participatory, primary preventive, organizational level intervention on work-related stress and well-being for workers in Dutch vocational education

    Science.gov (United States)

    2013-01-01

    Background In the educational sector job demands have intensified, while job resources have remained the same. A prolonged imbalance between demands and resources contributes to lowered vitality and a heightened need for recovery, eventually resulting in burnout, sickness absence and retention problems. Until now, stress management interventions in education have focused mostly on strengthening the individual capacity to cope with stress, instead of altering the sources of stress at work at the organizational level. These interventions have been only partly effective in influencing burnout and well-being. Therefore, the “Bottom-up Innovation” project tests a two-phased participatory, primary preventive organizational level intervention (i.e. a participatory action approach) that targets and engages all workers in the primary process of schools. It is hypothesized that participating in the project results in increased occupational self-efficacy and organizational efficacy. The central research question is: is an organization-focused stress management intervention based on participatory action effective in reducing the need for recovery and enhancing vitality in school employees in comparison to business as usual? Methods/Design The study is designed as a controlled trial with mixed methods and three measurement moments: baseline (quantitative measures), six months and 18 months (quantitative and qualitative measures). At first follow-up, short term effects of taking part in the needs assessment (phase 1) will be determined. At second follow-up, the long term effects of taking part in the needs assessment will be determined as well as the effects of implemented tailored workplace solutions (phase 2). A process evaluation based on quantitative and qualitative data will shed light on whether, how and why the intervention (does not) work(s). Discussion “Bottom-up Innovation” is a combined effort of the educational sector, intervention providers and researchers. Results will

  5. Optical imaging system-based real-time image saliency extraction method

    Science.gov (United States)

    Zhao, Jufeng; Gao, Xiumin; Chen, Yueting; Feng, Huajun

    2015-04-01

    Saliency extraction has become a popular topic in imaging science. One of the challenges in image saliency extraction is to detect the saliency content efficiently with a full-resolution saliency map. Traditional methods only involve computer calculation and thus result in limitations in computational speed. An optical imaging system-based visual saliency extraction method is developed to solve this problem. The optical system is built by effectively implementing an optical Fourier process with a Fourier lens to form two frequency planes for further operation. The proposed method combines optical components and computer calculations and mainly relies on frequency selection with precise pinholes on the frequency planes to efficiently produce a saliency map. Comparison shows that the method is suitable for extracting salient information and operates in real time to generate a full-resolution saliency map with good boundaries.
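The optical pinhole selection on the Fourier plane has a direct software analogue: transform the image, keep only a band of spatial frequencies, and back-transform. The sketch below is a hedged digital stand-in for the optical system described above, not the authors' setup; the annulus radii and names are invented for illustration.

```python
import numpy as np

def frequency_select_saliency(image, keep_lo=2, keep_hi=16):
    """Software analogue of pinhole selection on an optical Fourier plane:
    keep a mid-frequency annulus, suppress the rest, back-transform."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    yy, xx = np.mgrid[0:h, 0:w]
    r = np.hypot(yy - h // 2, xx - w // 2)
    mask = (r >= keep_lo) & (r <= keep_hi)   # the "pinhole" annulus
    sal = np.abs(np.fft.ifft2(np.fft.ifftshift(f * mask)))
    return sal / sal.max() if sal.max() > 0 else sal

rng = np.random.default_rng(0)
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0            # a salient patch on a flat background
sal = frequency_select_saliency(img + 0.01 * rng.standard_normal((64, 64)))
print(sal.shape)  # full-resolution map, same size as the input
```

The appeal of the optical version is that the Fourier transform happens at the speed of light; the digital analogue pays the FFT cost but makes the frequency-selection step easy to inspect.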

  6. Convergência brasileira aos padrões internacionais de contabilidade pública vis-à-vis as estratégias top-down e bottom-up

    Directory of Open Access Journals (Sweden)

    Janyluce Rezende Gama

    2014-02-01

    Full Text Available Brazil is in the process of converging its public-sector accounting with the international standards developed by the International Federation of Accountants (IFAC). The implementation of accounting information systems is generally carried out through top-down or bottom-up approaches. This study therefore aims to: (1) identify the approach adopted by the Brazilian federal government; (2) describe the implementation model of the public accounting information system in Brazil; and (3) map the information flow and the actors involved in the convergence process. A qualitative approach was adopted, using documentary research and content analysis of available documents to operationalize the research. It was found that Brazil uses a middle-up-down approach, which favours interaction among multiple actors in the process, unlike the top-down approach that follows the published international model.

  7. Importance of Macrophyte Quality in Determining Life-History Traits of the Apple Snails Pomacea canaliculata: Implications for Bottom-Up Management of an Invasive Herbivorous Pest in Constructed Wetlands

    Directory of Open Access Journals (Sweden)

    Rita S. W. Yam

    2016-02-01

    Full Text Available Pomacea canaliculata (Ampullariidae) has extensively invaded most Asian constructed wetlands and its massive herbivory of macrophytes has become a major cause of ecosystem dysfunctioning of these restored habitats. We conducted non-choice laboratory feeding experiments of P. canaliculata using five common macrophyte species in constructed wetlands including Ipomoea aquatica, Commelina communis, Nymphoides coreana, Acorus calamus and Phragmites australis. Effects of macrophytes on snail feeding, growth and fecundity responses were evaluated. Results indicated that P. canaliculata reared on Ipomoea had the highest feeding and growth rates with highest reproductive output, but all individuals fed with Phragmites showed lowest feeding rates and little growth with poorest reproductive output. Plant N and P contents were important for enhancing palatability, supporting growth and offspring quantity of P. canaliculata, whilst toughness, cellulose and phenolics had critically deterrent effects on various life-history traits. Although snail offspring quality was generally consistent regardless of maternal feeding conditions, the reduced growth and offspring quantity of the poorly-fed snails in constructed wetlands dominated by the less-palatable macrophytes could limit the invasive success of P. canaliculata. Effective bottom-up control of P. canaliculata in constructed wetlands should involve selective planting strategy using macrophytes with low nutrient and high toughness, cellulose and phenolic contents.

  8. Importance of Macrophyte Quality in Determining Life-History Traits of the Apple Snails Pomacea canaliculata: Implications for Bottom-Up Management of an Invasive Herbivorous Pest in Constructed Wetlands.

    Science.gov (United States)

    Yam, Rita S W; Fan, Yen-Tzu; Wang, Tzu-Ting

    2016-02-24

    Pomacea canaliculata (Ampullariidae) has extensively invaded most Asian constructed wetlands and its massive herbivory of macrophytes has become a major cause of ecosystem dysfunctioning of these restored habitats. We conducted non-choice laboratory feeding experiments of P. canaliculata using five common macrophyte species in constructed wetlands including Ipomoea aquatica, Commelina communis, Nymphoides coreana, Acorus calamus and Phragmites australis. Effects of macrophytes on snail feeding, growth and fecundity responses were evaluated. Results indicated that P. canaliculata reared on Ipomoea had the highest feeding and growth rates with highest reproductive output, but all individuals fed with Phragmites showed lowest feeding rates and little growth with poorest reproductive output. Plant N and P contents were important for enhancing palatability, supporting growth and offspring quantity of P. canaliculata, whilst toughness, cellulose and phenolics had critically deterrent effects on various life-history traits. Although snail offspring quality was generally consistent regardless of maternal feeding conditions, the reduced growth and offspring quantity of the poorly-fed snails in constructed wetlands dominated by the less-palatable macrophytes could limit the invasive success of P. canaliculata. Effective bottom-up control of P. canaliculata in constructed wetlands should involve selective planting strategy using macrophytes with low nutrient and high toughness, cellulose and phenolic contents.

  9. Taking salience seriously: the viability of Ronald Dworkin’s theory of salience in the context of extra-territorial corporate accountability

    Directory of Open Access Journals (Sweden)

    David Brian Dennison

    2015-11-01

    Full Text Available In his posthumously published article “A New Philosophy for International Law”, Ronald Dworkin advocates the use of “salience” as a means for generating international law. Dworkin argues that the consent-based mechanisms for establishing international law are often incapable of addressing collective challenges such as climate change. Dworkin’s salience is an alternative means of creating international law, whereby the law can emerge from widely held principles and practices without the necessity of global sovereign consent. Unfortunately and somewhat ironically, Dworkin’s essay does not explain how salience, a non-consent-based mechanism, might obtain international recognition as a legitimate engine for creating international law. This essay offers international corporate accountability as a fertile area for the emergence of salience as a source of international law. Dworkin’s description of salience and its law-forming capacity speaks to what can take place and what is taking place in the development of this area of law. Salience presents a theoretical construct that can nurture the development of coherent and extra-referential standards for judicial engagement with extra-territorial corporate wrongs. Thus the use of salience in the context of international corporate accountability is well-suited for the specific task at hand and can offer a stage whereby salience can prove its worth and legitimacy as a source of international law.

  10. Voronoi poles-based saliency feature detection from point clouds

    Science.gov (United States)

    Xu, Tingting; Wei, Ning; Dong, Fangmin; Yang, Yuanqin

    2016-12-01

    In this paper, we present a novel algorithm for point cloud feature detection. First, the algorithm estimates a local feature for each sample point by computing the ratio of the distances from the inner and outer Voronoi poles to the surface. Then the global surface saliency feature is detected by summing the difference-of-Gaussian results for the local feature under different scales. Compared with state-of-the-art methods, our algorithm has higher computational efficiency and more accurate feature detection for sharp edges. The detected saliency features are applied as weights for surface mesh simplification. The numerical results for mesh simplification show that our method preserves more detail of key features than traditional methods.

  11. Shadow detection and removal based on the saliency map

    Science.gov (United States)

    Fang, Zhiwen; Cao, Zhiguo; Deng, Chunhua; Yan, Ruicheng; Qin, Yueming

    2013-10-01

    The detection of shadow is the first step in reducing the imaging effects caused by the interaction of the light source with surfaces; shadow removal can then recover the vein information from the dark region. In this paper, we present a new method that detects the shadow in a single natural image with the saliency map and then removes it. First, the RGB image is transformed in order to enhance the blue component. Second, the saliency map of the blue component is extracted via graph-based manifold ranking. The shadow edge is then detected so that the transitional region between the shadow and non-shadow regions can be recovered. Finally, the shadow is compensated for by enhancing the image in RGB space. Experimental results show the effectiveness of the proposed method.

  12. Exploiting Surroundedness for Saliency Detection: A Boolean Map Approach.

    Science.gov (United States)

    Zhang, Jianming; Sclaroff, Stan

    2016-05-01

    We demonstrate the usefulness of surroundedness for eye fixation prediction by proposing a Boolean Map based Saliency model (BMS). In our formulation, an image is characterized by a set of binary images, generated by randomly thresholding the image's feature maps in a whitened feature space. Based on the Gestalt principle of figure-ground segregation, BMS computes a saliency map by discovering surrounded regions via topological analysis of Boolean maps. Furthermore, we draw a connection between BMS and the Minimum Barrier Distance to provide insight into why and how BMS can properly capture the surroundedness cue via Boolean maps. The strength of BMS is verified by its simplicity, efficiency and superior performance compared with 10 state-of-the-art methods on seven eye tracking benchmark datasets.
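
    The surroundedness computation at the heart of this idea can be sketched as follows. This is a minimal illustration on a single scalar feature map; the full model whitens several feature channels and samples thresholds randomly, so the threshold sweep and helper names here are assumptions for the example.

```python
import numpy as np
from scipy.ndimage import label

def surrounded(mask):
    """Keep only connected components of a binary map that do not touch
    the image border (the Gestalt 'surrounded' figure regions)."""
    lab, n = label(mask)
    border_labels = set(np.unique(np.r_[lab[0], lab[-1], lab[:, 0], lab[:, -1]]))
    inner = [i for i in range(1, n + 1) if i not in border_labels]
    return np.isin(lab, inner)

def boolean_map_saliency(feature, n_thresh=8):
    """Accumulate surrounded regions over a sweep of Boolean maps."""
    sal = np.zeros(feature.shape, dtype=float)
    for t in np.linspace(feature.min(), feature.max(), n_thresh + 2)[1:-1]:
        b = feature > t
        sal += surrounded(b)     # figure on the bright side
        sal += surrounded(~b)    # figure on the dark side
    return sal / sal.max() if sal.max() > 0 else sal

feature = np.zeros((20, 20))
feature[8:12, 8:12] = 1.0        # an enclosed bright patch
sal = boolean_map_saliency(feature)
```

    An enclosed patch is "surrounded" at every threshold and accumulates saliency, while regions touching the border never do.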

  13. Visual Saliency and Attention as Random Walks on Complex Networks

    CERN Document Server

    Costa, L F

    2006-01-01

    The unmatched versatility of vision in mammals depends critically on purposive eye movements and selective attention guided by saliencies in the presented images. The current article shows how concepts and tools from the areas of random walks, Markov chains, complex networks and artificial image analysis can be naturally combined to provide a unified and biologically plausible model for saliency detection and visual attention, which become indistinguishable in the process. Images are converted into complex networks by considering pixels as nodes, while connections are established in terms of fields of influence defined by visual features such as tangent fields induced by luminance contrasts, distance, and size. Random walks are performed on such networks in order to emulate attentional shifts and even eye movements in the case of large shapes, and the frequency of visits to each node is conveniently obtained from the eigenequation defined by the stochastic matrix associated with the respectively drive...
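
    The core computation, the visit frequencies of a random walk on a weighted network, can be sketched as a power iteration on the row-normalised transition matrix. The toy weight matrix below is an assumption for illustration, not the tangent-field construction of the paper.

```python
import numpy as np

def stationary_distribution(W, tol=1e-12, max_iter=10000):
    """Frequency of visits of a random walk = stationary distribution
    of the row-stochastic transition matrix derived from weights W."""
    P = W / W.sum(axis=1, keepdims=True)   # normalise weights to probabilities
    pi = np.full(P.shape[0], 1.0 / P.shape[0])
    for _ in range(max_iter):
        nxt = pi @ P
        if np.abs(nxt - pi).sum() < tol:
            break
        pi = nxt
    return pi

# toy 'image network': node 2 receives strong incoming weights (a salient node)
W = np.array([[1.0, 1.0, 5.0],
              [1.0, 1.0, 5.0],
              [1.0, 1.0, 1.0]])
pi = stationary_distribution(W)
```

    The node attracting the strongest fields of influence is visited most often, which is exactly what the model reads out as saliency.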

  14. Mortality Salience, Self-esteem and Status Seeking

    OpenAIRE

    2013-01-01

    According to the Terror Management Theory, the fear of death may induce anxiety and threaten individual self-esteem. To remove this fear, individuals need to obtain and sustain self-esteem, for example by competing in rank order tournaments, or by focusing on status seeking. Within an experimental setting, this paper investigates the effect of Mortality Salience on individual productivity, manipulating the information on subjects’ relative performance in a real-effort task where the economic ...

  15. An experimental field study of weight salience and food choice.

    Science.gov (United States)

    Incollingo Rodriguez, Angela C; Finch, Laura E; Buss, Julia; Guardino, Christine M; Tomiyama, A Janet

    2015-06-01

    Laboratory research has found that individuals will consume more calories and make unhealthy food choices when in the presence of an overweight individual, sometimes even regardless of what that individual is eating. This study expanded these laboratory paradigms to the field to examine how weight salience influences eating in the real world. More specifically, we tested the threshold of the effect of weight salience on food choice, to see whether a more subtle weight cue (e.g., images) would be sufficient to affect food choice. Attendees (N = 262) at Obesity Week 2013, a weight-salient environment, viewed slideshows containing an image of an overweight individual, an image of a thin individual, or no image (text only), and then selected from complimentary snacks. Results of an ordinal logistic regression analysis showed that participants who viewed the image of the overweight individual had higher odds of selecting the higher calorie snack compared to those who viewed the image of the thin individual (OR = 1.77, 95% CI = [1.04, 3.04]) or no image (OR = 2.42, 95% CI = [1.29, 4.54]). Perceiver BMI category did not moderate the influence of image on food choice, as these results occurred regardless of participant BMI. These findings suggest that in the context of societal weight salience, weight-related cues alone may promote unhealthy eating in the general public.
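
    For readers unfamiliar with the statistic reported above, an odds ratio compares the odds of an outcome between two groups. The probabilities below are invented numbers for illustration only, not the study's data.

```python
def odds_ratio(p_exposed, p_control):
    """Odds ratio between two groups' probabilities of an outcome
    (here: choosing the higher-calorie snack; illustrative values)."""
    odds = lambda p: p / (1 - p)
    return odds(p_exposed) / odds(p_control)

# e.g. 60% high-calorie choices after one cue vs 45% after another
or_ = odds_ratio(0.60, 0.45)
```

    An odds ratio above 1 means the first group is more likely to select the outcome; the study's OR = 1.77 is read the same way.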

  16. Impact of feature saliency on visual category learning.

    Science.gov (United States)

    Hammer, Rubi

    2015-01-01

    People have to sort numerous objects into a large number of meaningful categories while operating in varying contexts. This requires identifying the visual features that best predict the 'essence' of objects (e.g., edibility), rather than categorizing objects based on the most salient features in a given context. To gain this capacity, visual category learning (VCL) relies on multiple cognitive processes. These may include unsupervised statistical learning, which requires observing multiple objects in order to learn the statistics of their features. Other learning processes enable incorporating different sources of supervisory information, alongside the visual features of the categorized objects, from which the categorical relations between a few objects can be deduced. These deductions enable inferring that objects from the same category may differ from one another in some high-saliency feature dimensions, whereas lower-saliency feature dimensions may best differentiate objects from distinct categories. Here I illustrate how feature saliency affects VCL, while also discussing the kinds of supervisory information that enable reflective categorization. Arguably, the principles discussed here are often ignored in categorization studies.

  17. Bottom-up electrochemical preparation of solid-state carbon nanodots directly from nitriles/ionic liquids using carbon-free electrodes and the applications in specific ferric ion detection and cell imaging

    Science.gov (United States)

    Niu, Fushuang; Xu, Yuanhong; Liu, Mengli; Sun, Jing; Guo, Pengran; Liu, Jingquan

    2016-03-01

    Carbon nanodots (C-dots), a new type of potential alternative to conventional semiconductor quantum dots, have attracted considerable attention for various applications, including bio-chemical sensing and cell imaging, due to their chemical inertness, low toxicity and flexible functionalization. Various methods, including electrochemical (EC) methods, have been reported for the synthesis of C-dots; however, complex procedures and/or carbon-source-containing electrodes are often required. Herein, solid-state C-dots were simply prepared by bottom-up EC carbonization of nitriles (e.g. acetonitrile) in the presence of an ionic liquid [e.g. 1-butyl-3-methylimidazolium hexafluorophosphate (BMIMPF6)], using carbon-free electrodes. Due to the positive charges of BMIM+ on the C-dots, the final product precipitated on the cathode, and the unreacted nitriles and BMIMPF6 could easily be removed by simple vacuum filtration. The as-prepared solid-state C-dots can be well dispersed in an aqueous medium with excellent photoluminescence properties. The average size of the C-dots was 3.02 ± 0.12 nm, as evidenced by transmission electron microscopy. Other techniques such as UV-vis spectroscopy, fluorescence spectroscopy, X-ray photoelectron spectroscopy and atomic force microscopy were applied to characterize the C-dots and to analyze the possible generation mechanism. These C-dots have been successfully applied in efficient cell imaging and specific ferric ion detection.

  18. A novel bottom-up process to produce thymopentin nanoparticles and their formulation optimization

    Institute of Scientific and Technical Information of China (English)

    单紫筠; 谭银合; 杨志文; 余思琴; 陈宝; 吴传斌

    2012-01-01

    Objective: To develop a bottom-up process for preparing thymopentin (TP-5) nanoparticles and to optimize the formulation, laying the groundwork for a pressurized metered-dose inhaler (pMDI) of TP-5. Methods: A solution of TP-5, lecithin and lactose in a tert-butyl alcohol (TBA)/water co-solvent system was freeze-dried to generate nanoparticles; residual lecithin was removed by suspending the lyophilizate in isopropanol and centrifuging, leaving pure drug nanoparticles. The amounts of water, lecithin and TP-5 were optimized with the central composite design-response surface methodology. As the retained TP-5 content did not vary significantly with these formulation parameters, only the particle size and size distribution of the nanoparticles were taken as response parameters. Results: The optimal ratios of water to TBA, lecithin to TBA and TP-5 to water were 0.5 (mL:mL), 213.5 (mg:mL) and 17.0 (mg:mL) respectively, i.e. 1.5 mL water, 640.57 mg lecithin, 25.57 mg TP-5 and 3.0 mL TBA. Nanoparticles prepared with this formulation had a particle size of about 150 nm, a polydispersity index below 0.1, and a retained TP-5 content above 98%. Conclusion: The method produces high-quality TP-5 nanoparticles simply and reproducibly, and shows good application prospects.

  19. Integrating top-down and bottom-up approaches to design a cost-effective and equitable programme of measures for adaptation of a river basin to global change

    Science.gov (United States)

    Girard, Corentin; Rinaudo, Jean-Daniel; Pulido-Velazquez, Manuel

    2016-04-01

    Adaptation to the multiple facets of global change challenges the conventional means of sustainably planning and managing water resources at the river basin scale. Numerous demand- or supply-management options are available, from which adaptation measures need to be selected in a context of high uncertainty about future conditions. Given the interdependency of water users, agreements need to be found at the local level to implement the most effective adaptation measures. Therefore, this work develops an approach combining economics and water resources engineering to select a cost-effective programme of adaptation measures under climate change uncertainty, and to define an equitable allocation of the cost of the adaptation plan between the stakeholders involved. A framework is developed to integrate inputs from the two main approaches commonly used to plan for adaptation. The first, referred to as "top-down", consists of a modelling chain going from global greenhouse gas emission scenarios to local hydrological models used to assess the impact of climate change on water resources. The second, called "bottom-up", starts by assessing vulnerability at the local level and then identifies adaptation measures to face an uncertain future. The methodological framework presented in this contribution relies on a combination of these two approaches to support the selection of adaptation measures at the local level. Outcomes from the two approaches are integrated to select a cost-effective combination of adaptation measures through a least-cost optimization model developed at the river basin scale. The performance of the programme of measures is assessed under different climate projections to identify cost-effective and least-regret adaptation measures. The issue of allocating the cost of the adaptation plan is considered from two complementary perspectives. 
The outcome of a negotiation process between the stakeholders is modelled through
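
    The least-cost selection step can be illustrated with a toy exhaustive search over a portfolio of measures. The measure names, costs and savings target below are invented for the example; the study itself uses a basin-scale optimization model, not this enumeration.

```python
from itertools import combinations

# hypothetical measures: (name, annual cost in M-euro, water saved in hm3/yr)
measures = [("drip irrigation", 4.0, 10.0),
            ("leak repair",     2.5,  6.0),
            ("reuse plant",     6.0, 14.0),
            ("new reservoir",   9.0, 20.0)]

def least_cost_programme(measures, target):
    """Cheapest subset of measures meeting the savings target (exhaustive search)."""
    best = None
    for r in range(len(measures) + 1):
        for combo in combinations(measures, r):
            cost = sum(m[1] for m in combo)
            saved = sum(m[2] for m in combo)
            if saved >= target and (best is None or cost < best[0]):
                best = (cost, [m[0] for m in combo])
    return best

cost, chosen = least_cost_programme(measures, target=20.0)
```

    For realistic numbers of measures and multiple climate scenarios, the enumeration would be replaced by a linear or mixed-integer programme, as the paper describes.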

  20. The Formation of Sustainable Urban Communities: A Bottom-up Perspective

    Institute of Scientific and Technical Information of China (English)

    张慧; 莫嘉诗; 王斯福

    2015-01-01

    Based on fieldwork in four communities in Kunming, and taking the residents' perspective, this paper discusses social relationships among residents, public trust, sense of belonging, dispute resolution and social integration in the formation of urban communities. By examining how policy is implemented and how communities operate at the grassroots level, it considers the issue of sustainability in both community management and urban planning. The research is part of the EU-funded project "UrbaChina: Sustainable Urbanization in China", which aims to reflect, from a holistic perspective, on the impact rapid urbanization has on residents' way of life.

  1. Influence of Personality Traits in Self-Evaluative Salience, Motivational Salience and Self-Consciousness of Appearance

    Directory of Open Access Journals (Sweden)

    José Carlos da Silva Mendes

    2016-12-01

    Full Text Available Aim: The aim of this study was to understand the possible influence of personality traits on the importance individuals attach to body image and on self-awareness of appearance. Method: 214 online-recruited subjects between the ages of 17 and 64 years completed a socio-demographic questionnaire and the Portuguese versions of the NEO-FFI (NEO Five-Factor Inventory), the ASI-R (Appearance Schemas Inventory – Revised) and the DAS-24 (Derriford Appearance Scale – short form). Results: Age and the Neuroticism and Agreeableness dimensions significantly influenced an individual's investment in body image and self-awareness of appearance. Sexual orientations differed with regard to Self-Evaluative Salience and Self-Consciousness of Appearance. Conclusion: The analysis showed that Neuroticism and Agreeableness are related to Self-Evaluative Salience and Self-Consciousness of Appearance.

  2. Key Object Discovery and Tracking Based on Context-Aware Saliency

    Directory of Open Access Journals (Sweden)

    Geng Zhang

    2013-01-01

    Full Text Available In this paper, we propose an online key object discovery and tracking system based on visual saliency. We formulate the problem as a temporally consistent binary labelling task on a conditional random field and solve it by using a particle filter. We also propose a context-aware saliency measurement, which can be used to improve the accuracy of any static or dynamic saliency map. Our refined saliency maps provide clearer indications as to where the key object lies. Based on good saliency cues, we can further segment the key object inside the resulting bounding box, considering the spatial and temporal context. We tested our system extensively on different video clips. The results show that our method significantly improves the saliency maps and tracks the key object accurately.

  3. A top-down / bottom-up approach for multi-actor and multi-criteria assessment of mining projects for sustainable development. Application on the Arlit Uranium mines (Niger)

    Energy Technology Data Exchange (ETDEWEB)

    Chamaret, A

    2007-06-15

    This thesis aims to appraise the relevance of using a hybrid top-down / bottom-up approach to evaluate mining projects from the perspective of sustainable development. With the advent of the concepts of corporate social responsibility and sustainable development, new social expectations have appeared towards companies that go beyond the sole requirement of profitability. If companies do not answer these expectations, they risk losing their social legitimacy. Traditionally associated with social, environmental, economic and political impacts and risks, mining activity is particularly concerned by these new issues. Whereas the need for mineral resources has never been so high, mining companies are now expected to limit their negative effects and to take their different audiences' expectations into account in order to define, together, the terms of their social license to operate. Considering the diversity of issues, scales, actors and contexts, the challenge is real and requires tools to better understand the issues and to structure dialogue. Based on the case study of the uranium mines of Arlit (Niger), this work shows that associating participatory approaches with structuring tools and propositions from the literature is an effective way to organize the diversity of issues and to build a structured dialogue between mining companies and their stakeholders. The first part presents the theoretical, institutional and sectorial contexts of the thesis. The second part presents the work and results of the evaluation carried out in Niger. The third part draws the conclusions of this work and proposes an evaluation framework potentially applicable to other mining sites. (author)

  4. Image Transformation using Modified Kmeans clustering algorithm for Parallel saliency map

    Directory of Open Access Journals (Sweden)

    Aman Sharma

    2013-08-01

    Full Text Available The goal is to design an image transformation system; depending on the transform chosen, the input and output images may appear entirely different and have different interpretations. The system transforms an image with the help of several modules: the input image, an image cluster index, the objects in each cluster, and a colour-index transformation. The K-means clustering algorithm is used to cluster the image for better segmentation. In the proposed method, a parallel saliency algorithm with K-means clustering is used to avoid local minima and to find the saliency map. The reason for using the parallel saliency algorithm is that it proves more effective than existing saliency algorithms.
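
    The K-means step on pixel colour vectors can be sketched minimally in NumPy. The deterministic initialisation and toy data are assumptions for the example; the paper's parallel saliency computation is not reproduced here.

```python
import numpy as np

def kmeans(pixels, k=3, iters=20):
    """Plain k-means on (n, 3) colour vectors."""
    # simple deterministic init: spread initial centres across the data
    centers = pixels[np.linspace(0, len(pixels) - 1, k).astype(int)].astype(float)
    for _ in range(iters):
        # assign each pixel to the nearest centre
        d = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        # move each centre to the mean of its cluster
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean(axis=0)
    return labels, centers

# toy 'image': two colour populations (dark and bright pixels)
pixels = np.vstack([np.full((50, 3), 10.0), np.full((50, 3), 200.0)])
labels, centers = kmeans(pixels, k=2)
```

    The resulting cluster labels play the role of the "image cluster index" module: each pixel is mapped to the index of its colour cluster before the saliency stage.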

  5. Categorisation salience and ingroup bias: the buffering role of a multicultural ideology.

    Science.gov (United States)

    Costa-Lopes, Rui; Pereira, Cícero Roberto; Judd, Charles M

    2014-12-01

    The current work sought to test the moderating role of a multicultural ideology on the relationship between categorisation salience and ingroup bias. Accordingly, in one experimental study, we manipulated categorisation salience and the accessibility of a multicultural ideology, and measured intergroup attitudes. Results show that categorisation salience leads to ingroup bias only when a multicultural (MC) ideology is not made salient. Thus, MC ideology attenuates the negative effects of categorisation salience on ingroup bias. These results speak to social psychology in general, showing that cognitive processes should be construed within the framework of ideological contexts.

  6. Motivational salience signal in the basal forebrain is coupled with faster and more precise decision speed.

    Science.gov (United States)

    Avila, Irene; Lin, Shih-Chieh

    2014-03-01

    The survival of animals depends critically on prioritizing responses to motivationally salient stimuli. While it is generally believed that motivational salience increases decision speed, the quantitative relationship between motivational salience and decision speed, measured by reaction time (RT), remains unclear. Here we show that the neural correlate of motivational salience in the basal forebrain (BF), defined independently of RT, is coupled with faster and also more precise decision speed. In rats performing a reward-biased simple RT task, motivational salience was encoded by BF bursting response that occurred before RT. We found that faster RTs were tightly coupled with stronger BF motivational salience signals. Furthermore, the fraction of RT variability reflecting the contribution of intrinsic noise in the decision-making process was actively suppressed in faster RT distributions with stronger BF motivational salience signals. Artificially augmenting the BF motivational salience signal via electrical stimulation led to faster and more precise RTs and supports a causal relationship. Together, these results not only describe for the first time, to our knowledge, the quantitative relationship between motivational salience and faster decision speed, they also reveal the quantitative coupling relationship between motivational salience and more precise RT. Our results further establish the existence of an early and previously unrecognized step in the decision-making process that determines both the RT speed and variability of the entire decision-making process and suggest that this novel decision step is dictated largely by the BF motivational salience signal. Finally, our study raises the hypothesis that the dysregulation of decision speed in conditions such as depression, schizophrenia, and cognitive aging may result from the functional impairment of the motivational salience signal encoded by the poorly understood noncholinergic BF neurons.

  7. Fixations on objects in natural scenes: dissociating importance from salience

    Directory of Open Access Journals (Sweden)

    Bernard Marius ’t Hart

    2013-07-01

    Full Text Available The relation of selective attention to the understanding of natural scenes has been subject to intense behavioral research and computational modeling, and gaze is often used as a proxy for such attention. The probability of an image region being fixated typically correlates with its contrast. However, this relation does not imply a causal role of contrast. Rather, contrast may relate to an object's importance for a scene, which in turn drives attention. Here we operationalize importance by the probability that an observer names the object as characteristic for a scene. We modify the luminance contrast of either a frequently named (common/important) or a rarely named (rare/unimportant) object, track the observers' eye movements during scene viewing, and ask them to provide keywords describing the scene immediately after. When no object is modified relative to the background, important objects draw more fixations than unimportant ones. Increases in contrast make an object more likely to be fixated, irrespective of whether it was important for the original scene, while decreases in contrast have little effect on fixations. Any contrast modification makes originally unimportant objects more important for the scene. Finally, important objects are fixated more centrally than unimportant objects, irrespective of contrast. Our data suggest a dissociation between object importance (relevance for the scene) and salience (relevance for attention). If an object obeys natural scene statistics, important objects are also salient. However, when natural scene statistics are violated, importance and salience are differentially affected. Object salience is modulated by the expectation about object properties (e.g., formed by context or gist), and importance by the violation of such expectations. In addition, the dependence of fixated locations within an object on the object's importance suggests an analogy to the effects of word frequency on landing positions in reading.

  8. Salience and Blindness: A Haptic Hike on Gins Mountain

    OpenAIRE

    Garnier, Marie-Dominique

    2016-01-01

    This paper attempts to follow an improbable ridge line between architecture, geography and linguistics, between the optic and haptic ends of the concept of salience, through a reading of Helen Keller Or Arakawa, Madeline Gins’s 1994 essay-cum-joint-biography partly devoted to “salience” approached through the blind figure of Helen Keller (1880-1968). In a chapter titled “Or Mountains Or Lines”, prominent features envisaged from a sighted perception give way, under the condition of blindness, ...

  9. Transitioning between Work and Family Roles as a Function of Boundary Flexibility and Role Salience

    Science.gov (United States)

    Winkel, Doan E.; Clayton, Russell W.

    2010-01-01

    This study investigates the manner in which people separate their work and family roles and how they manage the boundaries of these two important roles. Specifically, we focus on how role flexibility and salience influence transitions between roles. Results indicate that the ability and willingness to flex a role boundary and role salience are…

  10. Role salience of dual-career women managers

    Directory of Open Access Journals (Sweden)

    Anthony V Naidoo

    2002-10-01

    Full Text Available This study examines and contrasts the levels of role participation, commitment and value expectation that dual-career women invest in the contending work and family roles. While the 162 married women managers were found to participate significantly more in the work role, they indicated greater commitment to, and value expectation from, the home and family role. A significant positive correlation between commitment to the work role and commitment to the home and family role suggests that dual-career women may experience work and home as complementary rather than conflicting roles. For dual-career women, work salience and career salience were found to be moderately correlated. Summary (translated from the Afrikaans): In this study, the levels of role participation, role commitment and role value expectation that dual-career women invest in the work and family roles respectively are contrasted. While the 162 married women managers participated significantly more in the work role, they showed greater commitment to, and value expectation from, the home-and-family role. A significant positive correlation was found between commitment to the work role and commitment to the home-and-family role, suggesting that dual-career women experience their work and family roles as complementary rather than conflicting. Work-role salience and career salience were further found to be moderately correlated.

  11. Inherent Difference in Saliency for Generators with Different PM Materials

    Directory of Open Access Journals (Sweden)

    Sandra Eriksson

    2014-01-01

    Full Text Available The inherent differences between salient and nonsalient electrical machines are evaluated for two permanent magnet generators with different configurations. The neodymium-based (NdFeB) permanent magnets (PMs) in a generator are substituted with ferrite magnets, and the characteristics of the NdFeB generator and the ferrite generator are compared through FEM simulations. The NdFeB generator is a nonsalient generator, whereas the ferrite machine is a salient-pole generator with small saliency. The two generators have almost identical properties at rated load operation. At overload, however, the behaviour of the two generators differs. The salient-pole ferrite generator has a lower maximum torque than the NdFeB generator and a larger voltage drop at high current. It is concluded that, for applications where overload capability is important, saliency must be considered and the generator design adapted according to the behaviour at overload operation. Furthermore, if maximum torque is the design criterion, additional PM mass will be required for the salient-pole machine.
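
    The overload difference comes from the reluctance term in the standard dq-frame torque expression for a PM synchronous machine, T = (3/2) p (psi_m i_q + (L_d - L_q) i_d i_q), which vanishes when L_d = L_q. A small sketch with invented machine parameters (not the generators of the study):

```python
def pmsm_torque(psi_m, Ld, Lq, i_d, i_q, pole_pairs=4):
    """Electromagnetic torque in the dq frame: magnet torque plus
    reluctance torque (the saliency term vanishes when Ld == Lq)."""
    return 1.5 * pole_pairs * (psi_m * i_q + (Ld - Lq) * i_d * i_q)

# non-salient (NdFeB-like): Ld == Lq, so no reluctance torque
t_nonsalient = pmsm_torque(psi_m=0.2, Ld=0.004, Lq=0.004, i_d=-10, i_q=50)
# salient-pole (ferrite-like): Lq > Ld, so negative i_d adds reluctance torque
t_salient = pmsm_torque(psi_m=0.1, Ld=0.004, Lq=0.006, i_d=-10, i_q=50)
```

    With its weaker magnet flux, the salient machine leans on the reluctance term, which is why its maximum torque and voltage behaviour diverge from the nonsalient design at overload.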

  12. Perceptual Object Extraction Based on Saliency and Clustering

    Directory of Open Access Journals (Sweden)

    Qiaorong Zhang

    2010-08-01

    Full Text Available Object-based visual attention has received increasing interest in recent years. The perceptual object is the basic attention unit of object-based visual attention, and the definition and extraction of perceptual objects is one of the key technologies in object-based visual attention computation models. A novel perceptual object definition and extraction method is proposed in this paper. Based on Gestalt theory and visual feature integration theory, a perceptual object is defined using homogeneity regions, salient regions and edges. An improved saliency map generating algorithm is employed first, and salient edges are extracted based on the saliency map. A graph-based clustering algorithm is then introduced to obtain homogeneity regions in the image. Finally, an integration strategy is adopted to combine salient edges and homogeneity regions to extract perceptual objects. The proposed method has been tested on a large set of natural images; experimental results show that it is reasonable and valid.

  13. Multimodal region-consistent saliency based on foreground and background priors for indoor scene

    Science.gov (United States)

    Zhang, J.; Wang, Q.; Zhao, Y.; Chen, S. Y.

    2016-09-01

    Visual saliency is a very important feature for object detection in complex scenes. However, image-based saliency is influenced by cluttered backgrounds and similar objects in indoor scenes, and pixel-based saliency cannot assign consistent saliency to a whole object. Therefore, in this paper, we propose a novel method that computes visual saliency maps from multimodal data obtained from indoor scenes whilst keeping region consistency. Multimodal data from a scene are first obtained by an RGB+D camera. The scene is then segmented into over-segments by a self-adapting approach that combines its colour image and depth map. Based on these over-segments, we develop two cues as domain knowledge to improve the final saliency map: focus regions obtained from the colour images, and planar background structures obtained from the point cloud data. Thus, our saliency map is generated by compounding the information of the colour data, the depth data and the point cloud data of a scene. In the experiments, we extensively compare the proposed method with state-of-the-art methods, and we also apply it to a real robot system to detect objects of interest. The experimental results show that the proposed method outperforms the others in terms of precision and recall.
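
    The region-consistency idea, that all pixels in one segment should share one saliency value, can be sketched very simply. The toy arrays below are illustrative and stand in for the paper's RGB+D over-segmentation pipeline.

```python
import numpy as np

def region_consistent_saliency(sal, regions):
    """Assign every pixel the mean saliency of its region, so a whole
    object receives one consistent saliency value."""
    out = np.zeros_like(sal, dtype=float)
    for r in np.unique(regions):
        mask = regions == r
        out[mask] = sal[mask].mean()
    return out

# toy 2x2 example: left column is region 0, right column is region 1
sal = np.array([[0.9, 0.1],
                [0.8, 0.2]])
regions = np.array([[0, 1],
                    [0, 1]])
rc = region_consistent_saliency(sal, regions)
```

    Averaging within segments suppresses pixel-level noise while preserving the contrast between object and background regions.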

  14. Salience and Attention in Surprisal-Based Accounts of Language Processing

    Science.gov (United States)

    Zarcone, Alessandra; van Schijndel, Marten; Vogels, Jorrig; Demberg, Vera

    2016-01-01

    The notion of salience has been singled out as the explanatory factor for a diverse range of linguistic phenomena. In particular, perceptual salience (e.g., visual salience of objects in the world, acoustic prominence of linguistic sounds) and semantic-pragmatic salience (e.g., prominence of recently mentioned or topical referents) have been shown to influence language comprehension and production. A different line of research has sought to account for behavioral correlates of cognitive load during comprehension as well as for certain patterns in language usage using information-theoretic notions, such as surprisal. Surprisal and salience both affect language processing at different levels, but the relationship between the two has not been adequately elucidated, and the question of whether salience can be reduced to surprisal / predictability is still open. Our review identifies two main challenges in addressing this question: terminological inconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalize upon work in visual cognition in order to orient ourselves in surveying the different facets of the notion of salience in linguistics and their relation with models of surprisal. We find that work on salience highlights aspects of linguistic communication that models of surprisal tend to overlook, namely the role of attention and relevance to current goals, and we argue that the Predictive Coding framework provides a unified view which can account for the role played by attention and predictability at different levels of processing and which can clarify the interplay between low and high levels of processes and between predictability-driven expectation and attention-driven focus. PMID:27375525
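
    Surprisal, the information-theoretic notion discussed above, is simply the negative log-probability of a word given its context. A toy illustration (the conditional distribution below is invented):

```python
import math

def surprisal(p):
    # surprisal in bits: -log2 P(word | context)
    return -math.log2(p)

# toy conditional distribution P(word | "the")
p_next = {"cat": 0.5, "dog": 0.25, "quark": 0.05}
scores = {w: surprisal(p) for w, p in p_next.items()}
```

    Highly predictable continuations carry little information, while improbable ones carry much; surprisal-based accounts link this quantity to processing cost.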

  15. Learning to predict where human gaze is using quaternion DCT based regional saliency detection

    Science.gov (United States)

    Li, Ting; Xu, Yi; Zhang, Chongyang

    2014-09-01

    Many current visual attention approaches use semantic features to accurately capture human gaze. However, these approaches demand high computational cost and can hardly be applied in daily use. Recently, some quaternion-based saliency detection models, such as PQFT (phase spectrum of Quaternion Fourier Transform) and QDCT (Quaternion Discrete Cosine Transform), have been proposed to meet the real-time requirements of human gaze tracking tasks. However, current saliency detection methods use global PQFT and QDCT to locate jump edges of the input, which can hardly detect object boundaries accurately. To address this problem, we improve the QDCT-based saliency detection model by introducing a superpixel-wise regional saliency detection mechanism. The local smoothness of the saliency value distribution is emphasized to distinguish background noise from salient regions. Our measure, called saliency confidence, distinguishes patches belonging to the salient object from those of the background, deciding whether image patches belong to the same region: when an image patch belongs to a region consisting of other salient patches, that patch should be salient as well. We therefore use the saliency confidence map to derive background and foreground weights for optimizing the saliency map obtained by QDCT; the optimization is accomplished by the least squares method. The proposed optimization approach unifies local and global saliency by combining QDCT with similarity measurements between image superpixels. We evaluate our model on four commonly used datasets (Toronto, MIT, OSIE and ASD) using standard precision-recall (PR) curves, the mean absolute error (MAE) and area-under-curve (AUC) measures. In comparison with most state-of-the-art models, our approach achieves higher consistency with human perception without training, locates human gaze accurately even against cluttered backgrounds, and achieves a better compromise between speed and accuracy.
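    Setting the quaternion machinery aside, the core DCT saliency step can be illustrated with its real-valued cousin, the "image signature" (keep only the sign of the DCT spectrum, invert, square, blur). This is a simplified sketch under that substitution, not the authors' QDCT pipeline:

```python
import numpy as np
from scipy.fft import dctn, idctn
from scipy.ndimage import gaussian_filter

def signature_saliency(gray, sigma=3.0):
    """Saliency from the sign of a grayscale image's DCT spectrum:
    the inverse transform of the sign pattern concentrates energy on
    sparse foreground structure; squaring and blurring yields a map."""
    sig = np.sign(dctn(gray, norm='ortho'))
    recon = idctn(sig, norm='ortho')
    sal = gaussian_filter(recon ** 2, sigma)
    sal -= sal.min()
    return sal / (sal.max() + 1e-12)  # normalize to [0, 1]
```

    On an image with a small blob against a flat background, the map peaks on the blob, which is the behavior the quaternion variants extend to multi-channel input.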

  16. Salience and attention in surprisal-based accounts of language processing

    Directory of Open Access Journals (Sweden)

    Alessandra eZarcone

    2016-06-01

    Full Text Available The notion of salience has been singled out as the explanatory factor for a diverse range oflinguistic phenomena. In particular, perceptual salience (e.g. visual salience of objects in the world,acoustic prominence of linguistic sounds and semantic-pragmatic salience (e.g. prominence ofrecently mentioned or topical referents have been shown to influence language comprehensionand production. A different line of research has sought to account for behavioral correlates ofcognitive load during comprehension as well as for certain patterns in language usage usinginformation-theoretic notions, such as surprisal. Surprisal and salience both affect languageprocessing at different levels, but the relationship between the two has not been adequatelyelucidated, and the question of whether salience can be reduced to surprisal / predictability isstill open. Our review identifies two main challenges in addressing this question: terminologicalinconsistency and lack of integration between high and low levels of representations in salience-based accounts and surprisal-based accounts. We capitalise upon work in visual cognition inorder to orient ourselves in surveying the different facets of the notion of salience in linguisticsand their relation with models of surprisal. We find that work on salience highlights aspects oflinguistic communication that models of surprisal tend to overlook, namely the role of attentionand relevance to current goals, and we argue that the Predictive Coding framework provides aunified view which can account for the role played by attention and predictability at different levelsof processing and which can clarify the interplay between low and high levels of processes andbetween predictability-driven expectation and attention-driven focus.

  17. Place recognition based on saliency for topological localization

    Institute of Scientific and Technical Information of China (English)

    WANG Lu; CAI Zi-xing

    2006-01-01

    Based on salient visual regions for mobile robot navigation in unknown environments, a new place recognition system is presented. The system uses a monocular camera to acquire omni-directional images of the environment in which the robot is located. Salient local regions are detected in these images using a center-surround difference method, which computes opponencies of color and texture across multi-scale image spaces. These regions are then organized using a hidden Markov model (HMM) to form the vertices of a topological map, so that localization, i.e. place recognition in our system, can be converted to evaluation of the HMM. Experimental results show that the saliency detection is immune to changes of scale, 2D rotation, viewpoint, etc. The created topological map is smaller in size, and a higher recognition rate is obtained.
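    The "evaluation of the HMM" mentioned above is the standard forward algorithm: score an observation sequence (here, detected salient-region labels) under each place's model and pick the best-scoring place. A hedged sketch of that evaluation step, with illustrative matrix names not taken from the paper:

```python
import numpy as np

def hmm_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | model) for an HMM with initial
    distribution pi (n,), transition matrix A (n, n), and emission
    matrix B (n, m); obs is a sequence of observation indices."""
    alpha = pi * B[:, obs[0]]          # joint prob. of state and first obs
    for o in obs[1:]:
        alpha = (alpha @ A) * B[:, o]  # propagate, then weight by emission
    return alpha.sum()
```

    Place recognition then reduces to `argmax` of `hmm_likelihood` over the per-place models stored at the topological map's vertices.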

  18. Mortality salience increases defensive distancing from people with terminal cancer.

    Science.gov (United States)

    Smith, Lauren M; Kasser, Tim

    2014-01-01

    Based on principles of terror management theory, the authors hypothesized that participants would distance more from a target person with terminal cancer than from a target with arthritis, and that this effect would be stronger following mortality salience. In Study 1, adults rated how similar their personalities were to a target person; in Study 2, participants arranged two chairs in preparation for meeting the target person. Both studies found that distancing from the person with terminal cancer increased after participants wrote about their own death (vs. giving a speech). Thus, death anxiety may explain why people avoid close contact with terminally ill people; further analyses suggest that gender and self-esteem may also influence such distancing from the terminally ill.

  19. Incentive salience attribution under reward uncertainty: A Pavlovian model.

    Science.gov (United States)

    Anselme, Patrick

    2015-02-01

    There is a vast literature on the behavioural effects of partial reinforcement in Pavlovian conditioning. Compared with animals receiving continuous reinforcement, partially rewarded animals typically show (a) a slower development of the conditioned response (CR) early in training and (b) a higher asymptotic level of the CR later in training. This phenomenon is known as the partial reinforcement acquisition effect (PRAE). Learning models of Pavlovian conditioning fail to account for it. In accordance with the incentive salience hypothesis, it is here argued that incentive motivation (or 'wanting') plays a more direct role in controlling behaviour than does learning, and reward uncertainty is shown to have an excitatory effect on incentive motivation. The psychological origin of that effect is discussed and a computational model integrating this new interpretation is developed. Many features of CRs under partial reinforcement emerge from this model.

  20. Mortality salience enhances racial in-group bias in empathic neural responses to others' suffering.

    Science.gov (United States)

    Li, Xiaoyang; Liu, Yi; Luo, Siyang; Wu, Bing; Wu, Xinhuai; Han, Shihui

    2015-09-01

    Behavioral research suggests that mortality salience (MS) leads to increased in-group identification and in-group favoritism in prosocial behavior. What remains unknown is whether and how MS influences brain activity that mediates emotional resonance with in-group and out-group members and is associated with in-group favoritism in helping behavior. The current work investigated MS effects on empathic neural responses to racial in-group and out-group members' suffering. Experiments 1 and 2 respectively recorded event-related potentials (ERPs) and blood oxygen level dependent signals to pain/neutral expressions of Asian and Caucasian faces from Chinese adults who had been primed with MS or negative affect (NA). Experiment 1 found that an early frontal/central activity (P2) was more strongly modulated by pain vs. neutral expressions of Asian than Caucasian faces, but this effect was not affected by MS vs. NA priming. However, MS relative to NA priming enhanced racial in-group bias in long-latency neural response to pain expressions over the central/parietal regions (P3). Experiment 2 found that MS vs. NA priming increased racial in-group bias in empathic neural responses to pain expression in the anterior and mid-cingulate cortex. Our findings indicate that reminders of mortality enhance brain activity that differentiates between racial in-group and out-group members' emotional states and suggest a neural basis of in-group favoritism under mortality threat.

  1. The Research and Application of Visual Saliency and Adaptive Support Vector Machine in Target Tracking Field

    Directory of Open Access Journals (Sweden)

    Yuantao Chen

    2013-01-01

    Efficient target tracking algorithms have become a focus of current research in intelligent robotics. The main problem in target tracking for mobile robots is environmental uncertainty: target states are difficult to estimate, and illumination change, target shape change, complex backgrounds, occlusion, and other factors all affect tracking robustness. To further improve tracking accuracy and reliability, we present a novel target tracking algorithm that uses visual saliency and an adaptive support vector machine (ASVM). The algorithm is based on a mixture saliency of image features, including color, brightness, and motion features; during execution, these common characteristics are expressed as the target's saliency. Numerous experiments demonstrate the effectiveness and timeliness of the proposed target tracking algorithm on video sequences in which the target objects undergo large changes in pose, scale, and illumination.
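    The fusion of per-feature saliency maps (reading the abstract's color, brightness, and "sport" features as color, brightness, and motion) can be sketched as a normalized weighted sum. The equal weights are an illustrative assumption, not the paper's scheme:

```python
import numpy as np

def mixture_saliency(color_map, brightness_map, motion_map,
                     weights=(1/3, 1/3, 1/3)):
    """Fuse per-feature saliency maps into one mixture map:
    normalize each channel to [0, 1], then take a weighted sum."""
    def norm(m):
        m = m.astype(float)
        rng = m.max() - m.min()
        return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)
    maps = [norm(m) for m in (color_map, brightness_map, motion_map)]
    return sum(w * m for w, m in zip(weights, maps))
```

    A location that stands out in every channel dominates the fused map, which is the signal a tracker can hand to its classifier.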

  2. Aspects of love: the effect of mortality salience and attachment style on romantic beliefs.

    Science.gov (United States)

    Smith, Rebecca; Massey, Emma

    Two studies are reported which explore romance as a means of terror management for participants with secure and insecure attachment styles. Mikulincer and Florian (2000) have shown that while mortality salience increases the desire for intimacy in securely attached individuals, the insecurely attached use cultural world views rather than close relationships to cope with fear of death. Study 1 used the romantic belief scale to compare the effects of attachment style and mortality salience on the cultural aspects of close relationships and showed that only the insecurely attached were more romantic following mortality salience. Study 2 replicated this effect and demonstrated that this difference was not simply due to lower self-esteem in the insecurely attached. The additional inclusion of the Relationship Assessment Questionnaire failed to provide any evidence that the securely attached were affected by the mortality salience manipulation, even on a more interpersonal measure.

  3. Salience of Tactile Cues: An Examination of Tactor Actuator and Tactile Cue Characteristics

    Science.gov (United States)

    2015-08-01

    ...section offers one approach to conceptualization and investigation of tactile display effectiveness. 1.2.3.1 Tactor Characteristics: Early studies of... and/or the ability of the user to attend to alerts. As an example, individuals with higher levels of neuroticism, emotional reactivity, and/or lower... conceptual framework of factors affecting tactile salience. Additional research is planned to investigate moderating variables on tactile salience. In...

  4. Image Fusion Based on Nonsubsampled Contourlet Transform and Saliency-Motivated Pulse Coupled Neural Networks

    OpenAIRE

    Liang Xu; Junping Du; Qingping Li

    2013-01-01

    In the nonsubsampled contourlet transform (NSCT) domain, a novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed. For the fusion of high-pass subbands in NSCT domain, a saliency-motivated PCNN model is proposed. The main idea is that high-pass subband coefficients are combined with their visual saliency maps as input to motivate PCNN. Coefficients with large firing times are employed as the fused high-pass subband coefficients. ...
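    The full NSCT + PCNN pipeline is involved, but the saliency-motivated selection it performs on high-pass coefficients can be reduced, for illustration, to a per-position rule: take each coefficient from whichever source image has the larger saliency-weighted activity. This is a crude stand-in for comparing PCNN firing times, with all names hypothetical:

```python
import numpy as np

def fuse_highpass(coeffA, coeffB, salA, salB):
    """Simplified stand-in for the saliency-motivated PCNN rule: at each
    position keep the high-pass coefficient from the source whose
    saliency-weighted magnitude (a proxy for firing times) is larger."""
    actA = np.abs(coeffA) * salA
    actB = np.abs(coeffB) * salB
    return np.where(actA >= actB, coeffA, coeffB)
```

    In the actual method the comparison is made on accumulated firing times of a PCNN driven by coefficient-plus-saliency input, rather than on this single-shot product.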

  5. A Multi-Channel Salience Based Detail Exaggeration Technique for 3D Relief Surfaces

    Institute of Scientific and Technical Information of China (English)

    Yong-Wei Miao; Jie-Qing Feng; Jin-Rong Wang; Renato Pajarola

    2012-01-01

    Visual saliency can direct the viewer's attention to the fine-scale mesostructure of complex 3D shapes. Building on a multi-channel salience measure and a salience-domain shape modeling technique, a novel visual-saliency-based shape depiction scheme is presented to exaggerate salient geometric details of the underlying relief surface. Our multi-channel salience measure is calculated by combining three feature maps: the 0-order feature map of local height distribution, the 1-order feature map of normal difference, and the 2-order feature map of mean curvature variation. The original relief surface is first manipulated by a salience-domain enhancement function, and the detail-exaggeration surface is then obtained by adjusting the surface normals of the original surface to the corresponding final normals of the manipulated surface. The advantage of our detail exaggeration technique is that it can adaptively alter the shading of the original shape to reveal visually salient features while keeping the desired appearance unimpaired. The experimental results demonstrate that our non-photorealistic shading scheme can enhance surface mesostructure effectively and thus improve the shape depiction of relief surfaces.

  6. DeepSaliency: Multi-Task Deep Neural Network Model for Salient Object Detection.

    Science.gov (United States)

    Li, Xi; Zhao, Liming; Wei, Lina; Yang, Ming-Hsuan; Wu, Fei; Zhuang, Yueting; Ling, Haibin; Wang, Jingdong

    2016-08-01

    A key problem in salient object detection is how to effectively model the semantic properties of salient objects in a data-driven manner. In this paper, we propose a multi-task deep saliency model based on a fully convolutional neural network with global input (whole raw images) and global output (whole saliency maps). In principle, the proposed saliency model takes a data-driven strategy for encoding the underlying saliency prior information, and then sets up a multi-task learning scheme for exploring the intrinsic correlations between saliency detection and semantic image segmentation. Through collaborative feature learning from such two correlated tasks, the shared fully convolutional layers produce effective features for object perception. Moreover, it is capable of capturing the semantic information on salient objects across different levels using the fully convolutional layers, which investigate the feature-sharing properties of salient object detection with a great reduction of feature redundancy. Finally, we present a graph Laplacian regularized nonlinear regression model for saliency refinement. Experimental results demonstrate the effectiveness of our approach in comparison with the state-of-the-art approaches.
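    The final refinement step, a graph-Laplacian-regularized regression, can be sketched in its simplest ridge form over a superpixel affinity graph: solve (I + λL)s = y, where L = D − W is the Laplacian of affinity matrix W and y is the raw saliency. This illustrative version is an assumption; the paper's exact regression model may differ:

```python
import numpy as np

def refine_saliency(raw, W, lam=0.5):
    """Smooth raw per-superpixel saliency scores over an affinity
    graph by solving the ridge system (I + lam * L) s = raw,
    where L = D - W is the (unnormalized) graph Laplacian."""
    D = np.diag(W.sum(axis=1))
    L = D - W
    n = len(raw)
    return np.linalg.solve(np.eye(n) + lam * L, raw)
```

    Because the rows and columns of L sum to zero, the solution redistributes saliency toward graph neighbors while preserving the total score mass.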

  7. Object tracking algorithm based on contextual visual saliency

    Science.gov (United States)

    Fu, Bao; Peng, XianRong

    2016-09-01

    In object tracking, the local context surrounding the target can provide much effective information for building a robust tracker. The spatial-temporal context (STC) learning algorithm proposed recently considers the information of the dense context around the target and has achieved better performance. However, STC uses only image intensity as the object appearance model, which is not enough to deal with complicated tracking scenarios. In this paper, we propose a novel object appearance model learning algorithm. Our approach formulates the spatial-temporal relationships between the object of interest and its local context in a Bayesian framework, which models the statistical correlation between high-level features (Circular Multi-Block Local Binary Patterns) from the target and its surrounding regions. The tracking problem is posed as computing a visual saliency map, and the best target location is obtained by maximizing an object location likelihood function. Extensive experimental results on public benchmark databases show that our algorithm outperforms the original STC algorithm and other state-of-the-art tracking algorithms.
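    The localization step in STC-style trackers, computing a confidence map and maximizing the object-location likelihood, reduces to an elementwise product of a spatial context prior and an appearance likelihood, followed by an argmax. A minimal sketch with illustrative names (the STC papers define these maps via learned filters):

```python
import numpy as np

def locate_target(context_prior, appearance_likelihood):
    """Confidence map = context prior * appearance likelihood
    (elementwise); the new target position is its argmax."""
    conf = context_prior * appearance_likelihood
    return np.unravel_index(np.argmax(conf), conf.shape)
```

    In the full algorithm both maps are updated every frame, so the prior suppresses locations the learned context rules out before the appearance model is consulted.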

  8. Relative saliency of pitch versus phonetic cues in infancy

    Science.gov (United States)

    Cardillo, Gina; Kuhl, Patricia; Sundara, Megha

    2005-09-01

    Infants in their first year are highly sensitive to different acoustic components of speech, including phonetic detail and pitch information. The present investigation examined whether relative sensitivity to these two dimensions changes during this period, as the infant acquires language-specific phonetic categories. If pitch and phonetic discrimination are hierarchical, then the relative salience of pitch and phonetic change may become reversed between 8 and 12 months of age. Thirty-two- and 47-week-old infants were tested using an auditory preference paradigm in which they first heard a recording of a person singing a 4-note song (i.e., ``go-bi-la-tu'') and were then presented with both the familiar and an unfamiliar, modified version of that song. Modifications were either a novel pitch order (keeping syllables constant) or a novel syllable order (keeping melody constant). Compared to the younger group, older infants were predicted to show greater relative sensitivity to syllable order than pitch order, in accordance with an increased tendency to attend to linguistically relevant information (phonetic patterns) as opposed to cues that are initially more salient (pitch patterns). Preliminary data show trends toward the predicted interaction, with preference patterns commensurate with previously reported data. [Work supported by the McDonnell Foundation and NIH.]

  9. The effects of mortality salience on escalation of commitment.

    Science.gov (United States)

    Yen, Chih-Long; Lin, Chun-Yu

    2012-01-01

    Based on propositions derived from terror management theory (TMT), the current study proposes that people who are reminded of their mortality exhibit a higher degree of self-justification behavior to maintain their self-esteem. For this reason, they could be expected to stick with their previous decisions and invest an increasing amount of resources in those decisions, despite the fact that negative feedback has clearly indicated that they might be on a course toward failure (i.e., "escalation of commitment"). Our experiment showed that people who were reminded of their mortality were more likely to escalate their level of commitment by maintaining their current course of action. Two imaginary scenarios were tested. One of the scenarios involved deciding whether to send additional troops into the battlefield when previous attempts had failed; the other involved deciding whether to continue developing an anti-radar fighter plane when the enemy had already developed a device to detect it. The results supported our hypothesis that mortality salience increases the tendency to escalate one's level of commitment.

  10. Subthalamic nucleus stimulation affects incentive salience attribution in Parkinson's disease.

    Science.gov (United States)

    Serranová, Tereza; Jech, Robert; Dušek, Petr; Sieger, Tomáš; Růžička, Filip; Urgošík, Dušan; Růžička, Evžen

    2011-10-01

    Deep brain stimulation (DBS) of the subthalamic nucleus (STN) can induce nonmotor side effects such as behavioral and mood disturbances or body weight gain in Parkinson's disease (PD) patients. We hypothesized that some of these problems could be related to an altered attribution of incentive salience (ie, emotional relevance) to rewarding and aversive stimuli. Twenty PD patients (all men; mean age ± SD, 58.3 ± 6 years) in bilateral STN DBS switched ON and OFF conditions and 18 matched controls rated pictures selected from the International Affective Picture System according to emotional valence (unpleasantness/pleasantness) and arousal on 2 independent visual scales ranging from 1 to 9. Eighty-four pictures depicting primary rewarding (erotica and food), aversive fearful (victims and threat), and neutral stimuli were selected for this study. In the STN DBS ON condition, the PD patients attributed lower valence scores to the aversive pictures compared with the OFF condition. Body weight gain correlated with arousal ratings for the food pictures in the STN DBS ON condition.

  11. [The effect of group size on salience of member desirability].

    Science.gov (United States)

    Sugimori, S

    1993-04-01

    This study tested the hypothesis that undesirable members are salient in a small group, while desirable members become salient in a larger group. One hundred and forty-five students were randomly assigned to twelve conditions, and read sentences desirably, undesirably, or neutrally describing each member of a college student club. The twelve clubs had one of three group sizes: 13, 39, or 52, and the proportion of the desirable or undesirable to the neutral was either 11:2 or 2:11, forming a three-way (3 x 2 x 2) factorial design. Twelve subjects each were asked to make proportion judgments and impression ratings. Results indicated that the proportion of undesirable members was overestimated when the group size was 13, showing negativity bias, whereas the proportion of desirable members was overestimated when the size was 52, displaying positivity bias. The size 39 showed neither positivity nor negativity bias. These results, along with those from impression ratings, suggested that salience of member desirability interacted with group size. It is argued that illusory correlation and group cognition studies may well take these effects into consideration.

  12. Giving good directions: order of mention reflects visual salience

    Directory of Open Access Journals (Sweden)

    Alasdair Daniel Francis Clarke

    2015-12-01

    In complex stimuli, there are many different possible ways to refer to a specified target. Previous studies have shown that when people are faced with such a task, the content of their referring expression reflects visual properties such as size, salience and clutter. Here, we extend these findings and present evidence that (i) the influence of visual perception on sentence construction goes beyond content selection and in part determines the order in which different objects are mentioned, and (ii) order of mention influences comprehension. Study 1 (a corpus study of reference productions) shows that when a speaker uses a relational description to mention a salient object, that object is treated as being in the common ground and is more likely to be mentioned first. Study 2 (a visual search study) asks participants to listen to referring expressions and find the specified target; in keeping with the above result, we find that search for easy-to-find targets is faster when the target is mentioned first, while search for harder-to-find targets is facilitated by mentioning the target later, after a landmark in a relational description. Our findings show that seemingly low-level and disparate mental modules like perception and sentence planning interact at a high level and in task-dependent ways.

  13. Feature-saliency and feedback-information interactively impact visual category learning

    Directory of Open Access Journals (Sweden)

    Rubi eHammer

    2015-02-01

    Visual category learning (VCL) involves detecting which features are most relevant for categorization. This requires attentional learning, which allows effectively redirecting attention to an object's features most relevant for categorization while also filtering out irrelevant features. When features relevant for categorization are not salient, VCL relies also on perceptual learning, which enables becoming more sensitive to subtle yet important differences between objects. Little is known about how attentional learning and perceptual learning interact when VCL relies on both processes at the same time. Here we tested this interaction. Participants performed VCL tasks that varied in feature saliency (low-saliency tasks that required perceptual learning vs. high-saliency tasks) and in feedback information (tasks with mid-information, moderately ambiguous feedback that increased attentional load vs. tasks with high-information, non-ambiguous feedback). Participants were required to learn to categorize novel stimuli by detecting the feature dimension relevant for categorization. We found that mid-information and high-information feedback were similarly effective for VCL in high-saliency tasks. This suggests that the increased attentional load associated with processing moderately ambiguous feedback does not compromise VCL when both the task-relevant feature and irrelevant features are salient. In low-saliency VCL tasks, performance improvement relied on slower perceptual learning, but when the feedback was highly informative participants were ultimately capable of reaching performances matching those observed in high-saliency VCL tasks. However, VCL was much compromised when features were of low saliency and the feedback was ambiguous. We suggest that this latter learning scenario is characterized by a 'cognitive loop paradox' in which two interdependent learning processes have to take place simultaneously.

  14. Issue Salience and the Domestic Legitimacy Demands of European Integration. The Cases of Britain and Germany

    Directory of Open Access Journals (Sweden)

    Henrike Viehrig

    2008-04-01

    The salience of European issues to the general public is a major determinant of the domestic legitimacy demands that governments face when they devise their European policies. The higher the salience of these issues, the more restrictive will be the legitimacy demands that governments have to meet on the domestic level. Whereas the domestic legitimacy of European policy can rest on a permissive consensus among the public in cases of low issue salience, it requires the electorate’s explicit endorsement in cases of high issue salience. Polling data from Britain and Germany show that the salience of European issues is clearly higher in Britain than in Germany. We thus conclude that British governments face tougher domestic legitimacy demands when formulating their European policies than German governments. This may contribute to accounting for both countries’ different approaches to the integration process: Germany as a role model of a pro-integrationist member state and, in contrast, Britain as the eternal 'awkward partner'.

  15. Empathy, Social Dominance Orientation, Mortality Salience, and Perceptions of a Criminal Defendant

    Directory of Open Access Journals (Sweden)

    Donna Crawley

    2016-02-01

    In two studies, participants completed measures of trait empathy and social dominance orientation, read a summary of a hit and run trial, and provided reactions to the case. In Study 1, the three randomly assigned conditions included a prompt to empathize with the victims, the empathy prompt with a mortality salience manipulation, and a control condition. Participants high in trait empathy were harsher in their judgments of the defendant than were low empathy participants, particularly after having read the mortality salience prompt. The results indicated that mortality salience had triggered personality differences. Participants high in social dominance assigned harsher sentences across conditions. Study 2 involved the same paradigm, but the prompts were presented on behalf of the defendant. Despite the pro-defendant slant, the pattern of results was similar to Study 1. Differences by trait empathy were more apparent among participants experiencing mortality salience, and social dominance was related to sentence choices. There were no indications in either study of mortality salience increasing bias against defendants in general or increasing racial bias.

  16. Abnormal salience signaling in schizophrenia: The role of integrative beta oscillations.

    Science.gov (United States)

    Liddle, Elizabeth B; Price, Darren; Palaniyappan, Lena; Brookes, Matthew J; Robson, Siân E; Hall, Emma L; Morris, Peter G; Liddle, Peter F

    2016-04-01

    Aberrant salience attribution and cerebral dysconnectivity both have strong evidential support as core dysfunctions in schizophrenia. Aberrant salience arising from an excess of dopamine activity has been implicated in delusions and hallucinations, exaggerating the significance of everyday occurrences and thus leading to perceptual distortions and delusional causal inferences. Meanwhile, abnormalities in key nodes of a salience brain network have been implicated in other characteristic symptoms, including the disorganization and impoverishment of mental activity. A substantial body of literature reports disruption to brain network connectivity in schizophrenia. Electrical oscillations likely play a key role in the coordination of brain activity at spatially remote sites, and evidence implicates beta band oscillations in long-range integrative processes. We used magnetoencephalography and a task designed to disambiguate responses to relevant from irrelevant stimuli to investigate beta oscillations in nodes of a network implicated in salience detection and previously shown to be structurally and functionally abnormal in schizophrenia. Healthy participants, as expected, produced an enhanced beta synchronization to behaviorally relevant, as compared to irrelevant, stimuli, while patients with schizophrenia showed the reverse pattern: a greater beta synchronization in response to irrelevant than to relevant stimuli. These findings not only support both the aberrant salience and disconnectivity hypotheses, but indicate a common mechanism that allows us to integrate them into a single framework for understanding schizophrenia in terms of disrupted recruitment of contextually appropriate brain networks.

  17. The scent of salience--is there olfactory-trigeminal conditioning in humans?

    Science.gov (United States)

    Moessnang, C; Pauly, K; Kellermann, T; Krämer, J; Finkelmeyer, A; Hummel, T; Siegel, S J; Schneider, F; Habel, U

    2013-08-15

    Pavlovian fear conditioning has been thoroughly studied in the visual, auditory and somatosensory domains, but evidence is scarce with regard to the chemosensory modality. Under the assumption that Pavlovian conditioning relies on the supra-modal mechanism of salience attribution, the present study set out to demonstrate the existence of chemosensory aversive conditioning in humans as a specific instance of salience attribution. fMRI was performed in 29 healthy subjects during a differential aversive conditioning paradigm. Two odors (rose, vanillin) served as conditioned stimuli (CS), one of which (CS+) was intermittently coupled with intranasally administered CO2. On the neural level, a robust differential response to the CS+ emerged in frontal, temporal, occipito-parietal and subcortical brain regions, including the amygdala. These changes were paralleled by the development of a CS+-specific connectivity profile of the anterior midcingulate cortex (aMCC), which is a key structure for processing salience information in order to guide adaptive response selection. Increased coupling could be found between key nodes of the salience network (anterior insula, neo-cerebellum) and sensorimotor areas, representing putative input and output structures of the aMCC for exerting adaptive motor control. In contrast, behavioral and skin conductance responses did not show significant effects of conditioning, which has been attributed to contingency unawareness. These findings imply substantial similarities of conditioning involving chemosensory and other sensory modalities, and suggest that salience attribution and adaptive control represent a general, modality-independent principle underlying Pavlovian conditioning.

  18. Parietal cortex integrates contextual and saliency signals during the encoding of natural scenes in working memory.

    Science.gov (United States)

    Santangelo, Valerio; Di Francesco, Simona Arianna; Mastroberardino, Serena; Macaluso, Emiliano

    2015-12-01

The brief presentation of a complex scene entails that only a few objects can be selected, processed in depth, and stored in memory. Both low-level sensory salience and high-level context-related factors (e.g., the conceptual match/mismatch between objects and scene context) contribute to this selection process, but how the interplay between these factors affects memory encoding is largely unexplored. Here, during fMRI we presented participants with pictures of everyday scenes. After a short retention interval, participants judged the position of a target object extracted from the initial scene. The target object could be either congruent or incongruent with the context of the scene, and could be located in a region of the image with maximal or minimal salience. Behaviourally, we found a reduced impact of saliency on visuospatial working memory performance when the target was out of context. Encoding-related fMRI results showed that context-congruent targets activated dorsoparietal regions, while context-incongruent targets de-activated the ventroparietal cortex. Saliency modulated activity in both dorsal and ventral regions, with larger context-related effects for salient targets. These findings demonstrate the joint contribution of knowledge-based and saliency-driven attention to memory encoding, highlighting a dissociation between dorsal and ventral parietal regions.

  19. ERP evidence on the interaction between information structure and emotional salience of words.

    Science.gov (United States)

    Wang, Lin; Bastiaansen, Marcel; Yang, Yufang; Hagoort, Peter

    2013-06-01

    Both emotional words and words focused by information structure can capture attention. This study examined the interplay between emotional salience and information structure in modulating attentional resources in the service of integrating emotional words into sentence context. Event-related potentials (ERPs) to affectively negative, neutral, and positive words, which were either focused or nonfocused in question-answer pairs, were evaluated during sentence comprehension. The results revealed an early negative effect (90-200 ms), a P2 effect, as well as an effect in the N400 time window, for both emotional salience and information structure. Moreover, an interaction between emotional salience and information structure occurred within the N400 time window over right posterior electrodes, showing that information structure influences the semantic integration only for neutral words, but not for emotional words. This might reflect the fact that the linguistic salience of emotional words can override the effect of information structure on the integration of words into context. The interaction provides evidence for attention-emotion interactions at a later stage of processing. In addition, the absence of interaction in the early time window suggests that the processing of emotional information is highly automatic and independent of context. The results suggest independent attention capture systems of emotional salience and information structure at the early stage but an interaction between them at a later stage, during the semantic integration of words.

  20. Image Processing Strategies Based on a Visual Saliency Model for Object Recognition Under Simulated Prosthetic Vision.

    Science.gov (United States)

    Wang, Jing; Li, Heng; Fu, Weizhen; Chen, Yao; Li, Liming; Lyu, Qing; Han, Tingting; Chai, Xinyu

    2016-01-01

Retinal prostheses have the potential to restore partial vision. Object recognition in scenes of daily life is one of the essential tasks for implant wearers. Because retinal prostheses currently provide only low-resolution visual percepts, it is important to investigate and apply image processing methods that convey more useful visual information to the wearers. We proposed two image processing strategies based on Itti's visual saliency map, region of interest (ROI) extraction, and image segmentation. Itti's saliency model generated a saliency map from the original image, in which salient regions were grouped into an ROI by fuzzy c-means clustering. GrabCut then generated a proto-object from the ROI-labeled image, which was recombined with the background and enhanced in two ways: 8-4 separated pixelization (8-4 SP) and background edge extraction (BEE). Results showed that both 8-4 SP and BEE yielded significantly higher recognition accuracy than direct pixelization (DP). Each saliency-based image processing strategy was subject to the performance of image segmentation. Under good and perfect segmentation conditions, BEE and 8-4 SP obtained noticeably higher recognition accuracy than DP; under the bad segmentation condition, only BEE boosted performance. The application of saliency-based image processing strategies was verified to be beneficial to object recognition in daily scenes under simulated prosthetic vision. These strategies may inform the design of image processing modules for future retinal prostheses and thus provide more benefit to patients.
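    The saliency-to-ROI-to-pixelization pipeline described in this abstract can be sketched in simplified form. This is a minimal illustration, not the authors' implementation: a box-blur center-surround contrast map stands in for Itti's full model, a quantile threshold stands in for fuzzy c-means clustering, GrabCut is omitted entirely, and all function names and parameters are hypothetical.

    ```python
    import numpy as np

    def box_blur(img, k):
        """Sliding-window mean with edge padding (crude stand-in for a Gaussian)."""
        p = k // 2
        padded = np.pad(img, p, mode="edge")
        out = np.zeros_like(img, dtype=float)
        for dy in range(k):
            for dx in range(k):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out / (k * k)

    def contrast_saliency(img, k=5):
        """Center-surround contrast: |pixel - local mean|."""
        return np.abs(img - box_blur(img, k))

    def extract_roi_mask(sal, frac=0.25):
        """Binary ROI mask keeping roughly the top `frac` most salient pixels
        (a stand-in for grouping salient regions by fuzzy c-means)."""
        thresh = np.quantile(sal, 1.0 - frac)
        return sal >= thresh

    def pixelize(img, mask, block=4):
        """Low-resolution rendering: average ROI pixels per block, zero elsewhere
        (a loose analogue of the pixelization step for prosthetic vision)."""
        h, w = img.shape
        out = np.zeros((h // block, w // block))
        for i in range(h // block):
            for j in range(w // block):
                b = img[i * block:(i + 1) * block, j * block:(j + 1) * block]
                m = mask[i * block:(i + 1) * block, j * block:(j + 1) * block]
                if m.any():
                    out[i, j] = b[m].mean()
        return out
    ```

    In a real system the ROI mask would drive GrabCut segmentation before the final rendering; here the mask is applied directly to keep the sketch short.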

  1. Quantifying individual variation in the propensity to attribute incentive salience to reward cues.

    Science.gov (United States)

    Meyer, Paul J; Lovic, Vedran; Saunders, Benjamin T; Yager, Lindsay M; Flagel, Shelly B; Morrow, Jonathan D; Robinson, Terry E

    2012-01-01

    If reward-associated cues acquire the properties of incentive stimuli they can come to powerfully control behavior, and potentially promote maladaptive behavior. Pavlovian incentive stimuli are defined as stimuli that have three fundamental properties: they are attractive, they are themselves desired, and they can spur instrumental actions. We have found, however, that there is considerable individual variation in the extent to which animals attribute Pavlovian incentive motivational properties ("incentive salience") to reward cues. The purpose of this paper was to develop criteria for identifying and classifying individuals based on their propensity to attribute incentive salience to reward cues. To do this, we conducted a meta-analysis of a large sample of rats (N = 1,878) subjected to a classic Pavlovian conditioning procedure. We then used the propensity of animals to approach a cue predictive of reward (one index of the extent to which the cue was attributed with incentive salience), to characterize two behavioral phenotypes in this population: animals that approached the cue ("sign-trackers") vs. others that approached the location of reward delivery ("goal-trackers"). This variation in Pavlovian approach behavior predicted other behavioral indices of the propensity to attribute incentive salience to reward cues. Thus, the procedures reported here should be useful for making comparisons across studies and for assessing individual variation in incentive salience attribution in small samples of the population, or even for classifying single animals.

  2. Quantifying individual variation in the propensity to attribute incentive salience to reward cues.

    Directory of Open Access Journals (Sweden)

    Paul J Meyer

Full Text Available If reward-associated cues acquire the properties of incentive stimuli they can come to powerfully control behavior, and potentially promote maladaptive behavior. Pavlovian incentive stimuli are defined as stimuli that have three fundamental properties: they are attractive, they are themselves desired, and they can spur instrumental actions. We have found, however, that there is considerable individual variation in the extent to which animals attribute Pavlovian incentive motivational properties ("incentive salience") to reward cues. The purpose of this paper was to develop criteria for identifying and classifying individuals based on their propensity to attribute incentive salience to reward cues. To do this, we conducted a meta-analysis of a large sample of rats (N = 1,878) subjected to a classic Pavlovian conditioning procedure. We then used the propensity of animals to approach a cue predictive of reward (one index of the extent to which the cue was attributed with incentive salience) to characterize two behavioral phenotypes in this population: animals that approached the cue ("sign-trackers") vs. others that approached the location of reward delivery ("goal-trackers"). This variation in Pavlovian approach behavior predicted other behavioral indices of the propensity to attribute incentive salience to reward cues. Thus, the procedures reported here should be useful for making comparisons across studies and for assessing individual variation in incentive salience attribution in small samples of the population, or even for classifying single animals.

  3. The impact of salience and visual working memory on the monitoring and control of saccadic behavior: An eye-tracking and EEG study.

    Science.gov (United States)

    Weaver, Matthew D; Hickey, Clayton; van Zoest, Wieske

    2017-01-10

    In a concurrent eye-tracking and EEG study, we investigated the impact of salience on the monitoring and control of eye movement behavior and the role of visual working memory (VWM) capacity in mediating this effect. Participants made eye movements to a unique line-segment target embedded in a search display also containing a unique distractor. Target and distractor salience was manipulated by varying degree of orientation offset from a homogenous background. VWM capacity was measured using a change-detection task. Results showed greater likelihood of incorrect saccades when the distractor was relatively more salient than when the target was salient. Misdirected saccades to salient distractors were strongly represented in the error-monitoring system by rapid and robust error-related negativity (ERN), which predicted a significant adjustment of oculomotor behavior. Misdirected saccades to less-salient distractors, while arguably representing larger errors, were not as well detected or utilized by the error/performance-monitoring system. This system was instead better engaged in tasks requiring greater cognitive control and by individuals with higher VWM capacity. Our findings show that relative salience of task-relevant and task-irrelevant stimuli can define situations where an increase in cognitive control is necessary, with individual differences in VWM capacity explaining significant variance in the degree of monitoring and control of goal-directed eye movement behavior. The present study supports a conflict-monitoring interpretation of the ERN, whereby the level of competition between different responses, and the stimuli that define these responses, was more important in the generation of an enhanced ERN than the error commission itself.

  4. Search for the best matching ultrasound frame based on spatial and temporal saliencies

    Science.gov (United States)

    Feng, Shaolei; Xiang, Xiaoyan; Zhou, S. Kevin; Lazebnik, Roee

    2011-03-01

In this paper we present a generic system for fast and accurate retrieval of the best matching frame from an ultrasound video clip given a reference ultrasound image. It is challenging to build a generic system that handles various lesion types without any prior information about the anatomic structures in the ultrasound data. We propose to solve the problem using both spatial and temporal saliency maps calculated from the ultrasound images, which implicitly analyze the semantics of the images and emphasize the anatomic regions of interest. The spatial saliency map describes the importance of the pixels of the reference image, while the temporal saliency map further distinguishes subtle changes of the anatomic structure across a video. A hierarchical comparison scheme based on a novel similarity measure is employed to locate the most similar frames quickly and precisely. Our system ensures robustness, accuracy and efficiency. Experiments show that it achieves more accurate results at high speed.
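    The retrieval scheme can be illustrated with a toy sketch, assuming grayscale frames as numpy arrays. The saliency-weighted distance and the coarse-then-refine scan below are loose analogues of the paper's similarity measure and hierarchical comparison, not its actual algorithm; the function names and the `stride` parameter are hypothetical.

    ```python
    import numpy as np

    def saliency_weighted_similarity(ref, frame, sal):
        """Similarity emphasizing salient reference pixels: negative
        saliency-weighted mean absolute difference (higher = more similar)."""
        w = sal / (sal.sum() + 1e-12)
        return -np.sum(w * np.abs(ref - frame))

    def best_matching_frame(ref, frames, sal, stride=4):
        """Hierarchical retrieval sketch: scan every `stride`-th frame first,
        then refine exhaustively around the coarse winner."""
        coarse = range(0, len(frames), stride)
        best = max(coarse, key=lambda i: saliency_weighted_similarity(ref, frames[i], sal))
        lo, hi = max(0, best - stride + 1), min(len(frames), best + stride)
        return max(range(lo, hi), key=lambda i: saliency_weighted_similarity(ref, frames[i], sal))
    ```

    The coarse pass bounds the number of full comparisons, which is the point of the paper's hierarchical scheme; the refine pass restores frame-level precision.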

  5. Aircraft Detection in High-Resolution SAR Images Based on a Gradient Textural Saliency Map

    Science.gov (United States)

    Tan, Yihua; Li, Qingyun; Li, Yansheng; Tian, Jinwen

    2015-01-01

This paper proposes a new automatic and adaptive aircraft target detection algorithm for high-resolution synthetic aperture radar (SAR) images of airports. The proposed method is based on a gradient textural saliency map under the contextual cues of the apron area. First, candidate regions in which aircraft may be present are detected from the apron area. Second, a directional local gradient distribution detector is used to obtain a gradient textural saliency map over the candidate regions. Finally, targets are detected by segmenting the saliency map with a CFAR-type algorithm. Real high-resolution airborne SAR image data are used to verify the proposed algorithm. The results demonstrate that the algorithm can detect aircraft targets quickly and accurately while decreasing the false alarm rate. PMID:26378543
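    The CFAR-type segmentation step mentioned in this abstract can be sketched as a cell-averaging detector applied to a saliency map. This is a generic textbook-style CA-CFAR, not the paper's specific variant; the `guard`, `train`, and `k` parameters are hypothetical.

    ```python
    import numpy as np

    def cfar_detect(sal, guard=1, train=3, k=3.0):
        """Cell-averaging CFAR sketch over a saliency map: a pixel is a
        detection when it exceeds mean + k*std of a surrounding training
        ring, with an inner guard band excluded from the statistics."""
        h, w = sal.shape
        r = guard + train
        det = np.zeros((h, w), dtype=bool)
        for y in range(r, h - r):
            for x in range(r, w - r):
                win = sal[y - r:y + r + 1, x - r:x + r + 1].copy()
                # blank out the guard region (and the cell under test)
                win[train:train + 2 * guard + 1, train:train + 2 * guard + 1] = np.nan
                mu, sd = np.nanmean(win), np.nanstd(win)
                det[y, x] = sal[y, x] > mu + k * sd
        return det
    ```

    Because the threshold adapts to the local background statistics, the false alarm rate stays roughly constant across clutter regions of different intensity, which is what makes CFAR suited to SAR imagery.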

  6. Image Fusion Based on Nonsubsampled Contourlet Transform and Saliency-Motivated Pulse Coupled Neural Networks

    Directory of Open Access Journals (Sweden)

    Liang Xu

    2013-01-01

Full Text Available In the nonsubsampled contourlet transform (NSCT) domain, a novel image fusion algorithm based on the visual attention model and pulse coupled neural networks (PCNNs) is proposed. For the fusion of high-pass subbands in the NSCT domain, a saliency-motivated PCNN model is introduced. The main idea is that high-pass subband coefficients are combined with their visual saliency maps as input to motivate the PCNN. Coefficients with large firing times are employed as the fused high-pass subband coefficients. Low-pass subband coefficients are merged using a weighted fusion rule based on the firing times of the PCNN. The fused image contains abundant detail from the source images and effectively preserves the saliency structure while enhancing image contrast. The algorithm preserves the completeness and sharpness of object regions. The fused image is more natural and satisfies the requirements of the human visual system (HVS). Experiments demonstrate that the proposed algorithm yields better performance.
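    The firing-times fusion rule can be sketched with a minimal PCNN. This is a heavily simplified stand-in for the paper's model: the NSCT decomposition is omitted, the neuron dynamics use hypothetical constants (`beta`, `vtheta`, `alpha`), and linking is a plain 4-neighbor average of previously fired neurons.

    ```python
    import numpy as np

    def pcnn_firing_times(stim, n_iter=10, beta=0.5, vtheta=20.0, alpha=0.2):
        """Minimal PCNN sketch: feeding input is the stimulus, linking is the
        mean of fired 4-neighbors, and the dynamic threshold jumps by vtheta
        after a fire and decays exponentially otherwise. Returns per-neuron
        firing counts over n_iter iterations."""
        h, w = stim.shape
        theta = np.ones((h, w))          # dynamic threshold
        fired = np.zeros((h, w))
        times = np.zeros((h, w))         # accumulated firing counts
        for _ in range(n_iter):
            link = np.zeros((h, w))      # sum of fired 4-neighbors
            link[1:, :] += fired[:-1, :]
            link[:-1, :] += fired[1:, :]
            link[:, 1:] += fired[:, :-1]
            link[:, :-1] += fired[:, 1:]
            u = stim * (1.0 + beta * link / 4.0)   # internal activity
            fired = (u > theta).astype(float)
            times += fired
            theta = theta * np.exp(-alpha) + vtheta * fired
        return times

    def fuse_by_firing_times(a, b, sal_a, sal_b):
        """Saliency-motivated rule from the abstract, in miniature: motivate the
        PCNN with coefficient magnitude weighted by a saliency map, and keep
        the coefficient whose neuron fired more often."""
        ta = pcnn_firing_times(np.abs(a) * sal_a)
        tb = pcnn_firing_times(np.abs(b) * sal_b)
        return np.where(ta >= tb, a, b)
    ```

    Larger, more salient coefficients drive their neurons over the threshold more often, so the firing count acts as an activity measure that the fusion rule can compare per position.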

  7. When death is not a problem: Regulating implicit negative affect under mortality salience.

    Science.gov (United States)

    Lüdecke, Christina; Baumann, Nicola

    2015-12-01

Terror management theory assumes that death arouses existential anxiety in humans, which is suppressed in focal attention. Whereas most studies provide indirect evidence for negative affect under mortality salience by showing cultural worldview defenses and self-esteem strivings, there is little direct evidence for implicit negative affect under mortality salience. In the present study, we assume that this implicit affective reaction towards death depends on people's ability to self-regulate negative affect, as assessed by the personality dimension of action versus state orientation. Consistent with our expectations, action-oriented participants judged artificial words to express less negative affect under mortality salience compared to control conditions, whereas state-oriented participants showed the reversed pattern.

  8. Salience and conflict of work and family roles among employed men and women.

    Science.gov (United States)

    Knežević, Irena; Gregov, Ljiljana; Šimunić, Ana

    2016-06-01

The aim of this research was to determine the salience of work and family roles and to study the connection between role salience and the interference of different types of roles among working men and women. Self-assessment measurement scales were applied. The research involved 206 participants: 103 employed married couples from different regions of Croatia. The results show that roles closely connected to family are considered the most salient. However, men are mostly dedicated behaviourally to the role of a worker. Women dedicate more time and energy to the roles of a spouse, a parent, and a family member, whereas men are more oriented towards the leisure role. The highest level of conflict was perceived for work disturbing leisure. Gender differences appeared only for work-to-marriage conflict, with men reporting higher conflict than women. The research found only some low correlations between the salience of different types of roles and work-family conflict.

  9. Work demands and resources and the work-family interface : Testing a salience model on German service sector employees

    NARCIS (Netherlands)

Beham, Barbara; Drobnič, Sonja; Präg, Patrick

    2011-01-01

The present study tested an extended version of Voydanoff's "differential salience-comparable salience model" in a sample of German service workers. Our findings partly support the model in a different national/cultural context but also yielded some divergent findings with respect to within-domain resources.

  10. Toward isolating the role of dopamine in the acquisition of incentive salience attribution.

    Science.gov (United States)

    Chow, Jonathan J; Nickell, Justin R; Darna, Mahesh; Beckmann, Joshua S

    2016-10-01

    Stimulus-reward learning has been heavily linked to the reward-prediction error learning hypothesis and dopaminergic function. However, some evidence suggests dopaminergic function may not strictly underlie reward-prediction error learning, but may be specific to incentive salience attribution. Utilizing a Pavlovian conditioned approach procedure consisting of two stimuli that were equally reward-predictive (both undergoing reward-prediction error learning) but functionally distinct in regard to incentive salience (levers that elicited sign-tracking and tones that elicited goal-tracking), we tested the differential role of D1 and D2 dopamine receptors and nucleus accumbens dopamine in the acquisition of sign- and goal-tracking behavior and their associated conditioned reinforcing value within individuals. Overall, the results revealed that both D1 and D2 inhibition disrupted performance of sign- and goal-tracking. However, D1 inhibition specifically prevented the acquisition of sign-tracking to a lever, instead promoting goal-tracking and decreasing its conditioned reinforcing value, while neither D1 nor D2 signaling was required for goal-tracking in response to a tone. Likewise, nucleus accumbens dopaminergic lesions disrupted acquisition of sign-tracking to a lever, while leaving goal-tracking in response to a tone unaffected. Collectively, these results are the first evidence of an intraindividual dissociation of dopaminergic function in incentive salience attribution from reward-prediction error learning, indicating that incentive salience, reward-prediction error, and their associated dopaminergic signaling exist within individuals and are stimulus-specific. Thus, individual differences in incentive salience attribution may be reflective of a differential balance in dopaminergic function that may bias toward the attribution of incentive salience, relative to reward-prediction error learning only.

  11. Perspectives on the Salience and Magnitude of Dam Impacts for Hydro Development Scenarios in China

    Directory of Open Access Journals (Sweden)

    Desiree Tullos

    2010-06-01

    Survey results indicate differences in the perceived salience and magnitude of impacts across both expert groups and dam scenarios. Furthermore, surveys indicate that stakeholder perceptions changed as the information provided regarding dam impacts became more specific, suggesting that stakeholder evaluation may be influenced by quality of information. Finally, qualitative comments from the survey reflect some of the challenges of interdisciplinary dam assessment, including cross-disciplinary cooperation, data standardisation and weighting, and the distribution and potential mitigation of impacts. Given the complexity of data and perceptions around dam impacts, decision-support tools that integrate the objective magnitude and perceived salience of impacts are required urgently.

  12. "Looking Downward": The Value, Approach and Objectives of Research on Folk Customs of the Imperial Examination: A Bottom-up Perspective

    Institute of Scientific and Technical Information of China (English)

    杜春燕

    2015-01-01

As an important area within imperial examination studies, research on folk customs of the Imperial Examination adopts a bottom-up perspective to examine the examination system, its activities and customs, and its social influences, in order to deepen understanding of the cultural characteristics and value of the Imperial Examination. Its interdisciplinary nature and bottom-up approach entail that it has to draw on historical anthropology, sociology, folklore, education science, and linguistics. Research on folk customs of the Imperial Examination can broaden the academic vision, tap folk historical materials, and deepen and enrich the scholarship of imperial examination studies.

  13. Advert saliency distracts children's visual attention during task-oriented internet use

    Directory of Open Access Journals (Sweden)

Nils Holmberg

    2014-02-01

Full Text Available The general research question of the present study was to assess the impact of visually salient online adverts on children's task-oriented internet use. To answer this question, an experimental study was constructed in which 9-year-old and 12-year-old Swedish children were asked to solve a number of tasks while interacting with a mockup website. In each trial, web adverts in several saliency conditions were presented. By measuring both children's task accuracy and the visual processing involved in solving these tasks, this study allows us to infer how two types of visual saliency affect children's attentional behavior, and whether such behavioral effects also impact their task performance. Analyses show that low-level visual features and task relevance in online adverts have different effects on performance measures and process measures, respectively. Whereas task performance is stable with regard to several advert saliency conditions, a marked effect is seen in children's gaze behavior. On the other hand, task performance is shown to be more sensitive to individual differences such as age, gender and level of gaze control. The results provide evidence of cognitive and behavioral distraction effects in children's task-oriented internet use caused by visual saliency in online adverts. The experiment suggests that children are to some extent able to compensate for behavioral effects caused by distracting visual stimuli when solving prospective memory tasks. Suggestions are given for further research into the interdisciplinary area between media research and cognitive science.

  14. Advert saliency distracts children's visual attention during task-oriented internet use.

    Science.gov (United States)

    Holmberg, Nils; Sandberg, Helena; Holmqvist, Kenneth

    2014-01-01

The general research question of the present study was to assess the impact of visually salient online adverts on children's task-oriented internet use. To answer this question, an experimental study was constructed in which 9- and 12-year-old Swedish children were asked to solve a number of tasks while interacting with a mockup website. In each trial, web adverts in several saliency conditions were presented. By measuring both children's task accuracy and the visual processing involved in solving these tasks, this study allows us to infer how two types of visual saliency affect children's attentional behavior, and whether such behavioral effects also impact their task performance. Analyses show that low-level visual features and task relevance in online adverts have different effects on performance measures and process measures, respectively. Whereas task performance is stable with regard to several advert saliency conditions, a marked effect is seen in children's gaze behavior. On the other hand, task performance is shown to be more sensitive to individual differences such as age, gender and level of gaze control. The results provide evidence of cognitive and behavioral distraction effects in children's task-oriented internet use caused by visual saliency in online adverts. The experiment suggests that children are to some extent able to compensate for behavioral effects caused by distracting visual stimuli when solving prospective memory tasks. Suggestions are given for further research into the interdisciplinary area between media research and cognitive science.

  15. Basal forebrain motivational salience signal enhances cortical processing and decision speed

    Directory of Open Access Journals (Sweden)

    Sylvina M Raver

    2015-10-01

Full Text Available The basal forebrain (BF) contains major projections to the cerebral cortex, and plays a well-documented role in arousal, attention, decision-making, and in modulating cortical activity. BF neuronal degeneration is an early event in Alzheimer's disease and dementias, and occurs in normal cognitive aging. While the BF is best known for its population of cortically projecting cholinergic neurons, the region is anatomically and neurochemically diverse, and also contains prominent populations of non-cholinergic projection neurons. In recent years, increasing attention has been dedicated to these non-cholinergic BF neurons in order to better understand how non-cholinergic BF circuits control cortical processing and behavioral performance. In this review, we focus on a unique population of putative non-cholinergic BF neurons that encodes the motivational salience of stimuli with a robust ensemble bursting response. We review recent studies that describe the specific physiological and functional characteristics of these BF salience-encoding neurons in behaving animals. These studies support the unifying hypothesis whereby BF salience-encoding neurons act as a gain modulation mechanism of the decision-making process to enhance cortical processing of behaviorally relevant stimuli, and thereby facilitate faster and more precise behavioral responses. This function of BF salience-encoding neurons represents a critical component in determining which incoming stimuli warrant an animal's attention, and is therefore a fundamental and early requirement of behavioral flexibility.

  16. Learning-Based Visual Saliency Model for Detecting Diabetic Macular Edema in Retinal Image.

    Science.gov (United States)

    Zou, Xiaochun; Zhao, Xinbo; Yang, Yongjia; Li, Na

    2016-01-01

This paper presents a learning-based visual saliency model for detecting diagnostic diabetic macular edema (DME) regions of interest (RoIs) in retinal images. The method models the cognitive process of visual selection of relevant regions that arises during an ophthalmologist's image examination. To record the process, we collected eye-tracking data from 10 ophthalmologists on 100 images and used this database as training and testing examples. Based on this analysis, two properties (Feature Property and Position Property) can be derived and combined by a simple intersection operation to obtain a saliency map. The Feature Property is implemented with a support vector machine (SVM) using the diagnosis as supervisor; the Position Property is implemented by statistical analysis of the training samples. This technique is able to learn the preferences of ophthalmologists' visual behavior while simultaneously considering feature uniqueness. The method was evaluated using three popular saliency model evaluation scores (AUC, EMD, and SS) and three quality measurements (classical sensitivity, specificity, and Youden's J statistic). The proposed method outperforms 8 state-of-the-art saliency models and 3 salient region detection approaches devised for natural images. Furthermore, our model successfully detects the DME RoIs in retinal images without sophisticated image processing such as region segmentation.
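    The intersection of the two properties can be sketched as follows. This is an illustrative toy, not the paper's pipeline: the classifier-derived Feature Property is assumed to arrive as a per-pixel probability map, the Position Property is modeled as a Gaussian mixture over (hypothetical) pooled fixation locations, and the threshold is arbitrary.

    ```python
    import numpy as np

    def position_prior(shape, centers, sigma=5.0):
        """Position Property sketch: a normalized Gaussian mixture over
        fixation locations pooled from hypothetical expert eye-tracking data."""
        h, w = shape
        yy, xx = np.mgrid[0:h, 0:w]
        prior = np.zeros(shape)
        for cy, cx in centers:
            prior += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        return prior / prior.max()

    def combine_properties(feature_map, prior, thresh=0.5):
        """Intersection rule from the abstract: a pixel counts as salient only
        where both the Feature Property and the Position Property agree."""
        return (feature_map >= thresh) & (prior >= thresh)
    ```

    The intersection makes the two cues act as mutual gates: a region that looks edematous but lies where experts never fixate is suppressed, and vice versa.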

  17. A checklist for model credibility, salience, and legitimacy to improve information transfer in environmental policy assessments

    NARCIS (Netherlands)

van Voorn, G.A.K.; Verburg, R.W.; Kunseler, E.-M.; Vader, J.; Janssen, P.H.M.

    2016-01-01

    Modelers involved in environmental policy assessments are commonly confronted with the lack of uptake of model output by policy actors. Actors have different expectations of models, condensed into three quality criteria: credibility, salience, and legitimacy. The fulfilment of quality criteria is al

  18. A Comparison between Element Salience versus Context as Item Difficulty Factors in Raven's Matrices

    Science.gov (United States)

    Perez-Salas, Claudia P.; Streiner, David L.; Roberts, Maxwell J.

    2012-01-01

    The nature of contextual facilitation effects for items derived from Raven's Progressive Matrices was investigated in two experiments. For these, the original matrices were modified, creating either abstract versions with high element salience, or versions which comprised realistic entities set in familiar contexts. In order to replicate and…

  19. Fusion of infrared and visible images based on saliency scale-space in frequency domain

    Science.gov (United States)

    Chen, Yanfei; Sang, Nong; Dan, Zhiping

    2015-12-01

A fusion algorithm for infrared and visible images based on saliency scale-space in the frequency domain is proposed. Human attention is directed towards salient targets, which convey the most important information in the image. For the given registered infrared and visible images, visual features are first extracted to obtain the input hypercomplex matrix. Second, the Hypercomplex Fourier Transform (HFT) is used to obtain the salient regions of the infrared and visible images respectively: the amplitude spectrum of the input hypercomplex matrix is convolved with a low-pass Gaussian kernel of an appropriate scale, which is equivalent to an image saliency detector. The saliency maps are obtained by reconstructing the 2D signal using the original phase and the amplitude spectrum, filtered at a scale selected by minimizing saliency-map entropy. Third, the salient regions are fused with adaptive weighting fusion rules, and the non-salient regions are fused with a rule based on region energy (RE) and region sharpness (RS); the fused image is then obtained. Experimental results show that the presented algorithm preserves rich spectral information from the visible image and effectively captures thermal target information at different scales of the infrared image.
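    The frequency-domain saliency step can be sketched for a single grayscale channel. This is a simplification of the hypercomplex formulation: the amplitude spectrum is smoothed at several scales, the map is reconstructed with the original phase, and the scale with minimum saliency-map entropy is kept, as the abstract describes; the smoothing helpers and scale set are assumptions.

    ```python
    import numpy as np

    def _gauss_kernel_1d(sigma, radius):
        x = np.arange(-radius, radius + 1)
        k = np.exp(-x ** 2 / (2 * sigma ** 2))
        return k / k.sum()

    def _smooth2d(img, sigma):
        """Separable Gaussian smoothing with edge padding."""
        r = max(1, int(3 * sigma))
        k = _gauss_kernel_1d(sigma, r)
        p = np.pad(img, ((r, r), (0, 0)), mode="edge")
        tmp = np.array([np.convolve(p[:, j], k, mode="valid")
                        for j in range(img.shape[1])]).T
        p = np.pad(tmp, ((0, 0), (r, r)), mode="edge")
        return np.array([np.convolve(p[i, :], k, mode="valid")
                         for i in range(tmp.shape[0])])

    def spectral_saliency(img, scales=(1.0, 2.0, 4.0, 8.0)):
        """Single-channel analogue of the HFT detector: smooth the amplitude
        spectrum at several scales, reconstruct with the original phase, and
        keep the scale whose saliency map has minimum entropy."""
        F = np.fft.fft2(img)
        amp, phase = np.abs(F), np.angle(F)
        best, best_ent = None, np.inf
        for s in scales:
            smooth_amp = _smooth2d(amp, s)
            sal = np.abs(np.fft.ifft2(smooth_amp * np.exp(1j * phase))) ** 2
            sal = _smooth2d(sal, 2.0)           # post-smoothing of the map
            p = sal.ravel() / (sal.sum() + 1e-12)
            ent = -np.sum(p * np.log(p + 1e-12))
            if ent < best_ent:
                best, best_ent = sal, ent
        return best / (best.max() + 1e-12)
    ```

    Suppressing spikes in the amplitude spectrum attenuates repeated (non-salient) patterns while preserving the phase that carries object locations, which is why reconstruction highlights the rare, salient structure.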

  20. Element Salience as a Predictor of Item Difficulty for Raven's Progressive Matrices

    Science.gov (United States)

    Meo, Maria; Roberts, Maxwell J.; Marucci, Francesco S.

    2007-01-01

    Raven's Progressive Matrices is a frequently used intelligence test, and it has been suggested that the major determinant of difficulty for each item is its numbers of elements and rules, and its rule complexity. The current study investigated another potential source of difficulty--element salience--items are harder where their elements are…

  1. Researcher Effects on Mortality Salience Research: A Meta-Analytic Moderator Analysis

    Science.gov (United States)

    Yen, Chih-Long; Cheng, Chung-Ping

    2013-01-01

    A recent meta-analysis of 164 terror management theory (TMT) papers indicated that mortality salience (MS) yields substantial effects (r = 0.35) on worldview and self-esteem-related dependent variables (B. L. Burke, A. Martens, & E. H. Faucher, 2010). This study reanalyzed the data to explore the researcher effects of TMT. By cluster-analyzing…

  2. Mortality Salience and Morality: Thinking about Death Makes People Less Utilitarian

    Science.gov (United States)

    Tremoliere, Bastien; De Neys, Wim; Bonnefon, Jean-Francois

    2012-01-01

    According to the dual-process model of moral judgment, utilitarian responses to moral conflict draw on limited cognitive resources. Terror Management Theory, in parallel, postulates that mortality salience mobilizes these resources to suppress thoughts of death out of focal attention. Consequently, we predicted that individuals under mortality…

  3. The Relationship of Liking and Choice to Attributes of an Alternative and Their Saliency

    Science.gov (United States)

    Farley, John U.; And Others

    1974-01-01

    Evaluation of attributes of a subcompact car were combined in linear regressions predicting liking and purchase intention. Of two forms--raw scales and scales weighted by the importance attached to each attribute by each subject--unweighted evaluations proved more consistent and important predictors than those weighted by their saliency. (Author)

  4. Obstacles Regions 3D-Perception Method for Mobile Robots Based on Visual Saliency

    Directory of Open Access Journals (Sweden)

    Tao Xu

    2015-01-01

    Full Text Available A novel 3D perception method for obstacle regions, intended for mobile robots in indoor environments and based on Improved Salient Region Extraction (ISRE), is proposed. The model acquires the original image with a Kinect sensor and then derives an Original Salience Map (OSM) and an Intensity Feature Map (IFM) from it using a salience filtering algorithm. The IFM is used as the input neuron stimulus of a PCNN. To make the ignition range more exact, the PCNN ignition pulse input is further refined: a point-wise multiplication is taken between the PCNN internal neuron activity and the binarized salience image of the OSM, which determines the final ignition pulse input. Salient-region binarization and extraction are then completed by multiple iterations of the improved PCNN. Finally, the binarized area is mapped onto the depth map obtained from the Kinect sensor, allowing the mobile robot to localize obstacles. The method was evaluated on a mobile robot (Pioneer 3-DX), and the experimental results demonstrated the feasibility and effectiveness of the proposed algorithm.
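A minimal sketch of the gating idea described above: a simplified pulse-coupled neural network (PCNN) whose internal activity is point-multiplied by the binarized OSM before thresholding. The parameter values, the toroidal 8-neighborhood linking, and the threshold dynamics are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def neighbor_sum(Y):
    # 8-neighborhood linking input via toroidal shifts (a simplification
    # of a proper zero-padded convolution).
    s = np.zeros_like(Y, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            s += np.roll(np.roll(Y, dy, axis=0), dx, axis=1)
    return s

def pcnn_segment(ifm, osm_binary, beta=0.2, v_theta=20.0, decay=0.7, iters=10):
    """Simplified PCNN: the intensity feature map (ifm) feeds each neuron,
    internal activity is gated by the binarized OSM, and a neuron fires
    when its gated activity exceeds a decaying threshold."""
    theta = np.full(ifm.shape, ifm.max())   # dynamic firing threshold
    Y = np.zeros_like(ifm)                  # pulse output
    fired = np.zeros_like(ifm, dtype=bool)  # accumulated ignition map
    for _ in range(iters):
        L = neighbor_sum(Y)                 # linking from neighboring pulses
        U = ifm * (1.0 + beta * L)          # internal activity
        U = U * osm_binary                  # gate by the binarized OSM
        Y = (U > theta).astype(float)
        fired |= Y.astype(bool)
        theta = decay * theta + v_theta * Y # raise threshold where fired
    return fired
```

The accumulated `fired` map plays the role of the binarized salient region that would then be projected onto the Kinect depth map.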

  5. Hacking Health: Bottom-up Innovation for Healthcare

    Directory of Open Access Journals (Sweden)

    Jeeshan Chowdhury

    2012-07-01

    Full Text Available Healthcare is not sustainable and still functions with outdated technology (e.g., pagers, paper records. Top-down approaches by governments and corporations have failed to deliver digital technologies to modernize healthcare. Disruptive innovation must come from the ground up by bridging the gap between front-line health experts and innovators in the latest web and mobile technology. Hacking Health is a hackathon that is focused on social innovation more than technical innovation. Our approach to improve healthcare is to pair technological innovators with healthcare experts to build realistic, human-centric solutions to front-line healthcare problems.

  6. The Interplay of Top-Down and Bottom-Up

    DEFF Research Database (Denmark)

    Winkler, Till; Brown, Carol V.; Ozturk, Pinar

    2014-01-01

    positions before the HITECH funding. Based on our analyses of interview data collected from 34 leaders at the state, HIO, and provider level, our objective is to develop a model of contextual and operational factors that influence the sustainability of HIOs. The implications of our findings for other...

  7. Bottom-up Assembly of Engineered Protein Fibers

    Science.gov (United States)

    2015-02-15

    magnetite templating peptide, CMms6, was attached. Alkyne-functionalized CMms6 was attached to the AHA-bearing proteins through a copper-catalyzed click chemistry reaction and monitored molecular weight

  8. The nano revolution: bottom-up manufacturing with biomolecules

    Science.gov (United States)

    Li, Yi-Fen; Li, Jing; Paavola, Chad; Kagawa, Hiromi; Chan, Suzanne L.; Trent, Jonathan D.

    2007-05-01

    As the nano-scale becomes a focus for engineering electronic, photonic, medical, and other important devices, an unprecedented role for biomolecules is emerging to address one of the most formidable problems in nano-manufacturing: precise manipulation and organization of matter on the nano-scale. Biomolecules are a solution to this problem because they themselves are nanoscale particles with intrinsic properties that allow them to precisely self-assemble and self-organize into the amazing diversity of structures observed in nature. Indeed, there is ample evidence that the combination of molecular recognition and self-assembly combined with mutation, selection, and replication have the potential to create structures that could truly revolutionize manufacturing processes in many sectors of industry. Genetically engineered biomolecules are already being used to make the next generation of nano-scale templates, nano-detailed masks, and molecular scaffolds for the future manufacturing of electronic devices, medical diagnostic tools, and chemical engineering interfaces. Here we present an example of this type of technology by showing how a protein can be genetically modified to form a new structure and coated with metal to lead the way to producing "nano-wires," which may ultimately become the basis for self-assembled circuitry.

  9. QUALITY FUNCTION DEPLOYMENT IN BOTTOM UP PROCESS FOR DESIGN REUSE

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    To deal with a bottomup process model for design reuses a specific extended house of quality(EHOQ)is proposedTwo kinds of suppo rted functions,basic supported functions and new supported functions,are defined Two processes to determine two kinds of functions are presentedA kind of EHO Q matrix for a company is given and its management steps are studied

  10. Bottom up design of nanoparticles for anti-cancer diapeutics

    DEFF Research Database (Denmark)

    Needham, David; Arslanagic, Amina; Glud, Kasper

    2016-01-01

     nm particle would dissolve in less than a second! And so the nanoparticle design requires a highly water-insoluble drug, and a tight, encapsulating, impermeable lipid:cholesterol monolayer. While the "Y" junction can be used to mix an ethanolic solution with anti-solvent, we find that a "no...

  11. Bottom-up regulation of capelin, a keystone forage species.

    Directory of Open Access Journals (Sweden)

    Alejandro D Buren

    Full Text Available The Northwest Atlantic marine ecosystem off Newfoundland and Labrador, Canada, has been commercially exploited for centuries. Although periodic declines in various important commercial fish stocks have been observed in this ecosystem, the most drastic changes took place in the early 1990s when the ecosystem structure changed abruptly and has not returned to its previous configuration. In the Northwest Atlantic, food web dynamics are determined largely by capelin (Mallotus villosus, the focal forage species which links primary and secondary producers with the higher trophic levels. Notwithstanding the importance of capelin, the factors that influence its population dynamics have remained elusive. We found that a regime shift and ocean climate, acting via food availability, have discernible impacts on the regulation of this population. Capelin biomass and timing of spawning were well explained by a regime shift and seasonal sea ice dynamics, a key determinant of the pelagic spring bloom. Our findings are important for the development of ecosystem approaches to fisheries management and raise questions on the potential impacts of climate change on the structure and productivity of this marine ecosystem.

  12. The Heating of the Solar Atmosphere: from the Bottom Up?

    Science.gov (United States)

    Winebarger, Amy

    2014-01-01

    The heating of the solar atmosphere remains a mystery. Over the past several decades, scientists have examined the observational properties of structures in the solar atmosphere, notably their temperature, density, lifetime, and geometry, to determine the location, frequency, and duration of heating. In this talk, I will review these observational results, focusing on the wealth of information stored in the light curves of structures in different spectral lines or channels available in the Solar Dynamics Observatory's Atmospheric Imaging Assembly, Hinode's X-ray Telescope and Extreme-ultraviolet Imaging Spectrometer, and the Interface Region Imaging Spectrograph. I will discuss some recent results from combined data sets that support the view that the heating of the solar atmosphere may be dominated by low, near-constant heating events.

  13. Towards Bottom-Up Analysis of Social Food

    OpenAIRE

    Rich, Jaclyn; Haddadi, Hamed; Hospedales, Timothy M.

    2016-01-01

    Social media provide a wealth of information for research into public health by providing a rich mix of personal data, location, hashtags, and social network information. Among these, Instagram has recently been the subject of many computational social science studies. However, despite Instagram's focus on image sharing, most studies have exclusively focused on the hashtag and social network structure. In this paper we perform the first large-scale content analysis of Instagram posts, addressi...

  14. Political will for better health, a bottom-up process.

    Science.gov (United States)

    De Ceukelaire, Wim; De Vos, Pol; Criel, Bart

    2011-09-01

    Lately, different voices in the global public health community have drawn attention to the interaction between the State and civil society in the context of reducing health inequities. A rights-based approach empowers people not only to claim their rights but also to demand accountability from the State. Lessons from history show that economic growth does not automatically have positive implications for population health. It may even be disruptive in the absence of strong stewardship and regulation by national and local public health authorities. The field research in which we have been involved over the past 20 years in the Philippines, Palestine, Cuba, and Europe confirms that organized communities and people's organizations can effectively pressure the State into action towards realizing the right to health. Class analysis, influencing power relations, and giving the State a central role have been identified as three key strategies of relevant social movements and NGOs. More interaction between academia and civil society organizations could help enhance and safeguard the societal relevance of public health research. Our own experience has shown that social movements and public health researchers have a lot to learn from one another.

  15. Glycan Node Analysis: A Bottom-up Approach to Glycomics.

    Science.gov (United States)

    Zaare, Sahba; Aguilar, Jesús S; Hu, Yueming; Ferdosi, Shadi; Borges, Chad R

    2016-01-01

    Synthesized in a non-template-driven process by enzymes called glycosyltransferases, glycans are key players in various significant intra- and extracellular events. Many pathological conditions, notably cancer, affect gene expression, which can in turn deregulate the relative abundance and activity levels of glycoside hydrolase and glycosyltransferase enzymes. Unique aberrant whole glycans resulting from deregulated glycosyltransferase(s) are often present in trace quantities within complex biofluids, making their detection difficult and sometimes stochastic. However, with proper sample preparation, one of the oldest forms of mass spectrometry (gas chromatography-mass spectrometry, GC-MS) can routinely detect the collection of branch-point and linkage-specific monosaccharides ("glycan nodes") present in complex biofluids. Complementary to traditional top-down glycomics techniques, the approach discussed herein involves the collection and condensation of each constituent glycan node in a sample into a single independent analytical signal, which provides detailed structural and quantitative information about changes to the glycome as a whole and reveals potentially deregulated glycosyltransferases. Improvements to the permethylation and subsequent liquid/liquid extraction stages provided herein enhance reproducibility and overall yield by facilitating minimal exposure of permethylated glycans to alkaline aqueous conditions. Modifications to the acetylation stage further increase the extent of reaction and overall yield. Despite their reproducibility, the overall yields of N-acetylhexosamine (HexNAc) partially permethylated alditol acetates (PMAAs) are shown to be inherently lower than their expected theoretical value relative to hexose PMAAs. 
Calculating the ratio of the area under the extracted ion chromatogram (XIC) for each individual hexose PMAA (or HexNAc PMAA) to the sum of such XIC areas for all hexoses (or HexNAcs) provides a new normalization method that facilitates relative quantification of individual glycan nodes in a sample. Although presently constrained in terms of its absolute limits of detection, this method expedites the analysis of clinical biofluids and shows considerable promise as a complementary approach to traditional top-down glycomics.
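The normalization described above (dividing each hexose or HexNAc PMAA's XIC area by the summed XIC areas of its class) can be sketched as follows; the node names and class labels are illustrative, not taken from the paper:

```python
def normalize_glycan_nodes(xic_areas, node_class):
    """Relative quantification of glycan nodes: divide each PMAA's extracted
    ion chromatogram (XIC) area by the total XIC area of its monosaccharide
    class (hexose or HexNAc), so values within a class sum to 1."""
    totals = {}
    for node, area in xic_areas.items():
        cls = node_class[node]
        totals[cls] = totals.get(cls, 0.0) + area
    return {node: area / totals[node_class[node]]
            for node, area in xic_areas.items()}

# Hypothetical nodes: terminal galactose, 2-linked mannose, 4-linked GlcNAc.
areas = {"t-Gal": 2.0, "2-Man": 6.0, "4-GlcNAc": 4.0}
cls = {"t-Gal": "hexose", "2-Man": "hexose", "4-GlcNAc": "HexNAc"}
normalized = normalize_glycan_nodes(areas, cls)
```

Because each value is a within-class fraction, the inherently lower absolute yields of HexNAc PMAAs do not distort comparisons between samples.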

  16. Quantitative bottom-up proteomics depends on digestion conditions.

    Science.gov (United States)

    Lowenthal, Mark S; Liang, Yuxue; Phinney, Karen W; Stein, Stephen E

    2014-01-07

    Accurate quantification is a fundamental requirement in the fields of proteomics and biomarker discovery, and for clinical diagnostic assays. To demonstrate the extent of quantitative variability in measurable peptide concentrations due to differences among "typical" protein digestion protocols, the model protein, human serum albumin (HSA), was subjected to enzymatic digestion using 12 different sample preparation methods and, separately, was examined through a comprehensive timecourse of trypsinolysis. A variety of digestion conditions were explored, including differences in digestion time, denaturant, source of enzyme, sample cleanup, and denaturation temperature, among others. Timecourse experiments compared differences in relative peptide concentrations for tryptic digestions ranging from 15 min to 48 h. A predigested, stable isotope-labeled ((15)N) form of the full-length HSA protein, expressed in yeast, was spiked into all samples prior to LC-MS analysis to compare yields of numerous varieties of tryptic peptides. Relative quantification was achieved by normalization of integrated extracted ion chromatograms (XICs) using liquid chromatography-tandem mass spectrometry (LC-MS/MS) by multiple-reaction monitoring (MRM) on a triple quadrupole (QQQ) MS. Related peptide fragmentation transitions and multiple peptide charge states were monitored for validation of quantitative results. Results demonstrate that protein concentration was not equal to tryptic peptide concentrations for most peptides, including so-called "proteotypic" peptides. Peptide release during digestion displayed complex kinetics dependent on digestion conditions and, by inference, on denatured protein structure. Hydrolysis rates at tryptic cleavage sites were also shown to be affected by differences in nearest and next-nearest amino acid residues.
The data suggesting nonstoichiometry of enzymatic protein digestions emphasizes the often overlooked difficulties for routine absolute protein quantification, and highlights the need for use of suitable internal standards and isotope dilution techniques.
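The isotope-dilution scheme described above (spiking a predigested (15)N-labeled form of the protein and normalizing analyte XICs against it) reduces, per peptide, to a light/heavy area ratio averaged over the monitored transitions. The function name and transition areas below are illustrative assumptions:

```python
def light_heavy_ratio(light_areas, heavy_areas):
    """Relative peptide yield from MRM data: the mean ratio of analyte (14N)
    to (15)N internal-standard XIC areas across the monitored transitions."""
    if len(light_areas) != len(heavy_areas):
        raise ValueError("transition lists must align")
    ratios = [l / h for l, h in zip(light_areas, heavy_areas)]
    return sum(ratios) / len(ratios)

# Two fragment-ion transitions monitored for one tryptic peptide (made-up areas).
ratio = light_heavy_ratio([1.2e6, 0.9e6], [1.0e6, 1.0e6])
```

Because the labeled standard experiences the same LC-MS conditions, digestion-dependent differences in peptide release show up directly as changes in this ratio across protocols.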

  17. Spintronics in the «Bottom-up» Approach

    Directory of Open Access Journals (Sweden)

    Yu.A. Kruglyak

    2014-11-01

    Full Text Available Basic topics of spintronics, such as the spin valve, interface resistance due to the mismatch of conduction modes, spin potentials, non-local spin voltage, spin moment and its transport, the Landau-Lifshitz-Gilbert equation and an explanation on its basis of why a magnet has an “easy axis”, nanomagnet dynamics driven by spin current, polarizers and analyzers of spin current, and the diffusion equation for ballistic transport and current in terms of non-equilibrium potentials, are discussed in the frame of the “bottom-up” approach of modern nanoelectronics.

  18. Pivots - A Bottom-Up Approach to Enhance Resilience

    Science.gov (United States)

    2015-12-01

    Table-of-contents and acronym-list excerpt from the report's front matter: "B. Proposed Model: Wrap-Around Services Business Incubator"; acronyms include SCORE (Service Corps of Retired Executives), SME (small and medium sized enterprise), SNA (social network analysis), SoVI... The excerpt continues: "...preparedness or mitigation. ... Numerous illustrations demonstrate how small business owners..."

  19. Bottom-Up Energy Analysis System - Methodology and Results

    Energy Technology Data Exchange (ETDEWEB)

    McNeil, Michael A. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Letschert, Virginie E. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Stephane, de la Rue du Can [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Ke, Jing [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

    2012-06-15

    The main objective of the development of BUENAS is to provide a global model with sufficient detail and accuracy for technical assessment of policy measures such as energy efficiency standards and labeling (EES&L) programs. In most countries where energy efficiency policies exist, the initial emphasis is on household appliances and lighting. Often, equipment used in commercial buildings, particularly heating, air conditioning and ventilation (HVAC) is also covered by EES&L programs. In the industrial sector, standards and labeling generally covers electric motors and distribution transformers, although a few more types of industrial equipment are covered by some programs, and there is a trend toward including more of them. In order to make a comprehensive estimate of the total potential impacts, development of the model prioritized coverage of as many end uses commonly targeted by EES&L programs as possible, for as many countries as possible.

  20. Fusion of Saliency Maps for Visual Attention Selection in Dynamic Scenes

    Directory of Open Access Journals (Sweden)

    Jiawei Xu

    2013-04-01

    Full Text Available The human vision system selectively processes visual information, reconciling its limited resources with the huge volume of visual input. Building attention models similar to the human visual attention system should be very beneficial to computer vision and machine intelligence; meanwhile, it has been a challenging task due to the complexity of the human brain and our limited understanding of the mechanisms underlying the human attention system. Previous studies emphasized static attention; however, motion features, which intuitively play key roles in the human attention system, have not been well integrated into previous models. Motion features such as motion direction are assumed to be processed within the dorsal visual and dorsal auditory pathways, and so far there is no systematic approach to extracting motion cues well. In this paper, we propose a generic Global Attention Model (GAM) system based on visual attention analysis. The computational saliency map is a superposition of a set of saliency maps obtained via different predefined approaches. We add three saliency maps together to reflect dominant motion features in the attention model; i.e., the fused saliency map at each frame is adjusted by the top-down, static, and motion saliency maps. In this way, the proposed attention model accommodates motion features so that it responds to real visual events in a manner similar to the human visual attention system under realistic circumstances. The visual challenges used in our experiments are selected from benchmark video sequences. We tested the GAM on several dynamic scenes, such as a traffic artery, a parachute landing, and surfing, with high speed and cluttered backgrounds. The experimental results showed that the GAM system demonstrates high robustness and real-time capability under complex dynamic scenes. Extensive evaluations based on comparisons with other attention models have
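The per-frame fusion step described above (adjusting the map by the top-down, static, and motion saliency maps) can be sketched as a normalized weighted superposition. The equal default weights and min-max normalization are assumptions, not the paper's exact scheme:

```python
import numpy as np

def minmax(m):
    # Rescale a map to [0, 1]; a constant map is returned as zeros.
    m = np.asarray(m, dtype=float)
    rng = m.max() - m.min()
    return (m - m.min()) / rng if rng > 0 else np.zeros_like(m)

def fuse_saliency(top_down, static, motion, weights=(1.0, 1.0, 1.0)):
    """Fused per-frame saliency: weighted sum of the three normalized maps."""
    maps = [minmax(m) for m in (top_down, static, motion)]
    fused = sum(w * m for w, m in zip(weights, maps))
    return minmax(fused)
```

Raising the motion weight relative to the other two biases the fused map toward moving targets, which is the behavior the abstract attributes to dynamic scenes.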