WorldWideScience

Sample records for scale collective modeling

  1. Large-scale collection and annotation of gene models for date palm (Phoenix dactylifera L.).

    Science.gov (United States)

    Zhang, Guangyu; Pan, Linlin; Yin, Yuxin; Liu, Wanfei; Huang, Dawei; Zhang, Tongwu; Wang, Lei; Xin, Chengqi; Lin, Qiang; Sun, Gaoyuan; Ba Abdullah, Mohammed M; Zhang, Xiaowei; Hu, Songnian; Al-Mssallem, Ibrahim S; Yu, Jun

    2012-08-01

    The date palm (Phoenix dactylifera L.), famed for its sugar-rich fruits (dates) and cultivated by humans since 4,000 B.C., is an economically important crop in the Middle East, Northern Africa, and increasingly in other places where climates are suitable. Despite a long history of human cultivation, the understanding of P. dactylifera genetics and molecular biology is rather limited, hindered by a lack of high-quality genomic and transcriptomic data. Here we report a large-scale effort in generating gene models (assembled expressed sequence tags, or ESTs, mapped to a genome assembly) for P. dactylifera, using the long-read pyrosequencing platform (Roche/454 GS FLX Titanium) at high coverage. We built fourteen cDNA libraries from different P. dactylifera tissues (cultivar Khalas) and acquired 15,778,993 raw sequencing reads, about one million per library; the pooled sequences were assembled into 67,651 non-redundant contigs and 301,978 singletons. We annotated 52,725 contigs based on plant databases and 45 contigs based on functional domains by reference to the Pfam database. From the annotated contigs, we assigned GO (Gene Ontology) terms to 36,086 contigs and KEGG pathways to 7,032 contigs. Our comparative analysis showed that 70.6% (47,930), 69.4% (47,089), 68.4% (46,441), and 69.3% (47,048) of the P. dactylifera gene models are shared with rice, sorghum, Arabidopsis, and grapevine, respectively. We also classified our gene models into housekeeping and tissue-specific genes based on their tissue specificity.

  2. Genome-Scale Models

    DEFF Research Database (Denmark)

    Bergdahl, Basti; Sonnenschein, Nikolaus; Machado, Daniel

    2016-01-01

    An introduction to genome-scale models, and how to build and use them, is given in this chapter. Genome-scale models have become an important part of systems biology and metabolic engineering, and are increasingly used in research, both in academia and in industry, both for modeling chemical...

  3. Wyoming greater sage-grouse habitat prioritization: a collection of multi-scale seasonal models and geographic information systems land management tools

    Science.gov (United States)

    O'Donnell, Michael S.; Aldridge, Cameron L.; Doherty, Kevin E.; Fedy, Bradley C.

    2015-01-01

    With rapidly changing landscape conditions within Wyoming and the potential effects of landscape changes on sage-grouse habitat, land managers and conservation planners, among others, need procedures to assess the location and juxtaposition of important habitats, land-cover, and land-use patterns to balance wildlife requirements with multiple human land uses. Biologists frequently develop habitat-selection studies to identify prioritization efforts for species of conservation concern to increase understanding and help guide habitat-conservation efforts. Recently, the authors undertook a large-scale collaborative effort that developed habitat-selection models for Greater Sage-grouse (Centrocercus urophasianus) across large landscapes in Wyoming, USA and for multiple life-stages (nesting, late brood-rearing, and winter). We developed these habitat models using resource selection functions, based upon sage-grouse telemetry data collected for localized studies and within each life-stage. The models allowed us to characterize and spatially predict seasonal sage-grouse habitat use in Wyoming. Due to the quantity of models, the diversity of model predictors (in the form of geographic information system data) produced by analyses, and the variety of potential applications for these data, we present here a resource that complements our published modeling effort, which will further support land managers.

  5. Multi-scale analysis of collective behavior in 2D self-propelled particle models of swarms: An Advection-Diffusion with Memory Approach

    Science.gov (United States)

    Raghib, Michael; Levin, Simon; Kevrekidis, Ioannis

    2010-05-01

    Self-propelled particle models (SPPs) are a class of agent-based simulations that have been successfully used to explore questions related to various flavors of collective motion, including flocking, swarming, and milling. These models typically consist of particle configurations where each particle moves with constant speed but changes its orientation in response to local averages of the positions and orientations of its neighbors found within some interaction region. These local averages are based on 'social interactions', which include avoidance of collisions, attraction, and polarization, and are designed to generate configurations that move as a single object. Errors made by the individuals in estimating the state of the local configuration are modeled as a random rotation of the updated orientation resulting from the social rules. More recently, SPPs have been introduced in the context of collective decision-making, where the main innovation consists of dividing the population into naïve and 'informed' individuals. Whereas naïve individuals follow the classical collective motion rules, members of the informed sub-population update their orientations according to a weighted average of the social rules and a fixed 'preferred' direction shared by all the informed individuals. Collective decision-making is then understood in terms of the ability of the informed sub-population to steer the whole group along the preferred direction. Summary statistics of collective decision-making are defined in terms of the stochastic properties of the random walk followed by the centroid of the configuration as the particles move about, in particular the scaling behavior of the mean squared displacement (MSD). For the region of parameters where the group remains coherent, we note that there are two characteristic time scales: first, there is an anomalous transient shared by both purely naïve and informed configurations, i.e. the scaling exponent lies between 1 and
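
    The informed-versus-naïve update rule described in this abstract can be sketched as a minimal Vicsek-type step. The function name, parameter names, and default values below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def update_orientations(pos, theta, informed, preferred, w=0.5, r=1.0, eta=0.1, rng=None):
    """One orientation-update step of a 2D self-propelled particle model.

    pos: (N, 2) positions; theta: (N,) headings (radians);
    informed: (N,) bool mask of informed individuals;
    preferred: preferred direction shared by all informed individuals;
    w: weight informed agents give the preferred direction;
    r: interaction radius; eta: noise amplitude (estimation error).
    """
    rng = rng or np.random.default_rng()
    new_theta = np.empty(len(theta))
    for i in range(len(theta)):
        # Social rule: average heading of neighbours within radius r (incl. self).
        nbr = np.linalg.norm(pos - pos[i], axis=1) <= r
        avg = np.arctan2(np.sin(theta[nbr]).mean(), np.cos(theta[nbr]).mean())
        if informed[i]:
            # Informed agents blend the social direction with the shared preference.
            avg = np.arctan2((1 - w) * np.sin(avg) + w * np.sin(preferred),
                             (1 - w) * np.cos(avg) + w * np.cos(preferred))
        # Estimation errors are modelled as a random rotation of the result.
        new_theta[i] = avg + eta * rng.uniform(-np.pi, np.pi)
    return new_theta
```

    In such a sketch, the decision-making statistics (e.g. the centroid MSD) would be computed by iterating this update while advancing each particle along its heading at constant speed.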

  6. Dynamic scaling regimes of collective decision making

    CERN Document Server

    Gronlund, Andreas; Minnhagen, Petter

    2008-01-01

    We investigate a social system of agents faced with a binary choice. We assume there is a correct, or beneficial, outcome of this choice. Furthermore, we assume agents are influenced by others in making their decision, and that the agents can obtain information that may guide them towards making a correct decision. The dynamic model we propose is of nonequilibrium type, converging to a final decision. We run it on random graphs and scale-free networks. On random graphs, we find two distinct regions in terms of the "finalizing time" -- the time until all agents have finalized their decisions. On scale-free networks, on the other hand, there do not seem to be any such distinct scaling regions.

  7. Large-scale collective modeling of the final 'freeze out' stages of energetic heavy ion reactions and calculation of single-particle measurables from these models

    Energy Technology Data Exchange (ETDEWEB)

    Nyiri, Agnes

    2005-07-01

    The goal of this PhD project was to develop the already existing, but far from complete, Multi Module Model, focusing especially on the last module, which describes the final stages of a heavy ion collision, as this module was still missing. The major original achievements summarized in this thesis concern the freeze out problem and the calculation of an important measurable, the anisotropic flow. Summary of results: Freeze out: The importance of freeze out models is that they allow the evaluation of observables, which can then be compared to experimental results. It is therefore crucial to find a realistic freeze out description, which has proved to be a non-trivial task. Recently, several kinetic freeze out models have been developed. Building on the earlier results, we have introduced new ideas and improved models, which may contribute to a more realistic description of the freeze out process. We have investigated the applicability of the Boltzmann Transport Equation (BTE) to describing dynamical freeze out. We have introduced the so-called Modified Boltzmann Transport Equation (MBTE), which has a form very similar to that of the BTE but takes into account those characteristics of the freeze out (FO) process which the BTE cannot handle, e.g., the rapid change of the phase-space distribution function in the direction normal to the finite FO layer. We have shown that the main features of earlier ad hoc kinetic FO models can be obtained from the BTE and MBTE. We have discussed the qualitative differences between the two approaches and presented some quantitative comparisons as well. Since the introduced modification of the BTE makes it very difficult to solve the FO problem from first principles, it is important to work out simplified phenomenological models which can explain the basic features of the FO process. We have built and discussed such a model. Flow analysis: The other main subject of this thesis has been collective flow in heavy ion collisions. Collective flow from ultra

  9. Empirical questions for collective-behaviour modelling

    Indian Academy of Sciences (India)

    2015-02-04

    The collective behaviour of groups of social animals has been an active topic of study across many disciplines, and has a long history of modelling. Classical models have been successful in capturing the large-scale patterns formed by animal aggregations, but fare less well in accounting for details, ...

  10. An intelligence collection management model.

    OpenAIRE

    Gandy, Thomas A.

    1984-01-01

    Approved for public release; distribution is unlimited. This thesis examines the structure and functions of a generalized tactical intelligence collection system. Included are its position in the intelligence system structure, its relationship with other activities in the intelligence system, and the organization and control of its components. A mathematical optimization model of a simplified intelligence collection system is developed to explore several issues related to intelligence collect...

  11. Collective action and rationality models

    Directory of Open Access Journals (Sweden)

    Luis Miguel Miller Moya

    2004-01-01

    Full Text Available The Olsonian theory of collective action (Olson, 1965) assumes a model of economic rationality, based on a simple calculus of costs and benefits, that can hardly be sustained today, given the models of rationality recently proposed by several fields of research. From these fields I concentrate on two specific proposals, namely evolutionary game theory and, above all, the theory of bounded rationality. Both alternatives are especially fruitful for proposing models that do not require a maximizing rationality or environments of complete and perfect information. Their approaches, based on the possibility of individual learning over time, have contributed to the analysis of the emergence of social norms, which is essential to resolving problems of cooperation. This article therefore argues that these two new theoretical contributions make possible a fundamental advance in the study of collective action.

  12. HOCOMOCO: towards a complete collection of transcription factor binding models for human and mouse via large-scale ChIP-Seq analysis

    KAUST Repository

    Kulakovskiy, Ivan V.

    2017-10-31

    We present a major update of the HOCOMOCO collection that consists of patterns describing DNA binding specificities for human and mouse transcription factors. In this release, we profited from a nearly doubled volume of published in vivo experiments on transcription factor (TF) binding to expand the repertoire of binding models, replace low-quality models previously based on in vitro data only and cover more than a hundred TFs with previously unknown binding specificities. This was achieved by systematic motif discovery from more than five thousand ChIP-Seq experiments uniformly processed within the BioUML framework with several ChIP-Seq peak calling tools and aggregated in the GTRD database. HOCOMOCO v11 contains binding models for 453 mouse and 680 human transcription factors and includes 1302 mononucleotide and 576 dinucleotide position weight matrices, which describe primary binding preferences of each transcription factor and reliable alternative binding specificities. An interactive interface and bulk downloads are available on the web: http://hocomoco.autosome.ru and http://www.cbrc.kaust.edu.sa/hocomoco11. In this release, we complement HOCOMOCO by MoLoTool (Motif Location Toolbox, http://molotool.autosome.ru) that applies HOCOMOCO models for visualization of binding sites in short DNA sequences.
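
    As an illustration of how a mononucleotide position weight matrix is applied, the sketch below scores every window of a DNA sequence against a toy log-odds matrix. The matrix values are invented for illustration; real HOCOMOCO models would be obtained from the URLs above:

```python
BASE = {"A": 0, "C": 1, "G": 2, "T": 3}

# Toy log-odds position weight matrix for a length-4 motif (rows = positions,
# columns = A, C, G, T). The values are illustrative, not a real HOCOMOCO model.
PWM = [
    [ 1.2, -0.8, -0.5, -1.0],
    [-1.1,  1.5, -0.9, -0.7],
    [-0.6, -0.9,  1.4, -1.2],
    [-1.0, -0.6, -0.8,  1.3],
]

def scan(seq, pwm):
    """Score every window of len(pwm) in seq; return (best_score, best_offset)."""
    m = len(pwm)
    best_score, best_offset = float("-inf"), -1
    for i in range(len(seq) - m + 1):
        score = sum(pwm[j][BASE[seq[i + j]]] for j in range(m))
        if score > best_score:
            best_score, best_offset = score, i
    return best_score, best_offset
```

    Here the matrix rewards the consensus ACGT, so scanning a sequence containing that motif reports its window as the best hit.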

  13. HOCOMOCO: towards a complete collection of transcription factor binding models for human and mouse via large-scale ChIP-Seq analysis.

    Science.gov (United States)

    Kulakovskiy, Ivan V; Vorontsov, Ilya E; Yevshin, Ivan S; Sharipov, Ruslan N; Fedorova, Alla D; Rumynskiy, Eugene I; Medvedeva, Yulia A; Magana-Mora, Arturo; Bajic, Vladimir B; Papatsenko, Dmitry A; Kolpakov, Fedor A; Makeev, Vsevolod J

    2018-01-04

    We present a major update of the HOCOMOCO collection that consists of patterns describing DNA binding specificities for human and mouse transcription factors. In this release, we profited from a nearly doubled volume of published in vivo experiments on transcription factor (TF) binding to expand the repertoire of binding models, replace low-quality models previously based on in vitro data only and cover more than a hundred TFs with previously unknown binding specificities. This was achieved by systematic motif discovery from more than five thousand ChIP-Seq experiments uniformly processed within the BioUML framework with several ChIP-Seq peak calling tools and aggregated in the GTRD database. HOCOMOCO v11 contains binding models for 453 mouse and 680 human transcription factors and includes 1302 mononucleotide and 576 dinucleotide position weight matrices, which describe primary binding preferences of each transcription factor and reliable alternative binding specificities. An interactive interface and bulk downloads are available on the web: http://hocomoco.autosome.ru and http://www.cbrc.kaust.edu.sa/hocomoco11. In this release, we complement HOCOMOCO by MoLoTool (Motif Location Toolbox, http://molotool.autosome.ru) that applies HOCOMOCO models for visualization of binding sites in short DNA sequences. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.

  14. International Symposia on Scale Modeling

    CERN Document Server

    Ito, Akihiko; Nakamura, Yuji; Kuwana, Kazunori

    2015-01-01

    This volume thoroughly covers scale modeling and serves as the definitive source of information on scale modeling as a powerful simplifying and clarifying tool used by scientists and engineers across many disciplines. The book elucidates techniques used when it would be too expensive, or too difficult, to test a system of interest in the field. Topics addressed in the current edition include scale modeling to study weather systems, diffusion of pollution in air or water, chemical processes in 3-D turbulent flow, multiphase combustion, flame propagation, biological systems, behavior of materials at nano- and micro-scales, and many more. This is an ideal book for students, both graduate and undergraduate, as well as engineers and scientists interested in the latest developments in scale modeling. This book also: enables readers to evaluate essential and salient aspects of profoundly complex systems, mechanisms, and phenomena at scale; offers engineers and designers a new point of view, liberating creative and inno...

  15. Collective Space-Sensing Coordinates Pattern Scaling in Engineered Bacteria

    National Research Council Canada - National Science Library

    Cao, Yangxiaolu; Ryser, Marc D; Payne, Stephen; Li, Bochong; Rao, Christopher V; You, Lingchong

    2016-01-01

    … We found that the ring width exhibits perfect scale invariance to the colony size. Our analysis revealed a collective space-sensing mechanism, which entails sequential actions of an integral feedback loop and an incoherent feedforward loop …

  16. Collective memory in primate conflict implied by temporal scaling collapse.

    Science.gov (United States)

    Lee, Edward D; Daniels, Bryan C; Krakauer, David C; Flack, Jessica C

    2017-09-01

    In biological systems, prolonged conflict is costly, whereas contained conflict permits strategic innovation and refinement. Causes of variation in conflict size and duration are not well understood. We use a well-studied primate society model system to study how conflicts grow. We find conflict duration is a 'first to fight' growth process that scales superlinearly with the number of possible pairwise interactions. This is in contrast with a 'first to fail' process that characterizes peaceful durations. Rescaling conflict distributions reveals a universal curve, showing that the typical time scale of correlated interactions exceeds nearly all individual fights. This temporal correlation implies collective memory across pairwise interactions beyond those assumed in standard models of contagion growth or iterated evolutionary games. By accounting for memory, we make quantitative predictions for interventions that mitigate or enhance the spread of conflict. Managing conflict involves balancing the efficient use of limited resources with an intervention strategy that allows for conflict while keeping it contained and controlled. © 2017 The Author(s).

  17. Leadership solves collective action problems in small-scale societies

    Science.gov (United States)

    Glowacki, Luke; von Rueden, Chris

    2015-01-01

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. PMID:26503683

  18. Leadership solves collective action problems in small-scale societies.

    Science.gov (United States)

    Glowacki, Luke; von Rueden, Chris

    2015-12-05

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social connections. These attributes can reduce the costs leaders incur and increase the efficacy of leadership. Leadership becomes more institutional in domains of collective action, such as resolution of intragroup conflict, where collective action failure threatens group integrity. Together these data support the hypothesis that leadership is an important means by which collective action problems are overcome in small-scale societies. © 2015 The Author(s).

  19. Highly Scalable Trip Grouping for Large Scale Collective Transportation Systems

    DEFF Research Database (Denmark)

    Gidofalvi, Gyozo; Pedersen, Torben Bach; Risch, Tore

    2008-01-01

    Transportation-related problems, like road congestion, parking, and pollution, are increasing in most cities. In order to reduce traffic, recent work has proposed methods for vehicle sharing, for example sharing cabs by grouping "close-by" cab requests, thus minimizing transportation cost and utilizing cab space. However, the methods published so far do not scale to large data volumes, which is necessary to facilitate large-scale collective transportation systems, e.g., ride-sharing systems for large cities. This paper presents highly scalable trip grouping algorithms, which generalize previous...
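
    The idea of grouping "close-by" requests can be illustrated with a simple greedy pass. This is only a sketch of the grouping concept, not the scalable algorithms the paper presents, and the threshold and capacity parameters are invented:

```python
from math import hypot

def group_trips(requests, max_dist=1.0, capacity=4):
    """Greedily group trip requests whose pickup points lie close together.

    requests: list of (x, y) pickup coordinates. Returns a list of groups
    (lists of request indices), each holding at most `capacity` riders;
    a request joins the first group whose seed pickup is within max_dist.
    """
    groups = []
    for i, (x, y) in enumerate(requests):
        for group in groups:
            seed_x, seed_y = requests[group[0]]
            if len(group) < capacity and hypot(x - seed_x, y - seed_y) <= max_dist:
                group.append(i)
                break
        else:
            groups.append([i])
    return groups
```

    A scalable variant would replace the linear scan over existing groups with a spatial index; handling that scan efficiently at large data volumes is exactly the concern the paper's algorithms address.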

  20. Leadership solves collective action problems in small-scale societies

    OpenAIRE

    Glowacki, Luke; von Rueden, Chris

    2015-01-01

    Observation of leadership in small-scale societies offers unique insights into the evolution of human collective action and the origins of sociopolitical complexity. Using behavioural data from the Tsimane forager-horticulturalists of Bolivia and Nyangatom nomadic pastoralists of Ethiopia, we evaluate the traits of leaders and the contexts in which leadership becomes more institutional. We find that leaders tend to have more capital, in the form of age-related knowledge, body size or social c...

  1. An Intelligence Collection Management Model.

    Science.gov (United States)

    1984-06-01

    Classification of intelligence collection requirements in terms of the methodology developed in Chapter Five. … Appendix A: A Method of Ranking … of Artificial Intelligence Tools and Techniques … W. Ross Ashby, An Introduction to Cybernetics. New York

  2. Modelling of rate effects at multiple scales

    DEFF Research Database (Denmark)

    Pedersen, R.R.; Simone, A.; Sluys, L. J.

    2008-01-01

    … the length scale in the meso-model and the macro-model can be coupled. In this fashion, a bridging of length scales can be established. A computational analysis of a Split Hopkinson bar test at medium and high impact load is carried out at macro-scale and meso-scale, including information from the micro-scale.

  3. Finite-size scaling a collection of reprints

    CERN Document Server

    1988-01-01

    Over the past few years, finite-size scaling has become an increasingly important tool in studies of critical systems. This is partly due to an increased understanding of finite-size effects by analytical means, and partly due to our ability to treat larger systems with large computers. The aim of this volume was to collect those papers which have been important for this progress and which illustrate novel applications of the method. The emphasis has been placed on relatively recent developments, including the use of the ε-expansion and of conformal methods.
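
    The core idea can be stated compactly: near criticality, an observable M measured at reduced temperature t in a system of linear size L is assumed to obey M(t, L) = L^(-β/ν) f(t L^(1/ν)), so plotting suitably rescaled variables collapses data from all sizes onto the single curve f. A minimal sketch, using the 2D Ising exponents purely as example values:

```python
import numpy as np

def collapse(t, M, L, beta_over_nu, one_over_nu):
    """Rescale raw data according to the finite-size scaling ansatz
    M(t, L) = L**(-beta/nu) * f(t * L**(1/nu)). At the correct exponents,
    the returned points (x, y) from all system sizes fall on one curve f."""
    x = t * L ** one_over_nu
    y = M * L ** beta_over_nu
    return x, y

# Synthetic data generated from the ansatz itself, with f(x) = exp(-x**2),
# collapses exactly when rescaled with the exponents used to generate it
# (here beta/nu = 0.125 and 1/nu = 1.0, the 2D Ising values).
```

    In practice the exponents are unknown, and one searches for the values that minimize the scatter of the rescaled data, which is how finite-size scaling extracts critical exponents from finite systems.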

  4. Integrating Local Scale Drainage Measures in Meso Scale Catchment Modelling

    Directory of Open Access Journals (Sweden)

    Sandra Hellmers

    2017-01-01

    Full Text Available This article presents a methodology to optimize the integration of local-scale drainage measures in catchment modelling. The methodology makes it possible to zoom into the processes (physically, spatially, and temporally) where detailed physically based computation is required, and to zoom out where lumped, conceptualized approaches are applied. It allows the definition of parameters and computation procedures on different spatial and temporal scales. Three methods are developed to integrate features of local-scale drainage measures in catchment modelling: (1) different types of local drainage measures are spatially integrated in catchment modelling by a data mapping; (2) interlinked drainage features between data objects are enabled on the meso, local, and micro scales; (3) a method for modelling multiple interlinked layers on the micro scale is developed. For the computation of flow routing on the meso scale, the results of the local-scale measures are aggregated according to their contributing inlet in the network structure. The implementation of the methods is realized in a semi-distributed rainfall-runoff model. The implemented micro-scale approach is validated with a laboratory physical model to confirm the credibility of the model. A study of a river catchment of 88 km² illustrates the applicability of the model on the regional scale.

  5. Harvesting Collective Trend Observations from Large Scale Study Trips

    DEFF Research Database (Denmark)

    Eriksen, Kaare; Ovesen, Nis

    2014-01-01

    To enhance industrial design students' decoding and understanding of the technological possibilities and the diversity of needs and preferences in different cultures, it is not unusual to arrange study trips where such students acquire a broader view to strengthen their professional skills … trips for engineering students in architecture & design and the results from crowd-collecting a large amount of trend observations as well as the derived experience from using the method on a large-scale study trip. The method has been developed and formalized in relation to study trips with large numbers of students to the annual Milan Design Week and the Milan fair 'I Saloni' in Italy. The present paper describes and evaluates the method, the theory behind it, the practical execution of the trend registration, the results from the activities, and future perspectives.

  6. Memoised Garbage Collection for Software Model Checking

    NARCIS (Netherlands)

    Nguyen, V.Y.; Ruys, T.C.; Kowalewski, S.; Philippou, A.

    Virtual machine based software model checkers like JPF and MoonWalker spend up to half of their verification time on garbage collection. This is no surprise, as after nearly each transition the heap has to be cleaned of garbage. To improve this, this paper presents the Memoised Garbage Collection

  7. Modeling and simulation of blood collection systems.

    Science.gov (United States)

    Alfonso, Edgar; Xie, Xiaolan; Augusto, Vincent; Garraud, Olivier

    2012-03-01

    This paper addresses the modeling and simulation of blood collection systems in France for both fixed-site and mobile blood collection, with walk-in whole blood donors and scheduled plasma and platelet donors. Petri net models are first proposed to precisely describe the different blood collection processes, donor behaviors, their material/human resource requirements, and relevant regulations. The Petri net models are then enriched with quantitative modeling of donor arrivals, donor behaviors, activity times, and resource capacity. Relevant performance indicators are defined. The resulting simulation models can be straightforwardly implemented with any simulation language. Numerical experiments are performed to show how the simulation models can be used to select, for different walk-in donor arrival patterns, appropriate human resource planning and donor appointment strategies.

  8. Brane World Models Need Low String Scale

    CERN Document Server

    Antoniadis, Ignatios; Calmet, Xavier

    2011-01-01

    Models with large extra dimensions offer the possibility of the Planck scale being of order the electroweak scale, thus alleviating the gauge hierarchy problem. We show that these models suffer from a breakdown of unitarity at around three quarters of the low effective Planck scale. An obvious candidate to fix the unitarity problem is string theory. We therefore argue that it is necessary for the string scale to appear below the effective Planck scale and that the first signature of such models would be string resonances. We further translate experimental bounds on the string scale into bounds on the effective Planck scale.

  9. Modeling collective cell migration in geometric confinement

    Science.gov (United States)

    Tarle, Victoria; Gauquelin, Estelle; Vedula, S. R. K.; D'Alessandro, Joseph; Lim, C. T.; Ladoux, Benoit; Gov, Nir S.

    2017-06-01

    Monolayer expansion has generated great interest as a model system to study collective cell migration. During such an expansion the culture front often develops ‘fingers’, which we have recently modeled using a proposed feedback between the curvature of the monolayer’s leading edge and the outward motility of the edge cells. We show that this model is able to explain the puzzling observed increase of collective cellular migration speed of a monolayer expanding into thin stripes, as well as describe the behavior within different confining geometries that were recently observed in experiments. These comparisons give support to the model and emphasize the role played by the edge cells and the edge shape during collective cell motion.

  10. Modelling fuel consumption in kerbside source segregated food waste collection: separate collection and co-collection.

    Science.gov (United States)

    Chu, T W; Heaven, S; Gredmaier, L

    2015-01-01

    Source separated food waste is a valuable feedstock for renewable energy production through anaerobic digestion, and a variety of collection schemes for this material have recently been introduced. The aim of this study was to identify options that maximize collection efficiency and reduce fuel consumption as part of the overall energy balance. A mechanistic model was developed to calculate the fuel consumption of kerbside collection of source segregated food waste, co-mingled dry recyclables and residual waste. A hypothetical city of 20,000 households was considered and nine scenarios were tested with different combinations of collection frequencies, vehicle types and waste types. The results showed that the potential fuel savings from weekly and fortnightly co-collection of household waste range from 7.4% to 22.4% and 1.8% to 26.6%, respectively, when compared to separate collection. A compartmentalized vehicle split 30:70 always performed better than one with two compartments of equal size. Weekly food waste collection with alternate weekly collection of the recyclables and residual waste by two-compartment collection vehicles was the best option to reduce the overall fuel consumption.
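
    The kind of mechanistic comparison the abstract describes can be caricatured in a few lines. This toy model and all of its numbers are illustrative assumptions, not the paper's calibrated model:

```python
def round_fuel(households, km_between_stops, fuel_per_km, stop_fuel_per_visit):
    """Fuel (litres) for one pass of a kerbside round: drive-by fuel between
    stops plus the fuel spent stopping and emptying at each household."""
    return households * (km_between_stops * fuel_per_km + stop_fuel_per_visit)

# Separate collection of two waste streams: the round is driven twice.
separate = 2 * round_fuel(20000, 0.05, 0.4, 0.01)
# Co-collection: a two-compartment vehicle drives the round once but still
# empties a container per stream, so per-visit stop fuel roughly doubles.
co_collected = round_fuel(20000, 0.05, 0.4, 2 * 0.01)
saving = 1 - co_collected / separate  # fraction of fuel saved
```

    With these invented numbers, co-collection saves the drive-by fuel of one full pass; the 7.4-26.6% savings reported above come from a far more detailed model of vehicle types, compartment splits, and collection frequencies.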

  11. Consistent quadrupole-octupole collective model

    Science.gov (United States)

    Dobrowolski, A.; Mazurek, K.; Góźdź, A.

    2016-11-01

    Within this work we present a consistent approach to quadrupole-octupole collective vibrations coupled with the rotational motion. A realistic collective Hamiltonian with variable mass-parameter tensor and potential, obtained through the macroscopic-microscopic Strutinsky-like method with a particle-number-projected BCS (Bardeen-Cooper-Schrieffer) approach in the full vibrational and rotational, nine-dimensional collective space, is diagonalized in the basis of projected harmonic oscillator eigensolutions. This orthogonal basis of zero-, one-, two-, and three-phonon oscillator-like functions in the vibrational part, coupled with the corresponding Wigner function, is, in addition, symmetrized with respect to the so-called symmetrization group appropriate to the collective space of the model; in the present model this is the D4 group acting in the body-fixed frame. This symmetrization procedure is applied in order to ensure the uniqueness of the Hamiltonian eigensolutions with respect to the laboratory coordinate system. The symmetrization is obtained using projection onto the irreducible representations of this group. The model generates the quadrupole ground-state spectrum as well as the lowest negative-parity spectrum in the 156Gd nucleus. The interband and intraband B(E1) and B(E2) reduced transition probabilities are also calculated within those bands and compared with recent experimental results for this nucleus. Such a collective approach is helpful in searching for fingerprints of possible high-rank symmetries (e.g., octahedral and tetrahedral) in nuclear collective bands.

  12. Locust Collective Motion and Its Modeling.

    Directory of Open Access Journals (Sweden)

    Gil Ariel

    2015-12-01

    Over the past decade, technological advances in experimental and animal tracking techniques have motivated a renewed theoretical interest in animal collective motion and, in particular, locust swarming. This review offers a comprehensive biological background followed by a comparative analysis of recent models of locust collective motion, in particular locust marching, their settings, and underlying assumptions. We describe a wide range of recent modeling and simulation approaches, from discrete agent-based models of self-propelled particles to continuous models of integro-differential equations, aimed at describing and analyzing the fascinating phenomenon of locust collective motion. These modeling efforts have a dual role: the first views locusts as a quintessential example of animal collective motion; as such, these models aim at abstraction and coarse-graining, often utilizing the tools of statistical physics. The second, which originates from a more biological perspective, views locust swarming as a scientific problem of its own exceptional merit, where the main goal is the analysis and prediction of natural swarm dynamics. We discuss the properties of swarm dynamics using the tools of statistical physics, as well as the implications for laboratory experiments and natural swarms. Finally, we stress the importance of a combined interdisciplinary, biological and theoretical effort in successfully confronting the challenges that locusts pose at both the theoretical and practical levels.

  13. Mob control models of threshold collective behavior

    CERN Document Server

    Breer, Vladimir V; Rogatkin, Andrey D

    2017-01-01

    This book presents mathematical models of mob control with threshold (conformity) collective decision-making of the agents. Based on the results of analysis of the interconnection between the micro- and macromodels of active network structures, it considers the static (deterministic, stochastic and game-theoretic) and dynamic (discrete- and continuous-time) models of mob control, and highlights models of informational confrontation. Many of the results are applicable not only to mob control problems, but also to control problems arising in social groups, online social networks, etc. Aimed at researchers and practitioners, it is also a valuable resource for undergraduate and postgraduate students as well as doctoral candidates specializing in the field of collective behavior modeling.
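    The core mechanism the book builds on can be illustrated with a minimal threshold-dynamics sketch (a Granovetter-style conformity cascade; the population size, threshold distribution and starting fractions below are hypothetical illustrations, not taken from the book):

```python
import numpy as np

def mob_dynamics(thresholds, active0, steps=100):
    """Iterate threshold (conformity) dynamics: an agent becomes active
    once the current fraction of active agents reaches its threshold."""
    frac = active0
    history = [frac]
    for _ in range(steps):
        frac = float(np.mean(thresholds <= frac))
        history.append(frac)
        if history[-1] == history[-2]:  # exact fixed point reached
            break
    return history

# Hypothetical population: conformity thresholds clustered around 0.5
rng = np.random.default_rng(0)
thresholds = np.clip(rng.normal(0.5, 0.15, size=10_000), 0, 1)

calm = mob_dynamics(thresholds, active0=0.3)  # small excitation dies out
riot = mob_dynamics(thresholds, active0=0.7)  # large one cascades to the whole mob
print(f"from 30% active -> {calm[-1]:.2f}; from 70% active -> {riot[-1]:.2f}")
```

    The bistability (the same population either calms down or cascades, depending only on the initial excitation) is the qualitative feature that threshold models of mob behavior are built to capture and control.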

  14. The modelling cycle for collective animal behaviour.

    Science.gov (United States)

    Sumpter, David J T; Mann, Richard P; Perna, Andrea

    2012-12-06

    Collective animal behaviour is the study of how interactions between individuals produce group-level patterns, and why these interactions have evolved. This study has proved itself uniquely interdisciplinary, involving physicists, mathematicians and engineers as well as biologists. Almost all experimental work in this area is related directly or indirectly to mathematical models, with regular movement back and forth between models, experimental data and statistical fitting. In this paper, we describe how the modelling cycle works in the study of collective animal behaviour. We classify studies as addressing questions at different levels or linking different levels, i.e. as local, local to global, global to local or global. We also describe three distinct approaches (theory-driven, data-driven and model selection) to these questions. We show, with reference to our own research on species across different taxa, how we move between these different levels of description and how these various approaches can be applied to link levels together.

  15. Developing New Models for Collection Development.

    Science.gov (United States)

    Stoffle, Carla J.; Fore, Janet; Allen, Barbara

    1999-01-01

    Discusses the need to develop new models for collection development in academic libraries, based on experiences at the University of Arizona. Highlights include changes in the organizational chart; focusing on users' information goals and needs; integrative services; shared resources; interlibrary loans; digital technology; and funding. (LRW)

  16. Collection Efficiency and Ice Accretion Characteristics of Two Full Scale and One 1/4 Scale Business Jet Horizontal Tails

    Science.gov (United States)

    Bidwell, Colin S.; Papadakis, Michael

    2005-01-01

    Collection efficiency and ice accretion calculations have been made for a series of business jet horizontal tail configurations using a three-dimensional panel code, an adaptive grid code, and the NASA Glenn LEWICE3D grid based ice accretion code. The horizontal tail models included two full scale wing tips and a 25 percent scale model. Flow solutions for the horizontal tails were generated using the PMARC panel code. Grids used in the ice accretion calculations were generated using the adaptive grid code ICEGRID. The LEWICE3D grid based ice accretion program was used to calculate impingement efficiency and ice shapes. Ice shapes typifying rime and mixed icing conditions were generated for a 30 minute hold condition. All calculations were performed on an SGI Octane computer. The results have been compared to experimental flow and impingement data. In general, the calculated flow and collection efficiencies compared well with experiment, and the ice shapes appeared representative of the rime and mixed icing conditions for which they were calculated.

  17. Large Scale Computations in Air Pollution Modelling

    DEFF Research Database (Denmark)

    Zlatev, Z.; Brandt, J.; Builtjes, P. J. H.

    Proceedings of the NATO Advanced Research Workshop on Large Scale Computations in Air Pollution Modelling, Sofia, Bulgaria, 6-10 July 1998.

  18. Getting started with digital collections scaling to fit your organization

    CERN Document Server

    Monson, D

    2017-01-01

    This easy-to-follow guide to digitization fundamentals will ensure that readers gain a solid grasp of the knowledge and resources available for getting started on their own digital collection projects.

  19. Scaling limits of a model for selection at two scales

    Science.gov (United States)

    Luo, Shishi; Mattingly, Jonathan C.

    2017-04-01

    The dynamics of a population undergoing selection is a central topic in evolutionary biology. This question is particularly intriguing in the case where selective forces act in opposing directions at two population scales. For example, a fast-replicating virus strain outcompetes slower-replicating strains at the within-host scale. However, if the fast-replicating strain causes host morbidity and is less frequently transmitted, it can be outcompeted by slower-replicating strains at the between-host scale. Here we consider a stochastic ball-and-urn process which models this type of phenomenon. We prove the weak convergence of this process under two natural scalings. The first scaling leads to a deterministic nonlinear integro-partial differential equation on the interval [0,1] with dependence on a single parameter, λ. We show that the fixed points of this differential equation are Beta distributions and that their stability depends on λ and the behavior of the initial data around 1. The second scaling leads to a measure-valued Fleming-Viot process, an infinite-dimensional stochastic process that is frequently associated with population genetics.
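    A schematic simulation of such a two-scale process can be sketched as follows (a Moran-type caricature of the ball-and-urn setup; the host counts, selection coefficients and update rules are illustrative assumptions, not the authors' exact process):

```python
import numpy as np

def two_scale_moran(n_hosts=200, n_within=50, s_within=0.1, s_between=0.1,
                    steps=20_000, seed=0):
    """Schematic two-scale simulation: within each host (urn), a Moran step
    favours the fast type; at the host level, hosts carrying fewer fast-type
    balls are more likely to be copied (a transmission advantage)."""
    rng = np.random.default_rng(seed)
    # x[i] = number of fast-type balls in host i
    x = rng.integers(0, n_within + 1, size=n_hosts)
    for _ in range(steps):
        if rng.random() < 0.5:  # within-host Moran step in a random host
            i = rng.integers(n_hosts)
            p = x[i] / n_within
            birth_fast = rng.random() < (1 + s_within) * p / ((1 + s_within) * p + (1 - p))
            death_fast = rng.random() < p
            x[i] = np.clip(x[i] + int(birth_fast) - int(death_fast), 0, n_within)
        else:  # between-host step: slower hosts are fitter and replace others
            w = 1 + s_between * (1 - x / n_within)  # host-level fitness
            parent = rng.choice(n_hosts, p=w / w.sum())
            x[rng.integers(n_hosts)] = x[parent]
    return x / n_within  # fast-type frequency across hosts

freqs = two_scale_moran()
print("mean fast-type frequency across hosts:", freqs.mean().round(3))
```

    Tracking the empirical distribution of `freqs` over many runs is the finite-population analogue of the measure-valued limits studied in the paper.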

  20. Simple scaling model for exploding pusher targets

    Energy Technology Data Exchange (ETDEWEB)

    Storm, E.K.; Larsen, J.T.; Nuckolls, J.H.; Ahlstrom, H.G.; Manes, K.R.

    1977-11-04

    A simple model has been developed which, when normalized by experiment or Lasnex calculations, can be used to scale neutron yields for variations in laser input power and pulse length, and in target radius and wall thickness. The model also elucidates some of the physical processes occurring in this regime of laser fusion experiments. Within certain limitations on incident intensity and target geometry, the model scales with experiments and calculations to within a factor of two over six decades in neutron yield.

  1. Functional Scaling of Musculoskeletal Models

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    ... specific to the patient. This is accomplished using optimisation methods to determine patient-specific joint positions and orientations, which minimise the least-squares error between model markers and the recorded markers from a motion capture experiment. Functional joint positions and joint axis ...

  2. Field collection, preservation and large scale DNA extraction ...

    African Journals Online (AJOL)

    Some genetic studies using molecular methods such as diversity assessment or marker-assisted selection require collection of a large number of samples from fields located in the vicinity or in remote areas, followed by isolation of good quality DNA in a short time span. In the present study, different tissue preservation ...

  3. Field collection, preservation and large scale DNA extraction ...

    African Journals Online (AJOL)

    2009-08-04

    Aug 4, 2009 ... Some genetic studies using molecular methods such as diversity assessment or marker-assisted selection require collection of a large number of samples from fields located in the vicinity or in remote areas, followed by isolation of good quality DNA in a short time span. In the present study, different tissue preservation ...

  4. Microsatellite diversity and broad scale geographic structure in a model legume: building a set of nested core collection for studying naturally occurring variation in Medicago truncatula

    DEFF Research Database (Denmark)

    Ronfort, Joelle; Bataillon, Thomas; Santoni, Sylvain

    2006-01-01

    Background: Exploiting genetic diversity requires previous knowledge of the extent and structure of the variation occurring in a species. Such knowledge can in turn be used to build a core-collection, i.e. a subset of accessions that aims at representing the genetic diversity of this species with a minimum of repetitiveness. We investigate the patterns of genetic diversity and population structure in a collection of 346 inbred lines representing the breadth of naturally occurring diversity in the legume plant model Medicago truncatula using 13 microsatellite loci distributed throughout the genome. Results: We confirm the uniqueness of all these genotypes and reveal a large amount of genetic diversity and allelic variation within this autogamous species. Spatial genetic correlation was found only for individuals originating from the same population ...

  5. Weak synchronization and large-scale collective oscillation in dense bacterial suspensions

    Science.gov (United States)

    Wu, Yilin

    Collective oscillatory behavior is ubiquitous in nature and it plays a vital role in many biological processes. Collective oscillations in biological multicellular systems often arise from coupling mediated by diffusive chemicals, by electrochemical mechanisms, or by biomechanical interaction between cells and their physical environment. In these examples, the phase of some oscillatory intracellular degree of freedom is synchronized. Here, in contrast, we discovered a unique 'weak synchronization' mechanism that does not require long-range coupling, nor even inherent oscillation of individual cells: We found that millions of motile cells in dense bacterial suspensions can self-organize into highly robust collective oscillatory motion, while individuals move in an erratic manner. Over large spatial scales we found that the phase of the oscillations is in fact organized into a centimeter scale traveling wave. We present a model of noisy self-propelled particles with strictly local interactions that accounts faithfully for our observations. These findings expand our knowledge of biological self-organization and reveal a new type of long-range order in active matter systems. The mechanism of collective oscillation uncovered here may inspire new strategies to control the self-organization of active matter and swarming robots. This work is supported by funding from CUHK Direct research Grants (4053019, 4053079, 4053130), the Research Grants Council of HKSAR (RGC Ref. No. CUHK 409713), and from the National Natural Science Foundation of China (NSFC 21473152).
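    The class of models referred to here (noisy self-propelled particles with strictly local interactions) can be sketched with a minimal Vicsek-type simulation; the parameters and the alignment rule below are generic illustrations, not the authors' specific model:

```python
import numpy as np

def vicsek_step(pos, theta, L, r=1.0, v=0.3, eta=0.4, rng=None):
    """One update of a minimal noisy self-propelled-particle model with
    strictly local alignment (Vicsek-type); parameters are illustrative."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(pos)
    # pairwise displacements with periodic boundaries
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < r ** 2  # includes self
    # align each particle with the mean heading of its neighbors, plus noise
    mean_sin = (neighbors * np.sin(theta)[None, :]).sum(1)
    mean_cos = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(mean_sin, mean_cos) + eta * rng.uniform(-np.pi, np.pi, n)
    pos = (pos + v * np.column_stack([np.cos(theta), np.sin(theta)])) % L
    return pos, theta

rng = np.random.default_rng(1)
L, n = 10.0, 300
pos = rng.uniform(0, L, (n, 2))
theta = rng.uniform(-np.pi, np.pi, n)
for _ in range(200):
    pos, theta = vicsek_step(pos, theta, L, rng=rng)
# global polar order parameter: 1 = fully aligned, 0 = disordered
order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
print(f"polar order parameter: {order:.2f}")
```

    In such models, global order emerges from purely local alignment; the oscillatory collective motion reported in the paper requires additional ingredients (e.g., the specific interaction noise structure) beyond this baseline.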

  6. Modeling interactome: scale-free or geometric?

    Science.gov (United States)

    Przulj, N; Corneil, D G; Jurisica, I

    2004-12-12

    Networks have been used to model many real-world phenomena to better understand the phenomena and to guide experiments in order to predict their behavior. Since incorrect models lead to incorrect predictions, it is vital to have as accurate a model as possible. As a result, new techniques and models for analyzing and modeling real-world networks have recently been introduced. One example of large and complex networks involves protein-protein interaction (PPI) networks. We analyze PPI networks of the yeast Saccharomyces cerevisiae and the fruitfly Drosophila melanogaster using a newly introduced measure of local network structure as well as the standard measures of global network structure. We examine the fit of four different network models, including Erdos-Renyi, scale-free and geometric random network models, to these PPI networks with respect to the measures of local and global network structure. We demonstrate that the currently accepted scale-free model of PPI networks fails to fit the data in several respects and show that a random geometric model provides a much more accurate model of the PPI data. We hypothesize that only the noise in these networks is scale-free. We systematically evaluate how well different network models fit the PPI networks. We show that the structure of PPI networks is better modeled by a geometric random graph than by a scale-free model. Supplementary information is available at http://www.cs.utoronto.ca/~juris/data/data/ppiGRG04/
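    The key contrast exploited here (geometric random graphs have strong local structure, while Erdos-Renyi graphs of the same edge density have almost none) can be sketched numerically; the graph size, radius and resulting density below are arbitrary illustrative choices:

```python
import numpy as np

def clustering(adj):
    """Mean local clustering coefficient of an undirected graph (bool matrix)."""
    n = len(adj)
    coeffs = []
    for i in range(n):
        nbrs = np.flatnonzero(adj[i])
        k = len(nbrs)
        if k < 2:
            continue
        links = adj[np.ix_(nbrs, nbrs)].sum() / 2  # edges among i's neighbors
        coeffs.append(links / (k * (k - 1) / 2))
    return float(np.mean(coeffs)) if coeffs else 0.0

rng = np.random.default_rng(0)
n = 500

# Geometric random graph: nodes uniform in the unit square, edge if within radius r
pts = rng.uniform(size=(n, 2))
d2 = ((pts[:, None] - pts[None, :]) ** 2).sum(-1)
geo = (d2 < 0.08 ** 2) & ~np.eye(n, dtype=bool)

# Erdos-Renyi graph with matching edge density
p = geo.sum() / (n * (n - 1))
er = np.triu(rng.random((n, n)) < p, 1)
er = er | er.T

print(f"geometric clustering: {clustering(geo):.2f}")  # high: local structure
print(f"ER clustering:        {clustering(er):.2f}")   # roughly p: low
```

    The large gap in clustering at equal density is one of the local-structure signatures the paper uses to discriminate between candidate models of PPI networks.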

  7. Attraction Based Models of Collective Motion

    OpenAIRE

    Strömbom, Daniel

    2013-01-01

    Animal groups often exhibit highly coordinated collective motion in a variety of situations: for example, bird flocks, schools of fish, a flock of sheep being herded by a dog, and highly efficient traffic on an ant trail. Although these phenomena can be observed every day all over the world, our knowledge of what rules the individuals in such groups use is very limited. Questions of this type have been studied using so-called self-propelled particle (SPP) models, most of which assume that colle...

  8. Informing species conservation at multiple scales using data collected for marine mammal stock assessments.

    Directory of Open Access Journals (Sweden)

    Alana Grech

    BACKGROUND: Conservation planning and the design of marine protected areas (MPAs) requires spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. METHODOLOGY AND RESULTS: We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially-explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km²). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. CONCLUSION/SIGNIFICANCE: In this paper we demonstrate that spatially-explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales.

  9. Informing species conservation at multiple scales using data collected for marine mammal stock assessments.

    Science.gov (United States)

    Grech, Alana; Sheppard, James; Marsh, Helene

    2011-03-28

    Conservation planning and the design of marine protected areas (MPAs) requires spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys conducted to inform mandatory stock assessments. We use a 20-year time series of systematic aerial surveys of dugong (Dugong dugon) abundance to create spatially-explicit models of dugong distribution and relative density at the scale of the coastal waters of northeast Australia (∼136,000 km²). We interpolated the corrected data at the scale of 2 km × 2 km planning units using geostatistics. Planning units were classified as low, medium, high and very high dugong density on the basis of the relative density of dugongs estimated from the models and a frequency analysis. Torres Strait was identified as the most significant dugong habitat in northeast Australia and the most globally significant habitat known for any member of the Order Sirenia. The models are used by local, State and Federal agencies to inform management decisions related to the Indigenous harvest of dugongs, gill-net fisheries and Australia's National Representative System of Marine Protected Areas. In this paper we demonstrate that spatially-explicit population models add value to data collected for stock assessments, provide a robust alternative to predictive habitat distribution models, and inform species conservation at multiple scales.
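    The classification step described above (assigning planning units to density classes via a frequency analysis) might be sketched as follows; the density values and quantile cut-offs are hypothetical placeholders, not the study's data or thresholds:

```python
import numpy as np

# Hypothetical relative dugong densities for 2 km x 2 km planning units
# (the survey-derived values of the study are not reproduced here)
rng = np.random.default_rng(42)
density = rng.lognormal(mean=0.0, sigma=1.0, size=1000)

# Classify units into four classes by density quantiles (illustrative cut-offs;
# the study derived its class boundaries from its own frequency analysis)
labels = np.array(["low", "medium", "high", "very high"])
cuts = np.quantile(density, [0.5, 0.75, 0.9])
classes = labels[np.searchsorted(cuts, density)]

for lab in labels:
    print(lab, (classes == lab).sum())
```

    Quantile-based classes have the practical advantage that each class covers a known fraction of planning units, which makes the resulting maps directly comparable across survey years.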

  10. Collective model for isovector quadrupole vibrations

    Energy Technology Data Exchange (ETDEWEB)

    Nojarov, R.; Faessler, A.

    1987-03-01

    The vibrational model is extended by introducing isospin-dependent collective coordinates, permitting a description of out-of-phase neutron-proton vibrations coupled by a density-dependent symmetry energy. The restoring force is calculated microscopically using the wavefunctions of a Woods-Saxon potential, and the coupling with three-phonon states is taken into account. The model is able to describe the available experimental data (energies and multipole mixing ratios) on low-lying 2+ states, which were observed recently in nuclei near the shell closures (124Te, 140Ba, 142Ce and 144Nd), supporting the identification of these states as isovector quadrupole vibrations and predicting such states in 126-130Te.

  11. Models for wind turbines - a collection

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, G.C.; Hansen, M.H. (eds.); Baumgart, A.

    2002-02-01

    This report is a collection of notes which were intended to be short communications. The main target of the work presented is to supply new approaches to stability investigations of wind turbines. The authors' opinion is that an efficient, systematic stability analysis cannot be performed for large systems of differential equations (i.e. the order of the differential equations > 100), because numerical 'effects' in the solution of the equations of motion as an initial value problem, eigenvalue problem or whatsoever become predominant. It is therefore necessary to find models which are reduced to the elementary coordinates but which can still describe the physical processes under consideration with sufficiently good accuracy. Such models are presented. (au)

  12. Agri-Environmental Resource Management by Large-Scale Collective Action: Determining KEY Success Factors

    Science.gov (United States)

    Uetake, Tetsuya

    2015-01-01

    Purpose: Large-scale collective action is necessary when managing agricultural natural resources such as biodiversity and water quality. This paper determines the key factors for the success of such action. Design/Methodology/Approach: This paper analyses four large-scale collective actions used to manage agri-environmental resources in Canada and…

  13. Rhythms of the collective brain: Metastable synchronization and cross-scale interactions in connected multitudes

    CERN Document Server

    Aguilera, Miguel

    2016-01-01

    Collective social events operate at many levels of organization -- from individuals to crowds -- presenting a variety of temporal and spatial scales of activity, whose causal interactions challenge our understanding of social systems. Large data sets of social media activity provide an unprecedented opportunity to investigate the processes that govern the coordination within and between those scales. Using as a case study a data set comprising 1.5 million Twitter messages of the activity around the 15M movement in Spain as an example of multitudinous self-organization, we propose a generic description of the coordination dynamics of the system based on phase-locking statistics at different frequencies using wavelet functions, identifying 8 frequency bands of entrained oscillations between 15 geographical urban nodes. We apply maximum entropy inference methods to extract Ising models capturing phase-locking activity between geographical nodes in our data at each frequency band. Inspecting the properties of the...
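    The phase-locking statistic at the heart of such an analysis can be sketched in a few lines; here a Hilbert-transform analytic signal stands in for the wavelet decomposition used in the paper, and the signals are synthetic rather than Twitter activity data:

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via FFT (equivalent to a Hilbert transform)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2  # double positive frequencies, zero negative ones
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(X * h)

def phase_locking_value(x, y):
    """PLV between two signals: 1 = perfectly phase-locked, ~0 = unrelated."""
    px = np.angle(analytic_signal(x - x.mean()))
    py = np.angle(analytic_signal(y - y.mean()))
    return abs(np.exp(1j * (px - py)).mean())

t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(0)
a = np.sin(2 * np.pi * 1.0 * t) + 0.3 * rng.standard_normal(len(t))
b = np.sin(2 * np.pi * 1.0 * t + 0.8) + 0.3 * rng.standard_normal(len(t))  # locked, lagged
c = rng.standard_normal(len(t))                                            # unrelated noise
print(f"locked pair PLV:    {phase_locking_value(a, b):.2f}")
print(f"unrelated pair PLV: {phase_locking_value(a, c):.2f}")
```

    Computing such pairwise statistics per frequency band, between geographical nodes, yields the phase-locking matrices on which the paper's maximum entropy (Ising) inference operates.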

  14. Scaling up Ecological Measurements of Coral Reefs Using Semi-Automated Field Image Collection and Analysis

    Directory of Open Access Journals (Sweden)

    Manuel González-Rivero

    2016-01-01

    Ecological measurements in marine settings are often constrained in space and time, with spatial heterogeneity obscuring broader generalisations. While advances in remote sensing, integrative modelling and meta-analysis enable generalisations from field observations, there is an underlying need for high-resolution, standardised and geo-referenced field data. Here, we evaluate a new approach aimed at optimising data collection and analysis to assess broad-scale patterns of coral reef community composition using automatically annotated underwater imagery, captured along 2 km transects. We validate this approach by investigating its ability to detect spatial (e.g., across regions) and temporal (e.g., over years) change, and by comparing automated annotation errors to those of multiple human annotators. Our results indicate that change of coral reef benthos can be captured at high resolution both spatially and temporally, with an average error below 5% among key benthic groups. Cover estimation errors using automated annotation varied between 2% and 12%, slightly larger than human errors (which varied between 1% and 7%), but small enough to detect significant changes among dominant groups. Overall, this approach allows a rapid collection of in-situ observations at larger spatial scales (km) than previously possible, and provides a pathway to link, calibrate, and validate broader analyses across even larger spatial scales (10–10,000 km²).

  15. Seamless cross-scale modeling with SCHISM

    Science.gov (United States)

    Zhang, Yinglong J.; Ye, Fei; Stanev, Emil V.; Grashorn, Sebastian

    2016-06-01

    We present a new 3D unstructured-grid model (SCHISM) which is an upgrade from an existing model (SELFE). The new advection scheme for the momentum equation includes an iterative smoother to reduce excess mass produced by higher-order kriging method, and a new viscosity formulation is shown to work robustly for generic unstructured grids and effectively filter out spurious modes without introducing excessive dissipation. A new higher-order implicit advection scheme for transport (TVD2) is proposed to effectively handle a wide range of Courant numbers as commonly found in typical cross-scale applications. The addition of quadrangular elements into the model, together with a recently proposed, highly flexible vertical grid system (Zhang et al., A new vertical coordinate system for a 3D unstructured-grid model. Ocean Model. 85, 2015), leads to model polymorphism that unifies 1D/2DH/2DV/3D cells in a single model grid. Results from several test cases demonstrate the model's good performance in the eddying regime, which presents greater challenges for unstructured-grid models and represents the last missing link for our cross-scale model. The model can thus be used to simulate cross-scale processes in a seamless fashion (i.e. from deep ocean into shallow depths).

  16. Site-Scale Saturated Zone Flow Model

    Energy Technology Data Exchange (ETDEWEB)

    G. Zyvoloski

    2003-12-17

    The purpose of this model report is to document the components of the site-scale saturated-zone flow model at Yucca Mountain, Nevada, in accordance with administrative procedure AP-SIII.10Q, "Models". This report provides validation and confidence in the flow model that was developed for site recommendation (SR) and will be used to provide flow fields in support of the Total Systems Performance Assessment (TSPA) for the License Application. The output from this report provides the flow model used in the "Site-Scale Saturated Zone Transport" report, MDL-NBS-HS-000010 Rev 01 (BSC 2003 [162419]). The Site-Scale Saturated Zone Transport model then provides output to the SZ Transport Abstraction Model (BSC 2003 [164870]). In particular, the output from the SZ site-scale flow model is used to simulate the groundwater flow pathways and radionuclide transport to the accessible environment for use in the TSPA calculations. Since the development and calibration of the saturated-zone flow model, more data have been gathered for use in model validation and confidence building, including new water-level data from Nye County wells, single- and multiple-well hydraulic testing data, and new hydrochemistry data. In addition, a new hydrogeologic framework model (HFM), which incorporates Nye County well lithology, also provides geologic data for corroboration and confidence in the flow model. The intended use of this work is to provide a flow model that generates flow fields to simulate radionuclide transport in saturated porous rock and alluvium under natural or forced-gradient flow conditions. The flow model simulations are completed using the three-dimensional (3-D), finite-element, flow, heat, and transport computer code, FEHM Version (V) 2.20 (software tracking number (STN): 10086-2.20-00; LANL 2003 [161725]). Concurrently, process-level transport model and methodology for calculating radionuclide transport in the saturated zone at Yucca...

  17. Sub-Grid Scale Plume Modeling

    Directory of Open Access Journals (Sweden)

    Greg Yarwood

    2011-08-01

    Multi-pollutant chemical transport models (CTMs) are being routinely used to predict the impacts of emission controls on the concentrations and deposition of primary and secondary pollutants. While these models have a fairly comprehensive treatment of the governing atmospheric processes, they are unable to correctly represent processes that occur at very fine scales, such as the near-source transport and chemistry of emissions from elevated point sources, because of their relatively coarse horizontal resolution. Several different approaches have been used to address this limitation, such as using fine grids, adaptive grids, hybrid modeling, or an embedded sub-grid scale plume model, i.e., plume-in-grid (PinG) modeling. In this paper, we first discuss the relative merits of these various approaches used to resolve sub-grid scale effects in grid models, and then focus on PinG modeling, which has been very effective in addressing the problems listed above. We start with a history and review of PinG modeling from its initial applications for ozone modeling in the Urban Airshed Model (UAM) in the early 1980s using a relatively simple plume model, to more sophisticated and state-of-the-science plume models that include a full treatment of gas-phase, aerosol, and cloud chemistry, embedded in contemporary models such as CMAQ, CAMx, and WRF-Chem. We present examples of some typical results from PinG modeling for a variety of applications, discuss the implications of PinG on model predictions of source attribution, and discuss possible future developments and applications for PinG modeling.

  18. Community structure and scale-free collections of Erdős-Rényi graphs.

    Science.gov (United States)

    Seshadhri, C; Kolda, Tamara G; Pinar, Ali

    2012-05-01

    Community structure plays a significant role in the analysis of social networks and similar graphs, yet this structure is little understood and not well captured by most models. We formally define a community to be a subgraph that is internally highly connected and has no deeper substructure. We use tools of combinatorics to show that any such community must contain a dense Erdős-Rényi (ER) subgraph. Based on mathematical arguments, we hypothesize that any graph with a heavy-tailed degree distribution and community structure must contain a scale-free collection of dense ER subgraphs. These theoretical observations are corroborated well by empirical evidence. From this, we propose the Block Two-Level Erdős-Rényi (BTER) model, and demonstrate that it accurately captures the observable properties of many real-world social networks.
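    The hypothesized structure (a scale-free collection of dense ER subgraphs) can be sketched as follows; this covers only the within-block phase of a BTER-like construction (the full BTER model adds a second ER phase to match the excess degree distribution), and all sizes and probabilities here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Block sizes drawn from a heavy-tailed (discrete power-law) distribution:
# a "scale-free collection" of communities
sizes = rng.zipf(2.0, size=300)
sizes = sizes[(sizes >= 2) & (sizes <= 50)]  # drop singletons, cap the tail
n = int(sizes.sum())

# Each block is a dense Erdos-Renyi subgraph on its own nodes
edges = set()
start = 0
for s in sizes:
    block = np.arange(start, start + s)
    for i in range(s):
        for j in range(i + 1, s):
            if rng.random() < 0.8:  # dense within-block connectivity
                edges.add((int(block[i]), int(block[j])))
    start += s

deg = np.zeros(n, int)
for i, j in edges:
    deg[i] += 1
    deg[j] += 1
print(f"{n} nodes, {len(edges)} edges, max degree {deg.max()}")
```

    Because every edge lies inside a dense block, each community in this sketch is internally highly connected, matching the paper's definition; linking blocks to each other is what the second ER phase of BTER handles.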

  19. A catchment scale water balance model for FIFE

    Science.gov (United States)

    Famiglietti, J. S.; Wood, E. F.; Sivapalan, M.; Thongs, D. J.

    1992-01-01

    A catchment scale water balance model is presented and used to predict evaporation from the King's Creek catchment at the First ISLSCP Field Experiment site on the Konza Prairie, Kansas. The model incorporates spatial variability in topography, soils, and precipitation to compute the land surface hydrologic fluxes. A network of 20 rain gages was employed to measure rainfall across the catchment in the summer of 1987. These data were spatially interpolated and used to drive the model during storm periods. During interstorm periods the model was driven by the estimated potential evaporation, which was calculated using net radiation data collected at site 2. Model-computed evaporation is compared to that observed, both at site 2 (grid location 1916-BRS) and the catchment scale, for the simulation period from June 1 to October 9, 1987.

  20. Collective behavior of large-scale neural networks with GPU acceleration.

    Science.gov (United States)

    Qu, Jingyi; Wang, Rubin

    2017-12-01

    In this paper, the collective behaviors of a small-world neuronal network motivated by the anatomy of a mammalian cortex, based on both the Izhikevich model and the Rulkov model, are studied. The Izhikevich model not only reproduces the rich behaviors of biological neurons but also has only two equations and one nonlinear term. The Rulkov model is a set of difference equations that generate a sequence of membrane potential samples at discrete moments of time, improving computational efficiency. Both models are therefore suitable for the construction of large-scale neural networks. By varying key parameters, such as the connection probability and the number of nearest neighbors of each node, the coupled neurons exhibit a variety of temporal and spatial characteristics. It is demonstrated that the GPU implementation achieves increasing acceleration over the CPU as the number of neurons and iterations grows. These two small-world network models and GPU acceleration offer a new opportunity to reproduce real biological networks containing large numbers of neurons.
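For reference, the Izhikevich model's two equations and one nonlinear term can be stepped with simple Euler integration. The parameters below are the standard regular-spiking values from Izhikevich's published model; the step size and drive current are illustrative assumptions, not the paper's settings:

```python
def izhikevich_step(v, u, I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5):
    """One Euler step of the Izhikevich neuron (regular-spiking
    parameters); returns the updated (v, u, spiked)."""
    v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
    u += dt * a * (b * v - u)
    if v >= 30.0:              # spike: reset potential, bump recovery
        return c, u + d, True
    return v, u, False

# Drive one neuron with constant current and count spikes over 1 s
v, u, spikes = -65.0, -13.0, 0
for _ in range(2000):          # 2000 steps * 0.5 ms = 1000 ms
    v, u, fired = izhikevich_step(v, u, I=10.0)
    spikes += fired
```

The update is the same cheap arithmetic for every neuron, which is why networks of such units map naturally onto GPU threads.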

  1. Phenomenology of Low Quantum Gravity Scale Models

    CERN Document Server

    Benakli, Karim

    1999-01-01

    We study some phenomenological implications of models where the scale of quantum gravity effects lies much below the four-dimensional Planck scale. These models arise from M-theory vacua where either the internal space volume is large or the string coupling is very small. We provide a critical analysis of ways to unify electroweak, strong and gravitational interactions in M-theory. We discuss the relations between different scales in two M-vacua: Type I strings and Hořava-Witten supergravity models. The latter allows possibilities for an eleven-dimensional scale at TeV energies with one large dimension below separating our four-dimensional world from a hidden one. Different mechanisms for breaking supersymmetry (gravity mediated, gauge mediated and Scherk-Schwarz mechanisms) are discussed in this framework. Some phenomenological issues such as dark matter (with masses that may vary in time), origin of neutrino masses and axion scale are discussed. We suggest that these are indications that the string scal...

  2. Scaling model for symmetric star polymers

    Science.gov (United States)

    Ramachandran, Ram; Rai, Durgesh K.; Beaucage, Gregory

    2010-03-01

    Neutron scattering data from symmetric star polymers with six poly(urethane-ether) arms, chemically bonded to a C-60 molecule, are fitted using a new scaling model and scattering function. The new scaling function can describe both good-solvent and theta-solvent conditions as well as resolve deviations in chain conformation due to steric interactions between star arms. The scaling model quantifies the distinction between invariant topological features for this star polymer and chain tortuosity, which changes with goodness of solvent and steric interaction. Beaucage G, Phys. Rev. E 70, 031401 (2004); Ramachandran R, et al., Macromolecules 41, 9802-9806 (2008); Ramachandran R, et al., Macromolecules 42, 4746-4750 (2009); Rai DK, et al., Europhys. Lett. (submitted 10/2009).

  3. Cooperation, collective action, and the archeology of large-scale societies.

    Science.gov (United States)

    Carballo, David M; Feinman, Gary M

    2016-11-01

    Archeologists investigating the emergence of large-scale societies in the past have renewed interest in examining the dynamics of cooperation as a means of understanding societal change and organizational variability within human groups over time. Unlike earlier approaches to these issues, which used models designated voluntaristic or managerial, contemporary research articulates more explicitly with frameworks for cooperation and collective action used in other fields, thereby facilitating empirical testing through better definition of the costs, benefits, and social mechanisms associated with success or failure in coordinated group action. Current scholarship is nevertheless bifurcated along lines of epistemology and scale, which is understandable but problematic for forging a broader, more transdisciplinary field of cooperation studies. Here, we point to some areas of potential overlap by reviewing archeological research that places the dynamics of social cooperation and competition in the foreground of the emergence of large-scale societies, which we define as those having larger populations, greater concentrations of political power, and higher degrees of social inequality. We focus on key issues involving the communal-resource management of subsistence and other economic goods, as well as the revenue flows that undergird political institutions. Drawing on archeological cases from across the globe, with greater detail from our area of expertise in Mesoamerica, we offer suggestions for strengthening analytical methods and generating more transdisciplinary research programs that address human societies across scalar and temporal spectra. © 2016 Wiley Periodicals, Inc.

  4. Landscape modelling at Regional to Continental scales

    Science.gov (United States)

    Kirkby, M. J.

    Although most work on simulating landscape evolution has focused at scales of about 1 ha, there are still limitations, particularly in understanding the links between hillslope process rates and climate, soils and channel initiation. However, the need for integration with GCM outputs and with Continental Geosystems now imposes an urgent need for scaling up to Regional and Continental scales. This is reinforced by a need to incorporate estimates of soil erosion and desertification rates into national and supra-national policy. Relevant time-scales range from decadal to geological. Approaches at these regional to continental scales are critical to a fuller collaboration between geomorphologists and others interested in Continental Geosystems. Two approaches to the problem of scaling up are presented here for discussion. The first (MEDRUSH) is to embed representative hillslope flow strips into sub-catchments within a larger catchment of up to 5,000 km2. The second is to link one-dimensional models of SVAT type within DEMs at up to global scales (CSEP/SEDWEB). The MEDRUSH model is being developed as part of the EU Desertification Programme (MEDALUS project), primarily for semi-natural vegetation in southern Europe over time spans of up to 100 years. Catchments of up to 2500 km2 are divided into 50-200 sub-catchments on the basis of flow paths derived from DEMs with a horizontal resolution of 50 m or better. Within each sub-catchment a representative flow strip is selected, and hydrology, sediment transport and vegetation change are simulated in detail for the flow strip using a 1-hour time step. Changes within each flow strip are transferred back to the appropriate sub-catchment, and flows of water and sediment are then routed through the channel network, generating changes in flood plain morphology.

  5. Automation on the generation of genome-scale metabolic models.

    Science.gov (United States)

    Reyes, R; Gamermann, D; Montagud, A; Fuente, D; Triana, J; Urchueguía, J F; de Córdoba, P Fernández

    2012-12-01

    Nowadays, the reconstruction of genome-scale metabolic models is a non-automated, interactive process based on decision making. This lengthy process usually requires a full year of one person's work to satisfactorily collect, analyze, and validate the list of all metabolic reactions present in a specific organism. To write this list, one has to go manually through a huge amount of genomic, metabolomic, and physiological information. Currently, there is no optimal algorithm that allows one to automatically go through all this information and generate the models taking into account probabilistic criteria of uniqueness and completeness that a biologist would consider. This work presents the automation of a methodology for the reconstruction of genome-scale metabolic models for any organism. The methodology is the automated version of the steps implemented manually for the reconstruction of the genome-scale metabolic model of a photosynthetic organism, Synechocystis sp. PCC6803. The steps for the reconstruction are implemented in a computational platform (COPABI) that generates the models from the probabilistic algorithms that have been developed. To validate the robustness of the developed algorithm, metabolic models of several organisms generated by the platform have been studied together with published models that have been manually curated. Network properties of the models, such as connectivity and average shortest path, have been compared and analyzed.

  6. 77 FR 68104 - Proposed Information Collection; Comment Request; Socio-Economic Profile of Small-Scale...

    Science.gov (United States)

    2012-11-15

    ... socio-economic data about small scale fishermen and seafood dealers operating in the U.S. Caribbean. The... information sought will be collected via in- person, telephone and mail surveys. III. Data OMB Control Number... information; (c) ways to enhance the quality, utility, and clarity of the information to be collected; and (d...

  7. Evaluation of Icing Scaling on Swept NACA 0012 Airfoil Models

    Science.gov (United States)

    Tsao, Jen-Ching; Lee, Sam

    2012-01-01

    Icing scaling tests in the NASA Glenn Icing Research Tunnel (IRT) were performed on swept-wing models using existing recommended scaling methods that were originally developed for straight wings. Some needed modifications of the stagnation-point local collection efficiency (i.e., β0) calculation and the corresponding convective heat transfer coefficient for swept NACA 0012 airfoil models were studied and reported in 2009, and those correlations are used in the current study. The reference tests used a 91.4-cm chord, 152.4-cm span, adjustable-sweep airfoil model of NACA 0012 profile at velocities of 100 and 150 knot and MVD of 44 and 93 µm. The scale-to-reference model size ratio was 1:2.4. All tests were conducted at 0° angle of attack (AoA) and 45° sweep angle. Ice shape comparison results are presented for stagnation-point freezing fractions in the range of 0.4 to 1.0. Preliminary results showed that good scaling was achieved for the conditions tested by using the modified scaling methods developed for swept-wing icing.

  8. Towards dynamic genome-scale models.

    Science.gov (United States)

    Gilbert, David; Heiner, Monika; Jayaweera, Yasoda; Rohr, Christian

    2017-10-13

    The analysis of the dynamic behaviour of genome-scale models of metabolism (GEMs) currently presents considerable challenges because of the difficulties of simulating such large and complex networks. Bacterial GEMs can comprise about 5000 reactions and metabolites, and encode a huge variety of growth conditions; such models cannot be used without sophisticated tool support. This article is intended to aid modellers, both specialist and non-specialist in computerized methods, to identify and apply a suitable combination of tools for the dynamic behaviour analysis of large-scale metabolic designs. We describe a methodology and related workflow based on publicly available tools to profile and analyse whole-genome-scale biochemical models. We use an efficient approximative stochastic simulation method to overcome problems associated with the dynamic simulation of GEMs. In addition, we apply simulative model checking using temporal logic property libraries, clustering and data analysis, over time series of reaction rates and metabolite concentrations. We extend this to consider the evolution of reaction-oriented properties of subnets over time, including dead subnets and functional subsystems. This enables the generation of abstract views of the behaviour of these models, which can be large (up to whole-genome in size) and therefore impractical to analyse informally by eye. We demonstrate our methodology by applying it to a reduced model of the whole-genome metabolism of Escherichia coli K-12 under different growth conditions. The overall context of our work is in the area of model-based design methods for metabolic engineering and synthetic biology. © The Author 2017. Published by Oxford University Press.

  9. Drift-Scale THC Seepage Model

    Energy Technology Data Exchange (ETDEWEB)

    C.R. Bryan

    2005-02-17

    The purpose of this report (REV04) is to document the thermal-hydrologic-chemical (THC) seepage model, which simulates the composition of waters that could potentially seep into emplacement drifts, and the composition of the gas phase. The THC seepage model is processed and abstracted for use in the total system performance assessment (TSPA) for the license application (LA). This report has been developed in accordance with "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Post-Processing Analysis for THC Seepage) Report Integration" (BSC 2005 [DIRS 172761]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this report. The plan for validation of the models documented in this report is given in Section 2.2.2, "Model Validation for the DS THC Seepage Model," of the TWP. The TWP (Section 3.2.2) identifies Acceptance Criteria 1 to 4 for "Quantity and Chemistry of Water Contacting Engineered Barriers and Waste Forms" (NRC 2003 [DIRS 163274]) as being applicable to this report; however, in variance to the TWP, Acceptance Criterion 5 has also been determined to be applicable, and is addressed, along with the other Acceptance Criteria, in Section 4.2 of this report. Also, three FEPs not listed in the TWP (2.2.10.01.0A, 2.2.10.06.0A, and 2.2.11.02.0A) are partially addressed in this report, and have been added to the list of excluded FEPs in Table 6.1-2. This report has been developed in accordance with LP-SIII.10Q-BSC, "Models". This report documents the THC seepage model and a derivative used for validation, the Drift Scale Test (DST) THC submodel. The THC seepage model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral

  10. Original article Validation of the Polish version of the Collective Self-Esteem Scale

    OpenAIRE

    Róża Bazińska

    2015-01-01

    Background The aim of this article is to present research on the validity and reliability of the Collective Self-Esteem Scale (CSES) for the Polish population. The CSES is a measure of individual differences in collective self-esteem, understood as the global evaluation of one’s own social (collective) identity. Participants and procedure Participants from two samples (n = 466 and n = 1,009) completed a paper-pencil set of questionnaires which contained the CSES and the Ro...

  11. Scale Anchoring with the Rasch Model.

    Science.gov (United States)

    Wyse, Adam E

    Scale anchoring is a method to provide additional meaning to particular scores at different points along a score scale by identifying representative items associated with the particular scores. These items are then analyzed to write statements of what types of performance can be expected of a person with the particular scores to help test takers and other stakeholders better understand what it means to achieve the different scores. This article provides simple formulas that can be used to identify possible items to serve as scale anchors with the Rasch model. Specific attention is given to practical considerations and challenges that may be encountered when applying the formulas in different contexts. An illustrative example using data from a medical imaging certification program demonstrates how the formulas can be applied in practice.
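The article's specific formulas are not reproduced in the abstract. As a hedged sketch of the general idea, the standard Rasch response probability can be combined with a hypothetical RP67-style selection rule; the threshold and function names below are assumptions for illustration:

```python
import math

def rasch_prob(theta, b):
    """Rasch model: probability that a person with ability theta
    answers an item of difficulty b correctly."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def anchor_items(item_difficulties, theta_anchor, rp=0.67):
    """Hypothetical selection rule: flag items that a person at the
    anchor score answers correctly with probability >= rp, i.e. items
    representative of performance at that score point."""
    return [i for i, b in enumerate(item_difficulties)
            if rasch_prob(theta_anchor, b) >= rp]
```

Items flagged for a given anchor score would then be reviewed by content experts to write the performance-level statements the abstract describes.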

  12. How scale-free networks and large-scale collective cooperation emerge in complex homogeneous social systems.

    Science.gov (United States)

    Li, Wei; Zhang, Xiaoming; Hu, Gang

    2007-10-01

    We study how heterogeneous degree distributions and large-scale collective cooperation in social networks emerge in complex homogeneous systems through a simple local rule: learning from the best in both strategy selection and linking choices. The prisoner's dilemma game is used as the local dynamics. We show that the social structure may evolve into single-scale, broad-scale, and scale-free (SF) degree distributions for different control parameters. In particular, in a parameter region of relatively strong selfishness, the SF property can self-organize in social networks through dynamic evolution, and these SF structures help the whole community of nodes reach a high level of cooperation even when individuals have a high selfish intention.
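The local rule "learning from the best" can be sketched as a synchronous imitation update in which each node copies the strategy of the highest-payoff player in its neighbourhood. The data layout and names are illustrative assumptions; the model in the paper additionally rewires links by the same rule:

```python
def imitate_best(strategies, payoffs, neighbors):
    """'Learning from the best': each node adopts the strategy of the
    highest-payoff player among itself and its neighbours."""
    new = []
    for i, nbrs in enumerate(neighbors):
        candidates = [i] + list(nbrs)
        best = max(candidates, key=lambda j: payoffs[j])
        new.append(strategies[best])
    return new

# One synchronous update on a 3-node path graph
new = imitate_best(['C', 'D', 'C'], [3.0, 1.0, 2.0], [[1], [0, 2], [1]])
```

Here the defector (node 1) sees a cooperating neighbour with a higher payoff and switches, illustrating how local imitation can spread cooperation.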

  13. Genome scale metabolic modeling of cancer

    DEFF Research Database (Denmark)

    Nilsson, Avlant; Nielsen, Jens

    2017-01-01

    Cancer cells reprogram metabolism to support rapid proliferation and survival. Energy metabolism is particularly important for growth, and genes encoding enzymes involved in energy metabolism are frequently altered in cancer cells. A genome-scale metabolic model (GEM) is a mathematical formalization of metabolism which allows simulation and hypothesis testing of metabolic strategies. It has successfully been applied to many microorganisms and is now used to study cancer metabolism. Generic models of human metabolism have been reconstructed based on the existence of metabolic genes in the human genome. Cancer-specific models of metabolism have also been generated by reducing the number of reactions in the generic model based on high-throughput expression data, e.g. transcriptomics and proteomics. Targets for drugs and biomarkers for diagnostics have been identified using these models. They have also...

  14. Pore-Scale Model for Microbial Growth

    Science.gov (United States)

    Tartakovsky, G.; Tartakovsky, A. M.; Scheibe, T. D.

    2011-12-01

    A Lagrangian particle model based on smoothed particle hydrodynamics (SPH) is used to simulate pore-scale flow, reactive transport, and biomass growth controlled by the mixing of an electron donor and acceptor in a microfluidic porous cell. The experimental results described in Ch. Zhang et al., "Effects of pore-scale heterogeneity and transverse mixing on bacterial growth in porous media," were used for this study. The model represents the homogeneous pore structure of a uniform array of cylindrical posts with microbes uniformly distributed on the grain surfaces. Each of the two solutes (electron donor and electron acceptor) enters the domain unmixed through separate inlets. In the model, pair-wise particle-particle interactions are used to simulate interactions within the biomass, as well as both biomass-fluid and biomass-grain interactions. The biomass growth rate is described by double Monod kinetics. For the set of parameters used in the simulations, the model predicts that: 1) biomass grows in the shape of bridges connecting soil grains, oriented in the direction of flow so as to minimize resistance to the fluid flow; and 2) biomass growth occurs only in the mixing zone. Using parameters available in the literature, the biomass growth model agrees qualitatively with the experimental results. In order to achieve quantitative agreement, model calibration is required.
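Double Monod kinetics, as named in the abstract, makes the growth rate jointly limited by the electron donor and the electron acceptor. A minimal sketch, with generic parameter names rather than the study's calibrated values:

```python
def double_monod(mu_max, S_donor, S_acceptor, K_donor, K_acceptor):
    """Double Monod kinetics: specific growth rate limited by both
    the electron donor and the electron acceptor concentrations."""
    return (mu_max
            * S_donor / (K_donor + S_donor)
            * S_acceptor / (K_acceptor + S_acceptor))
```

The product form means the rate vanishes whenever either substrate is absent, which is why growth in the simulations concentrates in the zone where the two unmixed solutes meet.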

  15. Models for wind turbines - a collection

    DEFF Research Database (Denmark)

    2002-01-01

    This report is a collection of notes which were intended to be short communications. The main target of the work presented is to supply new approaches to stability investigations of wind turbines. The author's opinion is that an efficient, systematic stability analysis cannot be performed for large...

  16. Anomalous scalings in differential models of turbulence

    CERN Document Server

    Thalabard, Simon; Galtier, Sebastien; Sergey, Medvedev

    2015-01-01

    Differential models for hydrodynamic, passive-scalar and wave turbulence, given by nonlinear first- and second-order evolution equations for the energy spectrum in $k$-space, were analysed. Both types of models predict the formation of anomalous transient power-law spectra. The second-order models were analysed in terms of self-similar solutions of the second kind, and a phenomenological formula for the anomalous spectrum exponent was constructed using numerics for a broad range of parameters covering all known physical examples. The first-order models were examined analytically, including an analytical prediction for the anomalous exponent of the transient spectrum and a description of the formation of the Kolmogorov-type spectrum as a wave reflected from the dissipative scale back into the inertial range. The latter behaviour was linked to pre-shock/shock singularities similar to the ones arising in the Burgers equation. Existence of the transient anomalous scaling and the reflection-wave scenario are argu...

  17. Multi-scale modeling of composites

    DEFF Research Database (Denmark)

    Azizi, Reza

    behavior and the trapped free energy in the material, in addition to the plastic behavior in terms of the anisotropic development of the yield surface. It is shown that a generalization of Hill's anisotropic yield criterion can be used to model the Bauschinger effect, in addition to the pressure and size... is analyzed using a Representative Volume Element (RVE), while the homogenized data are saved and used as an input to the macro scale. The dependence of fiber size is analyzed using a higher order plasticity theory, where the free energy is stored due to plastic strain gradients at the micron scale. Hill... dependence. The development of the macroscopic yield surface upon deformation is investigated in terms of the anisotropic hardening (expansion of the yield surface) and kinematic hardening (translation of the yield surface). The kinematic hardening law is based on trapped free energy in the material due...

  18. Three Proposed Data Collection Models for Annual Inventories

    Science.gov (United States)

    Greg Reams; Bill Smith; Bill Bechtold; Ron McRoberts; Frank Spirek; Chuck Liff

    2005-01-01

    Three competing data collection models for the U.S. Department of Agriculture Forest Service Forest Inventory and Analysis (FIA) program's annual inventories are presented. We show that in the presence of panel creep, the model now in place does not meet requirements of an annual inventory system mandated by the 1998 Farm Bill. Two data-collection models that use...

  19. Multi-scale Modelling of Segmentation

    DEFF Research Database (Denmark)

    Hartmann, Martin; Lartillot, Olivier; Toiviainen, Petri

    2016-01-01

    While listening to music, people often unwittingly break down musical pieces into constituent chunks such as verses and choruses. Music segmentation studies have suggested that some consensus regarding boundary perception exists, despite individual differences. However, neither the effects of experimental task (i.e., real-time vs. annotated segmentation), nor of musicianship on boundary perception are clear. Our study assesses musicianship effects and differences between segmentation tasks. We conducted a real-time experiment to collect segmentations by musicians and nonmusicians from nine musical... indication density, although this might be contingent on stimuli and other factors. In line with other studies, no musicianship effects were found: our results showed high agreement between groups and similar inter-subject correlations. Also consistent with previous work, time scales between one and two...

  20. Multi-scale modelling and dynamics

    Science.gov (United States)

    Müller-Plathe, Florian

    Moving from a fine-grained particle model to one of lower resolution leads, with few exceptions, to an acceleration of molecular mobility: higher diffusion coefficients, lower viscosities and more. On top of that, the level of acceleration is often different for different dynamical processes as well as for different state points. While the reasons are often understood, the fact that coarse-graining almost necessarily introduces unpredictable acceleration of the molecular dynamics severely limits its usefulness as a predictive tool. There are several attempts under way to remedy these shortcomings of coarse-grained models. On the one hand, we follow bottom-up approaches, which attempt, already when the coarse-graining scheme is conceived, to estimate its impact on the dynamics. This is done by excess-entropy scaling. On the other hand, we also pursue a top-down development. Here we start with a very coarse-grained model (dissipative particle dynamics) which in its native form produces qualitatively wrong polymer dynamics, as its molecules cannot entangle. This model is modified by additional temporary bonds, so-called slip springs, to repair this defect. As a result, polymer melts and solutions described by the slip-spring DPD model show correct dynamical behaviour. Read more: "Excess entropy scaling for the segmental and global dynamics of polyethylene melts", E. Voyiatzis, F. Müller-Plathe, and M.C. Böhm, Phys. Chem. Chem. Phys. 16, 24301-24311 (2014). [DOI: 10.1039/C4CP03559C] "Recovering the Reptation Dynamics of Polymer Melts in Dissipative Particle Dynamics Simulations via Slip-Springs", M. Langeloth, Y. Masubuchi, M. C. Böhm, and F. Müller-Plathe, J. Chem. Phys. 138, 104907 (2013). [DOI: 10.1063/1.4794156].

  1. Original article Validation of the Polish version of the Collective Self-Esteem Scale

    Directory of Open Access Journals (Sweden)

    Róża Bazińska

    2015-07-01

    Background The aim of this article is to present research on the validity and reliability of the Collective Self-Esteem Scale (CSES) for the Polish population. The CSES is a measure of individual differences in collective self-esteem, understood as the global evaluation of one’s own social (collective) identity. Participants and procedure Participants from two samples (n = 466 and n = 1,009) completed a paper-and-pencil set of questionnaires which contained the CSES and the Rosenberg Self-Esteem Scale (RSES), and subsets of participants completed scales related to a sense of belonging, well-being and psychological distress (anxiety and depression). Results Like the original version, the Polish version of the CSES comprises 16 items which form the four dimensions of collective self-esteem: Public collective self-esteem, Private collective self-esteem, Membership esteem and Importance of Identity. The results confirm the four-factor structure of the Polish version of the CSES, support the whole Polish version of the CSES as well as its subscales, which show satisfactory reliability and stability, and provide initial evidence of construct validity. Conclusions As the results of the study indicate, the Polish version of the CSES is a valid and reliable self-report measure for assessing the global self-esteem derived from membership of a group, and has proved useful in the Polish context.

  2. MODELLING FINE SCALE MOVEMENT CORRIDORS FOR THE TRICARINATE HILL TURTLE

    Directory of Open Access Journals (Sweden)

    I. Mondal

    2016-06-01

    Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolation of populations. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation, preserving dispersal patterns to maintain genetic diversity. Circuit theory is a novel tool to model habitat connectivity, as it considers habitat as an electronic circuit board and species movement as a certain amount of current moving through different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on wide-ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata), focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127–175 mm) and home range (8000–15000 m2), with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1:2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using the thread-spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.

  3. Modelling Fine Scale Movement Corridors for the Tricarinate Hill Turtle

    Science.gov (United States)

    Mondal, I.; Kumar, R. S.; Habib, B.; Talukdar, G.

    2016-06-01

    Habitat loss and the destruction of habitat connectivity can lead to species extinction by isolation of population. Identifying important habitat corridors to enhance habitat connectivity is imperative for species conservation by preserving dispersal pattern to maintain genetic diversity. Circuit theory is a novel tool to model habitat connectivity as it considers habitat as an electronic circuit board and species movement as a certain amount of current moving around through different resistors in the circuit. Most studies involving circuit theory have been carried out at small scales on large ranging animals like wolves or pumas, and more recently on tigers. This calls for a study that tests circuit theory at a large scale to model micro-scale habitat connectivity. The present study on a small South-Asian geoemydid, the Tricarinate Hill-turtle (Melanochelys tricarinata), focuses on habitat connectivity at a very fine scale. The Tricarinate has a small body size (carapace length: 127-175 mm) and home range (8000-15000 m2), with very specific habitat requirements and movement patterns. We used very high resolution Worldview satellite data and extensive field observations to derive a model of landscape permeability at 1 : 2,000 scale to suit the target species. Circuit theory was applied to model potential corridors between core habitat patches for the Tricarinate Hill-turtle. The modelled corridors were validated by extensive ground tracking data collected using thread spool technique and found to be functional. Therefore, circuit theory is a promising tool for accurately identifying corridors, to aid in habitat studies of small species.
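The resistor-network analogy can be made concrete with a graph Laplacian: inject unit current at one habitat patch, draw it out at another, and solve for node voltages; the voltage drop is the effective resistance between the patches (low values indicate good connectivity, and edge currents trace likely movement routes). This is a generic sketch with a toy conductance matrix, not the study's GIS workflow:

```python
import numpy as np

def effective_resistance(conductance, source, sink):
    """Treat the landscape as a resistor network: build the graph
    Laplacian L from pairwise conductances, solve L v = i with unit
    current injected at `source` and drawn at `sink`, and return the
    voltage drop, which equals the effective resistance."""
    L = np.diag(conductance.sum(axis=1)) - conductance
    i = np.zeros(len(L))
    i[source], i[sink] = 1.0, -1.0
    v = np.linalg.pinv(L) @ i          # pseudo-inverse: L is singular
    return v[source] - v[sink]

# Three patches in a line with unit-conductance links:
# two unit resistors in series between patch 0 and patch 2
C = np.array([[0., 1., 0.],
              [1., 0., 1.],
              [0., 1., 0.]])
r_eff = effective_resistance(C, source=0, sink=2)
```

Adding a parallel corridor between the end patches would lower the effective resistance, which is how circuit theory rewards redundant connections in a way that least-cost-path analysis does not.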

  4. Constructing Multidatabase Collections Using Extended ODMG Object Model

    Directory of Open Access Journals (Sweden)

    Adrian Skehill; Mark Roantree

    1999-11-01

    Full Text Available Collections are an important feature in database systems. They provide us with the ability to group objects of interest together, and then to manipulate them in the required fashion. The OASIS project is focused on the construction of a multidatabase prototype which uses the ODMG model as a canonical model. As part of this work we have extended the base model to provide a more powerful collection mechanism, and to permit the construction of a federated collection, a collection of heterogeneous objects taken from distributed data sources.

  5. Modelling landscape evolution at the flume scale

    Science.gov (United States)

    Cheraghi, Mohsen; Rinaldo, Andrea; Sander, Graham C.; Barry, D. Andrew

    2017-04-01

    The ability of a large-scale Landscape Evolution Model (LEM) to simulate the soil surface morphological evolution as observed in a laboratory flume (1-m × 2-m surface area) was investigated. The soil surface was initially smooth, and was subjected to heterogeneous rainfall in an experiment designed to avoid rill formation. Low-cohesive fine sand was placed in the flume while the slope and relief height were 5 % and 20 cm, respectively. Non-uniform rainfall with an average intensity of 85 mm h-1 and a standard deviation of 26 % was applied to the sediment surface for 16 h. We hypothesized that the complex overland water flow can be represented by a drainage discharge network, which was calculated via the micro-morphology and the rainfall distribution. Measurements included high resolution Digital Elevation Models that were captured at intervals during the experiment. The calibrated LEM captured the migration of the main flow path from the low precipitation area into the high precipitation area. Furthermore, both model and experiment showed a steep transition zone in soil elevation that moved upstream during the experiment. We conclude that the LEM is applicable under non-uniform rainfall and in the absence of surface incisions, thereby extending its applicability beyond that shown in previous applications. Keywords: Numerical simulation, Flume experiment, Particle Swarm Optimization, Sediment transport, River network evolution model.
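The drainage-discharge network that the study derives from micro-morphology and the rainfall field can be illustrated with a simple rainfall-weighted D8 flow-accumulation routine (a hypothetical sketch; the actual LEM's routing is more sophisticated):

```python
import numpy as np

def d8_accumulation(dem, rain):
    """Rainfall-weighted D8 flow accumulation (illustrative sketch).
    Each cell sends all of its accumulated water to its steepest downslope
    neighbour; cells are processed from highest to lowest elevation."""
    ny, nx = dem.shape
    acc = rain.astype(float).copy()
    order = np.argsort(dem, axis=None)[::-1]        # highest cells first
    for idx in order:
        i, j = divmod(idx, nx)
        best, bi, bj = 0.0, -1, -1
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                ni, nj = i + di, j + dj
                if (di or dj) and 0 <= ni < ny and 0 <= nj < nx:
                    drop = (dem[i, j] - dem[ni, nj]) / np.hypot(di, dj)
                    if drop > best:
                        best, bi, bj = drop, ni, nj
        if bi >= 0:                                 # not a pit
            acc[bi, bj] += acc[i, j]
    return acc

# Tilted plane: every cell drains straight down toward the bottom row
dem = np.array([[3., 3, 3], [2, 2, 2], [1, 1, 1], [0, 0, 0]])
rain = np.ones_like(dem)
acc = d8_accumulation(dem, rain)
```

On this toy surface every column carries all of its upslope rainfall to the bottom row, which is exactly the "discharge network" behaviour the model exploits.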

  6. A Computational Model of Crowds for Collective Intelligence

    OpenAIRE

    Prpic, John; Jackson, Piper; Nguyen, Thai

    2014-01-01

    In this work, we present a high-level computational model of IT-mediated crowds for collective intelligence. We introduce the Crowd Capital perspective as an organizational-level model of collective intelligence generation from IT-mediated crowds, and specify a computational system including agents, forms of IT, and organizational knowledge.

  7. Scaling in a Multispecies Network Model Ecosystem

    CERN Document Server

    Solé, Ricard V.; Alonso, David; McKane, Alan

    1999-01-01

    A new model ecosystem consisting of many interacting species is introduced. The species are connected through a random matrix with a given connectivity. It is shown that the system is organized close to a boundary of marginal stability in such a way that fluctuations follow power law distributions both in species abundance and their lifetimes for some slow-driving (immigration) regime. The connectivity and the number of species are linked through a scaling relation which is the one observed in real ecosystems. These results suggest that the basic macroscopic features of real, species-rich ecologies might be linked with a critical state. A natural link between lognormal and power law distributions of species abundances is suggested.
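The link between connectivity and stability that the abstract alludes to goes back to May's random-matrix argument: a community matrix with S species, connectivity C and interaction strength sigma is expected to be stable roughly while sigma·sqrt(S·C) stays below the self-regulation strength. A small numerical check (parameter values are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def community_matrix(S, C, sigma, d):
    """Random interaction matrix: each off-diagonal entry is nonzero with
    probability C (the connectivity) and drawn from N(0, sigma^2);
    the diagonal carries self-regulation -d."""
    mask = rng.random((S, S)) < C
    M = np.where(mask, rng.normal(0.0, sigma, (S, S)), 0.0)
    np.fill_diagonal(M, -d)
    return M

def is_stable(M):
    """Stable iff all eigenvalues have negative real part."""
    return np.linalg.eigvals(M).real.max() < 0.0

S, C = 100, 0.1
weak = is_stable(community_matrix(S, C, sigma=0.05, d=1.0))   # sigma*sqrt(SC) ~ 0.16
strong = is_stable(community_matrix(S, C, sigma=1.0, d=1.0))  # sigma*sqrt(SC) ~ 3.2
```

The eigenvalue cloud has radius roughly sigma·sqrt(S·C) centred at -d, so the weakly coupled system sits safely in the stable half-plane while the strongly coupled one crosses into instability.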

  8. Collection of intraoral findings in corpse with small-scale color dental scanner system.

    Science.gov (United States)

    Yoshida, Masaki; Hanaoka, Yoichi; Tsuzuki, Tamiyuki; Ueno, Asao; Takagi, Tetsuya; Iwahara, Kaori; Yasuda, Mamoru; Sato, Yoshinobu; Minaguchi, Kiyoshi

    2009-03-10

    Together with X-ray radiography and the description in the dental chart (odontogram), the collection of intraoral images is extremely important in dental identification. Recently, thanks to advances in digital devices for taking images in the oral cavity, problems with developing images and images being lost due to scanning errors have been minimized. However, in corpses where postmortem rigidity has firmly set in and burned bodies where the jaw has to be forced open, it is difficult to open the jaw enough to allow images to be taken. In addition, collection of intraoral images requires skill. Our goal was to determine the efficacy of a newly developed, small-scale color dental scanner in collecting intraoral images. The results showed that it was comparatively easy to obtain an entire image of the oral cavity with even a minimum degree of jaw opening. This should enable even a non-expert to perform oral image collection.

  9. Collective Influence of Multiple Spreaders Evaluated by Tracing Real Information Flow in Large-Scale Social Networks

    CERN Document Server

    Teng, Xian; Morone, Flaviano; Makse, Hernán A

    2016-01-01

    Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a highly scalable method called "Collective Influence (CI)" has been put forward through collective influence maximization. In contrast to previous heuristic methods that evaluate nodes' significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social platforms including American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct "virtual" information spreading processes. Our consistent results demonstrate that the set of spreaders selected by CI indeed can induce larger scale of i...

  10. Large-scale general collection of wild-plant DNA in Mustang, Nepal.

    Science.gov (United States)

    Tsukaya, Hirokazu; Iokawa, Yu; Kondo, Makiko; Ohba, Hideaki

    2005-02-01

    The deposit of DNA samples of wild plants that correspond to voucher specimens is highly informative and greatly enhances the value of the herbarium specimens. The Society of Himalayan Botany (SHB), Tokyo, has assembled general collections of flowering plants of the Sino-Himalayan region for more than 40 years. In a trial of the collection of these types of bioresources for use in basic research, we adopted FTA cards, which have recently been used for large-scale collection of DNA of humans, microorganisms and viruses, for the general collection of DNA samples of wild plants during a botanical expedition in Mustang, Nepal, in 2003. Three hundred and fifty-five plant specimens from Mustang, Nepal, were collected along with the corresponding DNA samples. Examination of the quality of the DNA samples by PCR demonstrated the utility of the collection system. The identification of all of the specimens collected, as well as data from the specimens, will be presented on the Flora of Nepal Database website (http://ti.um.u-tokyo.ac.jp/default.htm), which is open to the public. The DNA resources will be identified on the website and distributed openly by the SHB to researchers worldwide for basic research.

  11. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2008-05-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which leads to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution.
Average per-capita renewable groundwater resources of countries vary between 8 m3/(capita yr) for Egypt to more than 1 million m3
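The partitioning and basin-wise tuning described above can be sketched in a toy calculation. All numbers, the single runoff coefficient, and the bisection tuning below are illustrative assumptions, not the actual WGHM algorithm:

```python
# Toy WGHM-style water balance (illustrative): precipitation P is split into
# evapotranspiration and total runoff via a basin-specific coefficient gamma,
# and a fraction f_gw of total runoff becomes diffuse groundwater recharge.

def water_balance(P, gamma, f_gw):
    runoff = gamma * P
    et = P - runoff                      # closure of the annual balance
    recharge = f_gw * runoff
    return et, runoff, recharge

def tune_gamma(P, q_obs, f_gw, tol=1e-9):
    """Adjust gamma by bisection until simulated runoff matches observed
    discharge, mimicking the per-basin calibration described above."""
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if water_balance(P, mid, f_gw)[1] < q_obs:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

P, q_obs, f_gw = 800.0, 240.0, 0.32      # mm/yr; made-up basin numbers
gamma = tune_gamma(P, q_obs, f_gw)
et, runoff, recharge = water_balance(P, gamma, f_gw)
```

The tuned coefficient reproduces the observed discharge exactly, and recharge then follows as the groundwater fraction of that runoff.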

  12. Global-scale modeling of groundwater recharge

    Science.gov (United States)

    Döll, P.; Fiedler, K.

    2007-11-01

    Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which leads to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961-1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution.
Average per-capita renewable groundwater resources of countries vary between 8 m3/(capita yr) for Egypt to more than 1 million m3

  13. Global-scale modeling of groundwater recharge

    Directory of Open Access Journals (Sweden)

    P. Döll

    2008-05-01

    Full Text Available Long-term average groundwater recharge, which is equivalent to renewable groundwater resources, is the major limiting factor for the sustainable use of groundwater. Compared to surface water resources, groundwater resources are more protected from pollution, and their use is less restricted by seasonal and inter-annual flow variations. To support water management in a globalized world, it is necessary to estimate groundwater recharge at the global scale. Here, we present a best estimate of global-scale long-term average diffuse groundwater recharge (i.e. renewable groundwater resources) that has been calculated by the most recent version of the WaterGAP Global Hydrology Model WGHM (spatial resolution of 0.5° by 0.5°, daily time steps). The estimate was obtained using two state-of-the-art global data sets of gridded observed precipitation that we corrected for measurement errors, which also allowed us to quantify the uncertainty due to these equally uncertain data sets. The standard WGHM groundwater recharge algorithm was modified for semi-arid and arid regions, based on independent estimates of diffuse groundwater recharge, which leads to an unbiased estimation of groundwater recharge in these regions. WGHM was tuned against observed long-term average river discharge at 1235 gauging stations by adjusting, individually for each basin, the partitioning of precipitation into evapotranspiration and total runoff. We estimate that global groundwater recharge was 12 666 km3/yr for the climate normal 1961–1990, i.e. 32% of total renewable water resources. In semi-arid and arid regions, mountainous regions, permafrost regions and in the Asian Monsoon region, groundwater recharge accounts for a lower fraction of total runoff, which makes these regions particularly vulnerable to seasonal and inter-annual precipitation variability and water pollution. Average per-capita renewable groundwater resources of countries vary between 8 m3

  14. Patient participation in collective healthcare decision making: the Dutch model

    NARCIS (Netherlands)

    van de Bovenkamp, H.; Trappenburg, M.J.; Grit, K.

    2010-01-01

    Objective To study whether the Dutch participation model is a good model of participation. Background Patient participation is on the agenda, both on the individual and the collective level. In this study, we focus on the latter by looking at the Dutch model in which patient organizations are

  15. Patient participation in collective healthcare decision making: the Dutch model

    NARCIS (Netherlands)

    van de Bovenkamp, H.M.; Trappenburg, M.J.; Grit, K.J.

    2010-01-01

    Objective  To study whether the Dutch participation model is a good model of participation. Background  Patient participation is on the agenda, both on the individual and the collective level. In this study, we focus on the latter by looking at the Dutch model in which patient organizations are

  16. Systems metabolic engineering: Genome-scale models and beyond

    Science.gov (United States)

    Blazeck, John; Alper, Hal

    2010-01-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches – based on the data collected with high throughput technologies – to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems. PMID:20151446
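The core computation behind a genome-scale metabolic reconstruction is flux balance analysis: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds, then re-solve with a reaction forced to zero to predict the effect of a knockout. A toy four-reaction sketch (the network and bounds are invented; real reconstructions have thousands of reactions), using `scipy.optimize.linprog`:

```python
import numpy as np
from scipy.optimize import linprog

# Toy network:  v1: uptake -> A,  v2: A -> B,  v3: A -> C,  v4: B + C -> biomass
S = np.array([[1, -1, -1,  0],    # metabolite A balance
              [0,  1,  0, -1],    # metabolite B balance
              [0,  0,  1, -1]])   # metabolite C balance

def max_biomass(knockout=None):
    bounds = [(0, 10), (0, None), (0, None), (0, None)]  # uptake capped at 10
    if knockout is not None:
        bounds[knockout] = (0, 0)          # simulate a gene/reaction deletion
    # linprog minimizes, so maximize v4 by minimizing -v4
    res = linprog(c=[0, 0, 0, -1], A_eq=S, b_eq=np.zeros(3), bounds=bounds)
    return res.x[3]

wild_type = max_biomass()          # uptake splits evenly into B and C
mutant = max_biomass(knockout=1)   # deleting A -> B starves the biomass reaction
```

Steady state forces v2 = v3 = v4 and v1 = 2·v4, so the uptake cap of 10 limits biomass to 5, and knocking out v2 drives it to zero.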

  17. Systems metabolic engineering: genome-scale models and beyond.

    Science.gov (United States)

    Blazeck, John; Alper, Hal

    2010-07-01

    The advent of high throughput genome-scale bioinformatics has led to an exponential increase in available cellular system data. Systems metabolic engineering attempts to use data-driven approaches--based on the data collected with high throughput technologies--to identify gene targets and optimize phenotypical properties on a systems level. Current systems metabolic engineering tools are limited for predicting and defining complex phenotypes such as chemical tolerances and other global, multigenic traits. The most pragmatic systems-based tool for metabolic engineering to arise is the in silico genome-scale metabolic reconstruction. This tool has seen wide adoption for modeling cell growth and predicting beneficial gene knockouts, and we examine here how this approach can be expanded for novel organisms. This review will highlight advances of the systems metabolic engineering approach with a focus on de novo development and use of genome-scale metabolic reconstructions for metabolic engineering applications. We will then discuss the challenges and prospects for this emerging field to enable model-based metabolic engineering. Specifically, we argue that current state-of-the-art systems metabolic engineering techniques represent a viable first step for improving product yield that still must be followed by combinatorial techniques or random strain mutagenesis to achieve optimal cellular systems.

  18. Measurement and Modelling of Scaling Minerals

    DEFF Research Database (Denmark)

    Villafafila Garcia, Ada

    2005-01-01

    Solid-liquid equilibrium of sulphate scaling minerals (SrSO4, BaSO4, CaSO4 and CaSO4•2H2O) at temperatures up to 300ºC and pressures up to 1000 bar is described in chapter 4. Results for the binary systems (M2+, SO42-)-H2O; the ternary systems (Na+, M2+, SO42-)-H2O, and (Na+, M2+, Cl-)-H2O; and the quaternary systems (Na+, M2+)(Cl...... to 1000 bar. The solubility of CO2 in pure water, and the solubility of CO2 in solutions of different salts (NaCl and Na2SO4) have also been correlated. Results for the binary systems MCO3-H2O, and CO2-H2O; the ternary systems MCO3-CO2-H2O, CO2-NaCl-H2O, and CO2-Na2SO4-H2O; and the quaternary system CO2....... Chapter 2 is focused on the thermodynamics of the systems studied and on the calculation of vapour-liquid, solid-liquid, and speciation equilibria. The effects of both temperature and pressure on the solubility are addressed, and an explanation of the model calculations is also given. Chapter 3 presents...

  19. Multi-scale models for cell adhesion

    Science.gov (United States)

    Wu, Yinghao; Chen, Jiawen; Xie, Zhong-Ru

    2014-03-01

    The interactions of membrane receptors during cell adhesion play pivotal roles in tissue morphogenesis during development. Our lab focuses on developing multi-scale models to decompose the mechanical and chemical complexity in cell adhesion. Recent experimental evidence shows that clustering is a generic process for cell adhesive receptors. However, the physical basis of such receptor clustering is not understood. We introduced the effect of molecular flexibility to evaluate the dynamics of receptors. By developing a new theory to quantify the changes of binding free energy in different cellular environments, we revealed that restriction of molecular flexibility upon binding of membrane receptors from apposing cell surfaces (trans) causes a large entropy loss, which dramatically increases their lateral interactions (cis). This provides a new molecular mechanism to initialize receptor clustering at the cell-cell interface. By using subcellular simulations, we further found that clustering is a cooperative process requiring both trans and cis interactions. The detailed binding constants during these processes are calculated and compared with experimental data from our collaborator's lab.

  20. Modeling cancer metabolism on a genome scale

    Science.gov (United States)

    Yizhak, Keren; Chaneton, Barbara; Gottlieb, Eyal; Ruppin, Eytan

    2015-01-01

    Cancer cells have fundamentally altered cellular metabolism that is associated with their tumorigenicity and malignancy. In addition to the widely studied Warburg effect, several new key metabolic alterations in cancer have been established over the last decade, leading to the recognition that altered tumor metabolism is one of the hallmarks of cancer. Deciphering the full scope and functional implications of the dysregulated metabolism in cancer requires both the advancement of a variety of omics measurements and the advancement of computational approaches for the analysis and contextualization of the accumulated data. Encouragingly, while the metabolic network is highly interconnected and complex, it is at the same time probably the best characterized cellular network. This review then discusses the challenges that genome-scale modeling of cancer metabolism has been facing. We survey several recent studies demonstrating the first strides that have been made, testifying to the value of this approach in portraying a network-level view of the cancer metabolism and in identifying novel drug targets and biomarkers. Finally, we outline a few new steps that may further advance this field. PMID:26130389

  1. HD Hydrological modelling at catchment scale using rainfall radar observations

    Science.gov (United States)

    Ciampalini, Rossano; Follain, Stéphane; Raclot, Damien; Crabit, Armand; Pastor, Amandine; Augas, Julien; Moussa, Roger; Colin, François; Le Bissonnais, Yves

    2017-04-01

    Hydrological simulations at catchment scale depend on the quality and availability of both soil and rainfall data. Soil data are relatively easy to collect, although their quality depends on the resources devoted to the task; rainfall observations require more effort because of their spatiotemporal variability. Rainfall is normally recorded with rain gauges located in the catchment, which provide detailed temporal data, but their representativeness is limited to the point where the data are collected. Combining different gauges in space can provide a better representation of a rainfall event, but spatialization is often the main obstacle to obtaining data close to reality. For several years, radar observations have filled this gap by providing continuous data registration that, when properly calibrated, offers adequate, continuous coverage in space and time for medium-to-large catchments. Here, we use radar records for the south of France on the La Peyne catchment, following the protocol adopted by the national meteorological agency, with a resolution of 1 km in space and 5 min in time. We present the development of a model able to perform continuous hydrological and soil erosion simulations from rainfall radar observations. The model is semi-theoretical: it simulates water fluxes (infiltration-excess overland flow, saturation overland flow, infiltration and channel routing) with a kinematic wave based on the St. Venant equation, a simplified "bucket" conceptual model for ground water, and an empirical representation of sediment load as adopted in models such as STREAM-LANDSOIL (Cerdan et al., 2002; Ciampalini et al., 2012). The advantage of this approach is that it provides a dynamic representation of rainfall-runoff events more easily than using rainfall spatialized from meteorological stations, and offers a new look at the spatial component of the events.
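The kinematic-wave overland-flow component can be sketched in one dimension: continuity dh/dt = r - dq/dx with a Manning-type flux q = h^(5/3)·sqrt(S0)/n and an explicit upwind step. All grid and parameter values below are invented, and the real model adds infiltration, saturation dynamics and sediment load:

```python
import numpy as np

def simulate(nx=50, dx=2.0, dt=0.05, t_end=60.0, rain=1e-5, S0=0.05, n=0.05):
    """1-D kinematic-wave overland flow on a uniform slope (toy sketch).
    rain is in m/s, S0 is the bed slope, n is Manning's roughness."""
    h = np.zeros(nx)                        # water depth (m)
    outflow = 0.0
    steps = int(t_end / dt)
    for _ in range(steps):
        q = h ** (5.0 / 3.0) * np.sqrt(S0) / n       # unit-width discharge
        # upwind gradient; zero inflow at the upstream boundary
        dqdx = np.diff(np.concatenate(([0.0], q))) / dx
        h = h + dt * (rain - dqdx)
        outflow += q[-1] * dt               # water leaving the last cell
    rained = rain * steps * dt * nx * dx    # total input per unit width (m^2)
    stored = h.sum() * dx
    return h, outflow, rained, stored

h, outflow, rained, stored = simulate()
```

With this discretization the scheme is exactly mass-conservative: everything that rained is either still stored on the hillslope or has left through the outlet.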

  2. Downscaling modelling system for multi-scale air quality forecasting

    Science.gov (United States)

    Nuterman, R.; Baklanov, A.; Mahura, A.; Amstrup, B.; Weismann, J.

    2010-09-01

    Urban modelling for real meteorological situations, in general, considers only a small part of the urban area in a micro-meteorological model, and urban heterogeneities outside the modelling domain affect micro-scale processes. Therefore, it is important to build a chain of models of different scales, with nesting of higher-resolution models into larger-scale, lower-resolution models. Usually, the up-scaled city- or meso-scale models consider parameterisations of urban effects or statistical descriptions of the urban morphology, whereas the micro-scale (street canyon) models are obstacle-resolved and consider a detailed geometry of the buildings and the urban canopy. The developed system consists of meso-, urban- and street-scale models. The first is the Numerical Weather Prediction model (HIgh Resolution Limited Area Model) combined with an Atmospheric Chemistry Transport model (the Comprehensive Air quality Model with extensions). Several levels of urban parameterisation are considered, chosen depending on the selected scales and resolutions. For the regional scale, the urban parameterisation is based on the roughness and flux corrections approach; for the urban scale, on a building effects parameterisation. Modern methods of computational fluid dynamics allow solving environmental problems connected with atmospheric transport of pollutants within the urban canopy in the presence of penetrable (vegetation) and impenetrable (buildings) obstacles. For local- and micro-scale nesting, the Micro-scale Model for Urban Environment is applied. This is a comprehensive obstacle-resolved urban wind-flow and dispersion model based on the Reynolds-averaged Navier-Stokes approach and several turbulence closures, i.e. the k-ε linear eddy-viscosity model, the k-ε non-linear eddy-viscosity model and the Reynolds stress model. Boundary and initial conditions for the micro-scale model are taken from the up-scaled models with corresponding interpolation conserving the mass. For the boundaries a
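The mass-conserving interpolation used to pass fields between scales can be illustrated in its simplest form, piecewise-constant refinement: each coarse cell's value is replicated over the finer cells it contains, so the integral over every coarse cell is unchanged (a generic sketch, not this model system's actual scheme):

```python
import numpy as np

def conservative_refine(coarse, r):
    """Refine a 2-D field onto an r-times finer grid by replicating each
    coarse cell over an r x r block of fine cells.  For an intensive field
    (e.g. a concentration) this preserves the mean, hence the mass once
    multiplied by cell area."""
    return np.kron(coarse, np.ones((r, r)))

coarse = np.array([[4.0, 8.0],      # e.g. pollutant concentration per cell
                   [2.0, 6.0]])
r = 4
fine = conservative_refine(coarse, r)
```

Because each fine cell covers 1/r² of a coarse cell's area, the fine-grid sum equals r² times the coarse sum, and the area-weighted mass is identical on both grids; higher-order conservative remapping schemes refine this idea while keeping the same invariant.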

  3. Upscaling a catchment-scale ecohydrology model for regional-scale earth system modeling

    Science.gov (United States)

    Adam, J. C.; Tague, C.; Liu, M.; Garcia, E.; Choate, J.; Mullis, T.; Hull, R.; Vaughan, J. K.; Kalyanaraman, A.; Nguyen, T.

    2014-12-01

    With a focus on the U.S. Pacific Northwest (PNW), BioEarth is an Earth System Model (EaSM) currently in development that explores the interactions between coupled C:N:H2O dynamics and resource management actions at the regional scale. Capturing coupled biogeochemical processes within EaSMs like BioEarth is important for exploring the response of the land surface to changes in climate and resource management actions; information that is important for shaping decisions that promote sustainable use of our natural resources. However, many EaSM frameworks do not adequately represent landscape-scale processes because coarse spatial resolutions (>10 km) are necessitated by computational limitations. Spatial heterogeneity in a landscape arises due to spatial differences in underlying soil and vegetation properties that control moisture, energy and nutrient fluxes; as well as differences that arise due to spatially-organized connections that may drive an ecohydrologic response by the land surface. While many land surface models used in EaSM frameworks capture the first type of heterogeneity, few account for the influence of lateral connectivity on land surface processes. This type of connectivity can be important when considering soil moisture and nutrient redistribution. The RHESSys model is utilized by BioEarth to enable a "bottom-up" approach that preserves fine spatial-scale sensitivities and lateral connectivity that may be important for coupled C:N:H2O dynamics over larger scales. RHESSys is a distributed eco-hydrologic model that was originally developed to run at relatively fine but computationally intensive spatial resolutions over small catchments. The objective of this presentation is to describe two developments to enable implementation of RHESSys over the PNW. 1) RHESSys is being adapted for BioEarth to allow for moderately coarser resolutions and the flexibility to capture both types of heterogeneity at biome-specific spatial scales.
2) A Kepler workflow is utilized to enable RHESSys implementation over

  4. Evaluation of a distributed catchment scale water balance model

    Science.gov (United States)

    Troch, Peter A.; Mancini, Marco; Paniconi, Claudio; Wood, Eric F.

    1993-01-01

    The validity of some of the simplifying assumptions in a conceptual water balance model is investigated by comparing simulation results from the conceptual model with simulation results from a three-dimensional physically based numerical model and with field observations. We examine, in particular, assumptions and simplifications related to water table dynamics, vertical soil moisture and pressure head distributions, and subsurface flow contributions to stream discharge. The conceptual model relies on a topographic index to predict saturation excess runoff and on Philip's infiltration equation to predict infiltration excess runoff. The numerical model solves the three-dimensional Richards equation describing flow in variably saturated porous media, and handles seepage face boundaries, infiltration excess and saturation excess runoff production, and soil driven and atmosphere driven surface fluxes. The study catchments (a 7.2 sq km catchment and a 0.64 sq km subcatchment) are located in the North Appalachian ridge and valley region of eastern Pennsylvania. Hydrologic data collected during the MACHYDRO 90 field experiment are used to calibrate the models and to evaluate simulation results. It is found that water table dynamics as predicted by the conceptual model are close to the observations in a shallow water well and therefore, that a linear relationship between a topographic index and the local water table depth is found to be a reasonable assumption for catchment scale modeling. However, the hydraulic equilibrium assumption is not valid for the upper 100 cm layer of the unsaturated zone and a conceptual model that incorporates a root zone is suggested. Furthermore, theoretical subsurface flow characteristics from the conceptual model are found to be different from field observations, numerical simulation results, and theoretical baseflow recession characteristics based on Boussinesq's groundwater equation.
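The topographic index at the heart of the conceptual model, lambda = ln(a / tan(beta)), and the assumed linear relation between the index and local water table depth can be sketched as follows (the grid values and the scaling parameter are illustrative, not from the MACHYDRO 90 data):

```python
import numpy as np

# Four cells along a hypothetical hillslope transect
a = np.array([1.0, 4.0, 10.0, 40.0])           # upslope area per unit contour length (m)
tan_beta = np.array([0.20, 0.10, 0.08, 0.05])  # local slope

ti = np.log(a / tan_beta)       # topographic wetness index ln(a / tan(beta))

# TOPMODEL-style linear relation: high-index cells have a shallower water table
m = 0.03                        # scaling parameter (m), assumed
mean_depth = 1.2                # catchment-average water table depth (m), assumed
depth = mean_depth - m * (ti - ti.mean())

saturated = depth <= 0.0        # cells that would produce saturation excess runoff
```

Cells accumulating more upslope area on gentler slopes get a higher index and hence a shallower predicted water table, which is exactly the linear index-depth relationship the study tests against the well observations.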

  5. Native American Resources: A Model for Collection Development

    Science.gov (United States)

    Taylor, Rhonda Harris; Patterson, Lotsee

    2004-01-01

    This construct for collection development as it relates to Native American resources utilizes Thomas Mann's "Library Research Methods" (1993) concepts of the Traditional Model, the Actual-Practice Model, and the Principle of Least Effort to organize recommendations for both strategies and resources. The three-pronged hierarchical approach to…

  6. Cost Factors in Scaling in SfM Collections and Processing Solutions

    Science.gov (United States)

    Cherry, J. E.

    2015-12-01

    In this talk I will discuss the economics of scaling Structure from Motion (SfM)-style collections from 1 km2 and below to 100's and 1000's of square kilometers. Considerations include the costs of the technical equipment: comparisons of small, medium, and large-format camera systems, as well as various GPS-INS systems and their impact on processing accuracy for various Ground Sampling Distances. Tradeoffs between camera formats and flight time are central. Weather conditions and planning high altitude versus low altitude flights are another economic factor, particularly in areas of persistently bad weather and in areas where ground logistics (i.e. hotel rooms and pilot incidentals) are expensive. Unique costs associated with UAS collections and experimental payloads will be discussed. Finally, the costs of equipment and labor differs in SfM processing than in conventional orthomosaic and LiDAR processing. There are opportunities for 'economies of scale' in SfM collections under certain circumstances but whether the accuracy specifications are firm/fixed or 'best effort' makes a difference.
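Much of this cost arithmetic reduces to the ground sampling distance, GSD = flying height × pixel pitch / focal length, and the image count implied by the required overlaps. A back-of-envelope sketch (the camera and mission numbers are invented, not from the talk):

```python
def gsd_cm(altitude_m, focal_mm, pixel_pitch_um):
    """Ground sampling distance in cm/pixel: GSD = H * p / f."""
    return altitude_m * (pixel_pitch_um * 1e-6) / (focal_mm * 1e-3) * 100.0

def images_needed(area_km2, gsd_cm_px, sensor_px_w, sensor_px_h,
                  forward_overlap=0.8, side_overlap=0.6):
    """Approximate image count for a block flown with standard
    photogrammetric forward/side overlaps."""
    footprint_w = sensor_px_w * gsd_cm_px / 100.0   # ground footprint (m)
    footprint_h = sensor_px_h * gsd_cm_px / 100.0
    # only the non-overlapping fraction of each frame covers new ground
    new_area = footprint_w * (1 - side_overlap) * footprint_h * (1 - forward_overlap)
    return area_km2 * 1e6 / new_area

g = gsd_cm(altitude_m=1200, focal_mm=50, pixel_pitch_um=5.0)  # medium-format example
n = images_needed(area_km2=100, gsd_cm_px=g,
                  sensor_px_w=8000, sensor_px_h=6000)
```

Halving the altitude halves the GSD but roughly quadruples the image count (and the flight lines), which is the core altitude-versus-resolution trade-off in scaling a collection from 1 km2 to thousands.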

  7. Modelling across bioreactor scales: methods, challenges and limitations

    DEFF Research Database (Denmark)

    Gernaey, Krist

    Scale-up and scale-down of bioreactors are very important in industrial biotechnology, especially with the currently available knowledge on the occurrence of gradients in industrial-scale bioreactors. Moreover, it becomes increasingly appealing to model such industrial scale systems, considering...... that it is challenging and expensive to acquire experimental data of good quality that can be used for characterizing gradients occurring inside a large industrial scale bioreactor. But which model building methods are available? And how can one ensure that the parameters in such a model are properly estimated? And what...... are the limitations of different types of models? This paper will provide examples of models that have been published in the literature for use across bioreactor scales, including computational fluid dynamics (CFD) and population balance models. Furthermore, the importance of good modeling practice...

  8. Static Aeroelastic Scaling and Analysis of a Sub-Scale Flexible Wing Wind Tunnel Model

    Science.gov (United States)

    Ting, Eric; Lebofsky, Sonia; Nguyen, Nhan; Trinh, Khanh

    2014-01-01

    This paper presents an approach to the development of a scaled wind tunnel model for static aeroelastic similarity with a full-scale wing model. The full-scale aircraft model is based on the NASA Generic Transport Model (GTM) with flexible wing structures referred to as the Elastically Shaped Aircraft Concept (ESAC). The baseline stiffness of the ESAC wing represents a conventionally stiff wing model. Static aeroelastic scaling is conducted on the stiff wing configuration to develop the wind tunnel model, but additional tailoring is also conducted such that the wind tunnel model achieves a 10% wing tip deflection at the wind tunnel test condition. An aeroelastic scaling procedure and analysis is conducted, and a sub-scale flexible wind tunnel model based on the full-scale's undeformed jig-shape is developed. Optimization of the flexible wind tunnel model's undeflected twist along the span, or pre-twist or wash-out, is then conducted for the design test condition. The resulting wind tunnel model is an aeroelastic model designed for the wind tunnel test condition.
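    The static aeroelastic similarity underlying such scaling can be illustrated with the standard requirement that wing stiffness scale with the dynamic-pressure ratio times the length ratio to the fourth power (a sketch of the textbook relation only; the numbers are hypothetical, and the paper's actual procedure adds further stiffness tailoring and pre-twist optimization):

```python
def scaled_stiffness(stiffness_full, length_ratio, q_ratio):
    """Static aeroelastic similarity: the nondimensional deflection w/L is
    preserved when bending (EI) or torsional (GJ) stiffness scales as
    (dynamic-pressure ratio) * (length ratio)**4."""
    return stiffness_full * q_ratio * length_ratio ** 4

# Hypothetical numbers: a 10%-scale model tested at 40% of full-scale
# dynamic pressure needs its stiffness reduced by a factor of 25,000.
EI_model = scaled_stiffness(2.0e8, 0.10, 0.40)  # N*m^2, illustrative
```
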

  9. A New Method of Building Scale-Model Houses

    Science.gov (United States)

    Richard N. Malcolm

    1978-01-01

    Scale-model houses are used to display new architectural and construction designs. Some scale-model houses will not withstand the abuse of shipping and handling. This report describes how to build a solid-core model house which is rigid, lightweight, and sturdy.

  10. Mathematical models in marketing a collection of abstracts

    CERN Document Server

    Funke, Ursula H

    1976-01-01

    Mathematical models can be classified in a number of ways, e.g., static and dynamic; deterministic and stochastic; linear and nonlinear; individual and aggregate; descriptive, predictive, and normative; according to the mathematical technique applied or according to the problem area in which they are used. In marketing, the level of sophistication of the mathematical models varies considerably, so that a number of models will be meaningful to a marketing specialist without an extensive mathematical background. To make it easier for the nontechnical user we have chosen to classify the models included in this collection according to the major marketing problem areas in which they are applied. Since the emphasis lies on mathematical models, we shall not as a rule present statistical models, flow chart models, computer models, or the empirical testing aspects of these theories. We have also excluded competitive bidding, inventory and transportation models since these areas do not form the core of the market...

  11. U(6)-Phonon model of nuclear collective motion

    Science.gov (United States)

    Ganev, H. G.

    2015-05-01

    The U(6)-phonon model of nuclear collective motion with the semi-direct product structure [HW(21)]U(6) is obtained as a hydrodynamic (macroscopic) limit of the fully microscopic proton-neutron symplectic model (PNSM) with Sp(12, R) dynamical group. The phonon structure of the [HW(21)]U(6) model enables it to simultaneously include the giant monopole and quadrupole, as well as dipole resonances and their coupling to the low-lying collective states. The U(6) intrinsic structure of the [HW(21)]U(6) model, on the other hand, provides a framework for the simultaneous shell-model interpretation of the ground state band and the other excited low-lying collective bands. It then follows that the states of the whole nuclear Hilbert space can be put into one-to-one correspondence with those of a 21-dimensional oscillator with an intrinsic (base) U(6) structure. The latter can be determined in such a way that it is compatible with the proton-neutron structure of the nucleus. The macroscopic limit of the Sp(12, R) algebra, therefore, provides a rigorous mechanism for implementing the unified model ideas of coupling the valence particles to the core collective degrees of freedom within a fully microscopic framework without introducing redundant variables or violating the Pauli principle.

  12. Universal model of individual and population mobility on diverse spatial scales.

    Science.gov (United States)

    Yan, Xiao-Yong; Wang, Wen-Xu; Gao, Zi-You; Lai, Ying-Cheng

    2017-11-21

    Studies of human mobility in the past decade revealed a number of general scaling laws. However, reproducing the scaling behaviors quantitatively at both the individual and population levels simultaneously remains an outstanding problem. Moreover, recent evidence suggests that spatial scales have a significant effect on human mobility, raising the need for formulating a universal model suited for human mobility at different levels and spatial scales. Here we develop a general model by combining memory effect and population-induced competition to enable accurate prediction of human mobility based on population distribution only. A variety of individual and collective mobility patterns such as scaling behaviors and trajectory motifs are accurately predicted for different countries and cities of diverse spatial scales. Our model establishes a universal underlying mechanism capable of explaining a variety of human mobility behaviors, and has significant applications for understanding many dynamical processes associated with human mobility.
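    As a loose illustration of combining a memory effect with population-induced attractiveness, one can write a toy destination-choice rule (the functional form, weights, and reinforcement scheme below are invented for illustration and are not the paper's actual model):

```python
import random

def choose_destination(visits, population, alpha=0.5, rng=random):
    """Toy mobility step: a location's attractiveness is its population
    plus a memory bonus proportional to how often it was visited before."""
    mean_pop = sum(population) / len(population)
    weights = [population[j] + alpha * mean_pop * visits.get(j, 0)
               for j in range(len(population))]
    r = rng.random() * sum(weights)
    acc = 0.0
    for j, w in enumerate(weights):
        acc += w
        if r < acc:
            return j
    return len(population) - 1

rng = random.Random(42)
population = [100, 50, 10]   # invented populations for three locations
visits = {}
for _ in range(1000):
    j = choose_destination(visits, population, rng=rng)
    visits[j] = visits.get(j, 0) + 1
# The memory term concentrates trips beyond what population alone predicts.
```
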

  13. Gauge coupling unification in a classically scale invariant model

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki; Ishida, Hiroyuki [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Takahashi, Ryo [Graduate School of Science, Tohoku University,Sendai, 980-8578 (Japan); Yamaguchi, Yuya [Graduate School of Science and Engineering, Shimane University,Matsue 690-8504 (Japan); Department of Physics, Faculty of Science, Hokkaido University,Sapporo 060-0810 (Japan)

    2016-02-08

    There is a large body of work on classically scale invariant models, which are motivated by solving the gauge hierarchy problem. In this context, the Higgs mass vanishes at the UV scale due to the classical scale invariance, and is generated via the Coleman-Weinberg mechanism. Since the mass generation should occur not far from the electroweak scale, we extend the standard model only around the TeV scale. We construct a model which can achieve the gauge coupling unification at the UV scale. At the same time, the model can realize the vacuum stability, smallness of active neutrino masses, baryon asymmetry of the universe, and dark matter relic abundance. The model predicts the existence of vector-like fermions charged under SU(3){sub C} with masses lower than 1 TeV, and an SM singlet Majorana dark matter with mass lower than 2.6 TeV.

  14. Global fits of GUT-scale SUSY models with GAMBIT

    Science.gov (United States)

    Athron, Peter; Balázs, Csaba; Bringmann, Torsten; Buckley, Andy; Chrząszcz, Marcin; Conrad, Jan; Cornell, Jonathan M.; Dal, Lars A.; Edsjö, Joakim; Farmer, Ben; Jackson, Paul; Krislock, Abram; Kvellestad, Anders; Mahmoudi, Farvah; Martinez, Gregory D.; Putze, Antje; Raklev, Are; Rogan, Christopher; de Austri, Roberto Ruiz; Saavedra, Aldo; Savage, Christopher; Scott, Pat; Serra, Nicola; Weniger, Christoph; White, Martin

    2017-12-01

    We present the most comprehensive global fits to date of three supersymmetric models motivated by grand unification: the constrained minimal supersymmetric standard model (CMSSM), and its Non-Universal Higgs Mass generalisations NUHM1 and NUHM2. We include likelihoods from a number of direct and indirect dark matter searches, a large collection of electroweak precision and flavour observables, direct searches for supersymmetry at LEP and Runs I and II of the LHC, and constraints from Higgs observables. Our analysis improves on existing results not only in terms of the number of included observables, but also in the level of detail with which we treat them, our sampling techniques for scanning the parameter space, and our treatment of nuisance parameters. We show that stau co-annihilation is now ruled out in the CMSSM at more than 95% confidence. Stop co-annihilation turns out to be one of the most promising mechanisms for achieving an appropriate relic density of dark matter in all three models, whilst avoiding all other constraints. We find high-likelihood regions of parameter space featuring light stops and charginos, making them potentially detectable in the near future at the LHC. We also show that tonne-scale direct detection will play a largely complementary role, probing large parts of the remaining viable parameter space, including essentially all models with multi-TeV neutralinos.

  15. Characteristic length scale of input data in distributed models: implications for modeling grid size

    Science.gov (United States)

    Artan, Guleid A.; Neale, C. M. U.; Tarboton, D. G.

    2000-01-01

    The appropriate spatial scale for a distributed energy balance model was investigated by: (a) determining the scale of variability associated with the remotely sensed and GIS-generated model input data; and (b) examining the effects of input data spatial aggregation on model response. The semi-variogram and the characteristic length calculated from the spatial autocorrelation were used to determine the scale of variability of the remotely sensed and GIS-generated model input data. The data were collected from two hillsides at Upper Sheep Creek, a sub-basin of the Reynolds Creek Experimental Watershed, in southwest Idaho. The data were analyzed in terms of the semivariance and the integral of the autocorrelation. The minimum characteristic length associated with the variability of the data used in the analysis was 15 m. Simulated and observed radiometric surface temperature fields at different spatial resolutions were compared. The agreement between simulated and observed fields declined sharply beyond a 10×10 m² modeling grid size. A modeling grid size of about 10×10 m² was deemed the best compromise to achieve: (a) reduction of computation time and the size of the support data; and (b) a reproduction of the observed radiometric surface temperature.
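    The two diagnostics used here, the semivariogram and the autocorrelation-based characteristic (integral) length, can be sketched for a regularly spaced 1-D transect (a minimal illustration; the synthetic transects are assumptions, not the study's data):

```python
import math

def semivariogram(values, max_lag):
    """Empirical semivariance along a regularly spaced transect:
    gamma(h) = 0.5 * mean squared difference of points h steps apart."""
    gamma = {}
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2 for i in range(len(values) - h)]
        gamma[h] = 0.5 * sum(diffs) / len(diffs)
    return gamma

def characteristic_length(values, spacing):
    """Integral scale: spacing times the summed autocorrelation up to its
    first non-positive lag."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    total = 0.0
    for h in range(1, n):
        cov = sum((values[i] - mean) * (values[i + h] - mean)
                  for i in range(n - h)) / (n - h)
        if cov / var <= 0:
            break
        total += cov / var
    return spacing * total

smooth = [math.sin(i / 10.0) for i in range(100)]           # slowly varying field
rough = [1.0 if i % 2 == 0 else -1.0 for i in range(100)]   # cell-to-cell noise
# A smoothly varying field has a long characteristic length; a field that
# flips every cell has essentially none, so aggregation destroys it first.
```
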

  17. Health Literacy Scale and Causal Model of Childhood Overweight.

    Science.gov (United States)

    Intarakamhang, Ungsinun; Intarakamhang, Patrawut

    2017-01-28

    WHO focuses on developing health literacy (HL), referring to cognitive and social skills. Our objectives were to develop a scale for evaluating the HL level of Thai childhood overweight, and to develop a path model of health behavior (HB) for preventing obesity. A cross-sectional study using a mixed method. Overall, 2,000 school students aged 9 to 14 yr were recruited by stratified random sampling from all parts of Thailand in 2014. Data were analyzed by CFA, LISREL. Reliability of the HL and HB scales ranged from 0.62 to 0.82 and factor loadings ranged from 0.33 to 0.80; the subjects had a low level of HL (60.0%) and a fair level of HB (58.4%). In the path model, HB could be influenced by HL through three paths. Path 1 started from health knowledge and understanding, which directly influenced eating behavior (effect size β = 0.13, P < 0.05). Path 2 influenced communicating for added skills, media literacy, and making appropriate health-related decisions (β = 0.07, 0.98, and 0.05, respectively). Path 3 started from accessing the information and services, which influenced communicating for added skills, media literacy, and making appropriate health-related decisions (β = 0.63, 0.93, 0.98, and 0.05). Finally, the basic level of HL, measured from health knowledge and understanding and accessing the information and services, influenced HB through the interactive and critical levels (β = 0.76, 0.97, and 0.55, respectively). The HL scale for Thai childhood overweight should be implemented as a screening tool for developing HL through public health-promotion policy.

  18. The Permanent Collection of 1925: Oslo Modernism in Paper and Models

    Directory of Open Access Journals (Sweden)

    Mari Lending

    2014-03-01

    Full Text Available In 1925, architect Georg Eliassen took the initiative to establish a collection of drawings, photography and scale models in response to an increasing frustration among Norwegian architects at not being able to participate in international architectural exhibitions. The so-called Permanent Collection was founded on a principle of absolute contemporaneity, making de-acquisition as important as acquisition in the management of the collection. Nevertheless, the collection kept increasing. By the mid 1930s it included hundreds of models and innumerable drawings and photos and was seen as the nucleus of an entire museum of Norwegian architecture. This ambition failed, and the material that had been so intensively displayed in Kiel, Budapest, Helsinki, Berlin, Prague, and Paris, before making its last appearance at the World's Fair in New York in 1939, was buried in storage, dispersed, or destroyed. Based on extensive archival research, this article chronicles a forgotten collection, framing it within a modernist culture of collecting and exhibiting architecture. In November and December 2013, Mari Lending and Mari Hvattum salvaged parts of the Permanent Collection in the exhibition "Model as Ruin" at Kunstnernes Hus (House of Artists) in Oslo, the venue that hosted the most important display of the collection in 1931.

  19. Holography for chiral scale-invariant models

    NARCIS (Netherlands)

    Caldeira Costa, R.N.; Taylor, M.

    2011-01-01

    Deformation of any d-dimensional conformal field theory by a constant null source for a vector operator of dimension (d + z -1) is exactly marginal with respect to anisotropic scale invariance, of dynamical exponent z. The holographic duals to such deformations are AdS plane waves, with z=2 being

  1. Collective Influence of Multiple Spreaders Evaluated by Tracing Real Information Flow in Large-Scale Social Networks

    Science.gov (United States)

    Teng, Xian; Pei, Sen; Morone, Flaviano; Makse, Hernán A.

    2016-10-01

    Identifying the most influential spreaders that maximize information flow is a central question in network theory. Recently, a scalable method called “Collective Influence (CI)” has been put forward through collective influence maximization. In contrast to heuristic methods that evaluate nodes’ significance separately, the CI method inspects the collective influence of multiple spreaders. Although CI applies to the influence maximization problem in the percolation model, it is still important to examine its efficacy in realistic information spreading. Here, we examine real-world information flow in various social and scientific platforms including the American Physical Society, Facebook, Twitter and LiveJournal. Since empirical data cannot be directly mapped to ideal multi-source spreading, we leverage the behavioral patterns of users extracted from data to construct “virtual” information spreading processes. Our results demonstrate that the set of spreaders selected by CI can induce a larger scale of information propagation. Moreover, local measures such as the number of connections or citations are not necessarily the deterministic factors of nodes’ importance in realistic information spreading. This result has significance for ranking scientists in scientific networks like the APS, where the commonly used number of citations can be a poor indicator of the collective influence of authors in the community.
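    The CI metric itself is compact: CI_l(i) = (k_i − 1) Σ_{j ∈ ∂Ball(i, l)} (k_j − 1), summing over nodes exactly l steps from node i (Morone & Makse, 2015). A minimal sketch on an adjacency-list graph (the example graph is invented for illustration):

```python
from collections import deque

def collective_influence(adj, node, ell=2):
    """Collective Influence: (k_i - 1) times the sum of (k_j - 1) over
    all nodes j exactly `ell` hops from `node`, found by BFS."""
    dist = {node: 0}
    queue = deque([node])
    while queue and dist[queue[0]] < ell:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    surface = [j for j, d in dist.items() if d == ell]
    return (len(adj[node]) - 1) * sum(len(adj[j]) - 1 for j in surface)

# Invented example: a hub, three spokes, and an outer triangle.
adj = {
    'hub': ['a', 'b', 'c'],
    'a': ['hub', 'a2'], 'b': ['hub', 'b2'], 'c': ['hub', 'c2'],
    'a2': ['a', 'b2', 'c2'], 'b2': ['b', 'a2', 'c2'], 'c2': ['c', 'a2', 'b2'],
}
```
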

  3. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    2012-01-01

    Full Text Available Inference of interaction rules of animals moving in groups usually relies on an analysis of large scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large scale behaviour of the system, but does not capture fine scale rules of interaction, which are primarily mediated by physical contact. Conversely, the Markovian self-propelled particle model captures the fine scale rules of interaction but fails to reproduce global dynamics. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
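    A Markovian self-propelled particle model of the kind compared here can be sketched in Vicsek style, where each particle adopts the mean heading of its current neighbours plus noise (a generic illustration; the parameters are arbitrary and this is not the authors' fitted model):

```python
import math
import random

def vicsek_step(pos, ang, radius=1.0, speed=0.05, noise=0.1, rng=random):
    """One update of a minimal Vicsek-style model: each particle takes the
    mean heading of all neighbours within `radius` (itself included),
    adds uniform angular noise, then moves forward at constant speed."""
    new_ang = []
    for xi, yi in pos:
        sx = sy = 0.0
        for (xj, yj), aj in zip(pos, ang):
            if (xj - xi) ** 2 + (yj - yi) ** 2 <= radius ** 2:
                sx += math.cos(aj)
                sy += math.sin(aj)
        new_ang.append(math.atan2(sy, sx) + noise * (rng.random() - 0.5))
    new_pos = [(x + speed * math.cos(a), y + speed * math.sin(a))
               for (x, y), a in zip(pos, new_ang)]
    return new_pos, new_ang

def polarisation(ang):
    """Order parameter: 1 for perfect alignment, near 0 when disordered."""
    return math.hypot(sum(math.cos(a) for a in ang),
                      sum(math.sin(a) for a in ang)) / len(ang)

rng = random.Random(1)
pos = [(rng.random(), rng.random()) for _ in range(30)]
ang = [rng.uniform(-math.pi, math.pi) for _ in range(30)]
for _ in range(50):
    pos, ang = vicsek_step(pos, ang, rng=rng)
# At this density every particle sees most others, so alignment emerges,
# mirroring the density-driven onset of collective motion described above.
```
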

  5. Informing Species Conservation at Multiple Scales Using Data Collected for Marine Mammal Stock Assessments

    OpenAIRE

    Alana Grech; James Sheppard; Helene Marsh

    2011-01-01

    BACKGROUND: Conservation planning and the design of marine protected areas (MPAs) require spatially explicit information on the distribution of ecological features. Most species of marine mammals range over large areas and across multiple planning regions. The spatial distributions of marine mammals are difficult to predict using habitat modelling at ecological scales because of insufficient understanding of their habitat needs; however, relevant information may be available from surveys con...

  6. The sense and non-sense of plot-scale, catchment-scale, continental-scale and global-scale hydrological modelling

    Science.gov (United States)

    Bronstert, Axel; Heistermann, Maik; Francke, Till

    2017-04-01

    Hydrological models aim at quantifying the hydrological cycle and its constituent processes for particular conditions, sites or periods in time. Such models have been developed for a large range of spatial and temporal scales. One must be aware that the appropriate scale to apply depends on the overall question under study. Therefore, it is not advisable to give a generally applicable guideline on what is "the best" scale for a model. This statement is even more relevant for coupled hydrological, ecological and atmospheric models. Although a general statement about the most appropriate modelling scale is not recommendable, it is worth examining the advantages and shortcomings of micro-, meso- and macro-scale approaches. Such an appraisal is of increasing importance, since (very) large-scale and global approaches and models are increasingly in operation, and the question therefore arises how far, and for what purposes, such methods may yield scientifically sound results. It is important to understand that in most hydrological (and ecological, atmospheric and other) studies the process scale, measurement scale, and modelling scale differ from each other. In some cases, the differences between these scales can be of different orders of magnitude (example: runoff formation, measurement and modelling). These differences are a major source of uncertainty in the description and modelling of hydrological, ecological and atmospheric processes. Let us now summarize our viewpoint of the strengths (+) and weaknesses (-) of hydrological models of different scales: Micro scale (e.g. extent of a plot, field or hillslope): (+) enables process research, based on controlled experiments (e.g. infiltration; root water uptake; chemical matter transport); (+) data on state conditions (e.g. soil parameters, vegetation properties) and boundary fluxes (e.g. rainfall or evapotranspiration) are directly measurable and reproducible; (+) equations based on

  7. PERSEUS-HUB: Interactive and Collective Exploration of Large-Scale Graphs

    Directory of Open Access Journals (Sweden)

    Di Jin

    2017-07-01

    Full Text Available Graphs emerge naturally in many domains, such as social science, neuroscience, transportation engineering, and more. In many cases, such graphs have millions or billions of nodes and edges, and their sizes increase daily at a fast pace. How can researchers from various domains explore large graphs interactively and efficiently to find out what is ‘important’? How can multiple researchers explore a new graph dataset collectively and “help” each other with their findings? In this article, we present Perseus-Hub, a large-scale graph mining tool that computes a set of graph properties in a distributed manner, performs ensemble, multi-view anomaly detection to highlight regions that are worth investigating, and provides users with uncluttered visualization and easy interaction with complex graph statistics. Perseus-Hub uses a Spark cluster to calculate various statistics of large-scale graphs efficiently, and aggregates the results in a summary on the master node to support interactive user exploration. In Perseus-Hub, the visualized distributions of graph statistics provide preliminary analysis to understand a graph. To perform a deeper analysis, users with little prior knowledge can leverage patterns (e.g., spikes in the power-law degree distribution) marked by other users or experts. Moreover, Perseus-Hub guides users to regions of interest by highlighting anomalous nodes and helps users establish a more comprehensive understanding about the graph at hand. We demonstrate our system through the case study on real, large-scale networks.

  8. The Goddard multi-scale modeling system with unified physics

    Directory of Open Access Journals (Sweden)

    W.-K. Tao

    2009-08-01

    Full Text Available Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (CRM), (2) a regional-scale model, the NASA unified Weather Research and Forecasting Model (WRF), and (3) a coupled CRM-GCM (general circulation model), known as the Goddard Multi-scale Modeling Framework or MMF. The same cloud-microphysical processes, long- and short-wave radiative transfer and land-surface processes are applied in all of the models to study explicit cloud-radiation and cloud-surface interactive processes in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator for comparison and validation with NASA high-resolution satellite data.

    This paper reviews the development and presents some applications of the multi-scale modeling system, including results from using the multi-scale modeling system to study the interactions between clouds, precipitation, and aerosols. In addition, use of the multi-satellite simulator to identify the strengths and weaknesses of the model-simulated precipitation processes will be discussed as well as future model developments and applications.

  10. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE), (2) a regional-scale model (the NASA-unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, and land processes, together with the explicit cloud-radiation and cloud-land-surface interactive processes, are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  11. Simple subgrid scale stresses models for homogeneous isotropic turbulence

    Science.gov (United States)

    Aupoix, B.; Cousteix, J.

    Large eddy simulations employing the filtering of Navier-Stokes equations highlight stresses, related to the interaction between large scales below the cut and small scales above it, which have been designated 'subgrid scale stresses'. Their effects include both the energy flux through the cut and a component of viscous diffusion. The eddy viscosity introduced in the subgrid scale models which give the correct energy flux through the cut by comparison with spectral closures is shown to depend only on the small scales. The Smagorinsky (1963) model can only be obtained if the cut lies in the middle of the inertial range. A novel model which takes the small scales into account statistically, and includes the effects of viscosity, is proposed and compared with classical models for the Comte-Bellot and Corrsin (1971) experiment.
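
    The eddy-viscosity closure discussed above is conventionally written as nu_t = (C_s * Delta)^2 * |S|, with |S| = sqrt(2 S_ij S_ij) built from the resolved strain rate. A minimal sketch of that formula (the value C_s = 0.17 and the test velocity gradient are illustrative assumptions, not values from the paper):

```python
import math

def smagorinsky_nu_t(grad_u, delta, c_s=0.17):
    """Smagorinsky eddy viscosity nu_t = (C_s * Delta)^2 * |S|,
    where |S| = sqrt(2 S_ij S_ij) and S is the symmetric part of the
    resolved velocity-gradient tensor grad_u (3x3 nested lists)."""
    s = [[0.5 * (grad_u[i][j] + grad_u[j][i]) for j in range(3)]
         for i in range(3)]
    s_mag = math.sqrt(2.0 * sum(s[i][j] ** 2
                                for i in range(3) for j in range(3)))
    return (c_s * delta) ** 2 * s_mag

# Plane shear du1/dx2 = g: |S| reduces to g, so nu_t = (C_s*Delta)^2 * g,
# which is easy to verify by hand.
g, delta = 10.0, 0.01
grad_u = [[0.0, g, 0.0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]]
print(smagorinsky_nu_t(grad_u, delta))
```

    For a pure shear flow the check is immediate: S_12 = S_21 = g/2, so 2 S_ij S_ij = g^2 and |S| = g.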

  12. Collection, speciation and aerosol modelling for volatile organic compounds

    Science.gov (United States)

    Goodman-Rendall, Kevin Alan Scott

    Volatile organic compounds (VOCs) are collected on the integrated organic gas and particle sampler (IOGAPS) to measure particle loss and collection efficiency. Particle loss increases with increasing flow rate while collection efficiency is a function of alkane volatility. Unresolved complex mixtures (UCMs) are then analyzed and quantified using the novel technique supersonic molecular beam gas chromatography/mass spectrometry (SMB-GC/MS), to develop accurate inputs in modelling the formation of secondary organic aerosol (SOA). Alkanes were segregated by carbon number (NC), number of double bond equivalents (NDBE), and chemical structure. With the most explicit compositional knowledge to date, these mixtures were modelled for their affinity towards formation of SOA. Unsaturated alkanes formed the most and relatively equal amounts of aerosol based on their degree of unsaturation while branched species formed the least. Increasing specificity in chemical structure led to increased computational demands while only general structural motifs were needed to form an accurate picture of aerosol formation.

  13. Fog collection and deposition modelling - EcoCatch Lunz

    Science.gov (United States)

    Koller, M. W.; Ramírez-Santa Cruz, C.; Leder, K.; Bauer, H.; Dorninger, M.; Hofhansl, F.; Wanek, W.; Kasper-Giebl, A.

    2010-07-01

    The area of Lunz am See (N 047.855°, E 015.068°, 650 m a.s.l.) in Lower Austria has been subject to long-term monitoring of meteorological parameters as well as wet deposition. Even though Lunz is known for its good air quality, with about 200 days of precipitation per year reaching an annual average of 1500 mm deposition, immission fluxes reach levels of critical loads. For instance, nitrogen input from wet deposition of nitrate and ammonium is > 14 kg ha-1 a-1, and sulphur input from sulphate is 5 kg ha-1 a-1. In the framework of the EcoCatch project, wet, dry and occult deposition have been investigated in detail in an alluvial forest near the Biological Station (Lunz/See) since September 2008. The overall contribution of dry and occult deposition was expected to be comparably low and only of importance in times of decreased wet deposition. Collection of fog samples was performed with an active fog sampler, regulated by a Vaisala PWD-12 sensor monitoring visibility. Temperature, relative humidity, wind speed and direction were logged by a HOBO weather station. Filter stacks were used for sampling of aerosol particles and gaseous components, and a Wet And Dry Only Sampler (WADOS) was used to sample precipitation. Solute analysis was carried out via ion chromatography. Alkali and earth alkali metals, chloride as well as ammonium, sulphate and nitrate were quantified in rain, aerosol and fog samples on an event basis. In addition, dry deposition measurements included nitrogen oxide and dioxide, sulphur dioxide and ammonia. A site-specific relation of liquid water content (LWC) to visibility was established using the collection rate and the known collection efficiency of the fog sampler. A modified version of the fog deposition resistance model devised by G.M. Lovett was used to quantify occult deposition onto the alluvial forest. The surface area index of local vegetation was measured with a SunScan System and tree height was determined using a Vertex IV
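
    LWC-visibility relations of the kind established above are commonly approximated by a power law between extinction and liquid water content. The sketch below uses the often-cited Kunkel (1984) parameterization (sigma = 144.7 * W^0.88, vis = 3.912 / sigma) purely as an illustration; the coefficients fitted for the Lunz site will differ:

```python
def lwc_from_visibility(vis_km, a=144.7, b=0.88):
    """Invert a Kunkel (1984)-style power law linking fog liquid water
    content W (g m^-3) to visibility (km): vis = 3.912 / (a * W**b).
    The coefficients a, b are illustrative; a site-specific fit,
    as in the study above, is required in practice."""
    sigma = 3.912 / vis_km  # Koschmieder extinction coefficient, km^-1
    return (sigma / a) ** (1.0 / b)

# Denser fog (lower visibility) implies higher liquid water content.
print(lwc_from_visibility(0.1))  # 100 m visibility
print(lwc_from_visibility(0.5))  # 500 m visibility
```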

  14. Emergent collective decision-making: Control, model and behavior

    Science.gov (United States)

    Shen, Tian

    In this dissertation we study emergent collective decision-making in social groups with time-varying interactions and heterogeneously informed individuals. First we analyze a nonlinear dynamical systems model motivated by animal collective motion with heterogeneously informed subpopulations, to examine the role of uninformed individuals. We find through formal analysis that adding uninformed individuals in a group increases the likelihood of a collective decision. Secondly, we propose a model for human shared decision-making with continuous-time feedback and where individuals have little information about the true preferences of other group members. We study model equilibria using bifurcation analysis to understand how the model predicts decisions based on the critical threshold parameters that represent an individual's tradeoff between social and environmental influences. Thirdly, we analyze continuous-time data of pairs of human subjects performing an experimental shared tracking task using our second proposed model in order to understand transient behavior and the decision-making process. We fit the model to data and show that it reproduces a wide range of human behaviors surprisingly well, suggesting that the model may have captured the mechanisms of observed behaviors. Finally, we study human behavior from a game-theoretic perspective by modeling the aforementioned tracking task as a repeated game with incomplete information. We show that the majority of the players are able to converge to playing Nash equilibrium strategies. We then suggest with simulations that the mean field evolution of strategies in the population resemble replicator dynamics, indicating that the individual strategies may be myopic. Decisions form the basis of control and problems involving deciding collectively between alternatives are ubiquitous in nature and in engineering. Understanding how multi-agent systems make decisions among alternatives also provides insight for designing

  15. Simulation of Acoustics for Ares I Scale Model Acoustic Tests

    Science.gov (United States)

    Putnam, Gabriel; Strutzenberg, Louise L.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity acoustic measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. Results from ASMAT simulations with the rocket in both held down and elevated configurations, as well as with and without water suppression have been compared to acoustic data collected from similar live-fire tests. Results of acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured including the plume shock structure, the igniter pulse transient, and the ignition overpressure.

  16. On nano-scale hydrodynamic lubrication models

    Science.gov (United States)

    Buscaglia, Gustavo; Ciuperca, Ionel S.; Jai, Mohammed

    2005-06-01

    Current magnetic head sliders and other micromechanisms involve gas lubrication flows with gap thicknesses in the nanometer range and stepped shapes fabricated by lithographic methods. In mechanical simulations, rarefaction effects are accounted for by models that propose Poiseuille flow factors which exhibit singularities as the pressure tends to zero or +∞. In this Note we show that these models are indeed mathematically well-posed, even in the case of discontinuous gap thickness functions. Our results cover popular models that were not previously analyzed in the literature, such as the Fukui-Kaneko model and the second-order model, among others. To cite this article: G. Buscaglia et al., C. R. Mecanique 333 (2005).
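
    The pressure-dependent flow factors analyzed above can be illustrated with the classical slip-corrected Poiseuille flow factors used in nano-scale gas lubrication. A short sketch (Burgdorfer first-order and Hsia-Domoto second-order forms; the reference mean free path and the 10 nm gap are illustrative assumptions, and the singular behaviour as p tends to 0 follows from Kn being proportional to 1/p):

```python
def knudsen(p, h, lam0=65e-9, p0=101325.0):
    """Knudsen number Kn = lambda / h, with the mean free path of air
    scaling inversely with pressure (lam0 at reference pressure p0;
    lam0 here is an illustrative value)."""
    return lam0 * (p0 / p) / h

def q_first_order(kn):
    """Burgdorfer first-order slip Poiseuille flow factor."""
    return 1.0 + 6.0 * kn

def q_second_order(kn):
    """Hsia-Domoto second-order slip Poiseuille flow factor."""
    return 1.0 + 6.0 * kn + 6.0 * kn ** 2

# At nanometre gaps the correction dominates, and both factors
# grow without bound as the pressure tends to zero (Kn -> infinity).
for p in (101325.0, 10132.5, 1013.25):
    kn = knudsen(p, h=10e-9)
    print(p, q_first_order(kn), q_second_order(kn))
```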

  17. Collective synchronization of self/non-self discrimination in T cell activation, across multiple spatio-temporal scales

    Science.gov (United States)

    Altan-Bonnet, Gregoire

    The immune system is a collection of cells whose function is to eradicate pathogenic infections and malignant tumors while protecting healthy tissues. Recent work has delineated key molecular and cellular mechanisms associated with the ability to discriminate self from non-self agents. For example, structural studies have quantified the biophysical characteristics of antigenic molecules (those prone to trigger lymphocyte activation and a subsequent immune response). However, such molecular mechanisms were found to be highly unreliable at the individual cellular level. We will present recent efforts to build experimentally validated computational models of the immune responses at the collective cell level. Such models have become critical to delineate how higher-level integration through nonlinear amplification in signal transduction, dynamic feedback in lymphocyte differentiation and cell-to-cell communication allows the immune system to enforce reliable self/non-self discrimination at the organism level. In particular, we will present recent results demonstrating how T cells tune their antigen discrimination according to cytokine cues, and how competition for cytokine within polyclonal populations of cells shape the repertoire of responding clones. Additionally, we will present recent theoretical and experimental results demonstrating how competition between diffusion and consumption of cytokines determine the range of cell-cell communications within lymphoid organs. Finally, we will discuss how biochemically explicit models, combined with quantitative experimental validation, unravel the relevance of new feedbacks for immune regulations across multiple spatial and temporal scales.

  18. Forecasting rain events - Meteorological models or collective intelligence?

    Science.gov (United States)

    Arazy, Ofer; Halfon, Noam; Malkinson, Dan

    2015-04-01

    Collective intelligence is shared (or group) intelligence that emerges from the collective efforts of many individuals. Collective intelligence is the aggregate of individual contributions: from simple collective decision making to more sophisticated aggregations such as in crowdsourcing and peer-production systems. In particular, collective intelligence could be used in making predictions about future events, for example by using prediction markets to forecast election results, stock prices, or the outcomes of sport events. To date, there is little research regarding the use of collective intelligence for weather forecasting. The objective of this study is to investigate the extent to which collective intelligence could be utilized to accurately predict weather events, and in particular rainfall. Our analyses employ metrics of group intelligence, as well as compare the accuracy of the groups' predictions against the predictions of the standard model used by the National Meteorological Services. We report on preliminary results from a study conducted over the 2013-2014 and 2014-2015 winters. We built a web site that allows people to make predictions on precipitation levels at certain locations. During each competition participants were allowed to enter their precipitation forecasts (i.e. 'bets') at three locations, and these locations changed between competitions. A precipitation competition was defined as a 48-96 hour period (depending on the expected weather conditions), bets were open 24-48 hours prior to the competition, and during the betting period participants were allowed to change their bets without limitation. In order to explore the effect of transparency, betting mechanisms varied across the study's sites: full transparency (participants able to see each other's bets); partial transparency (participants see the group's average bet); and no transparency (no information on others' bets is made available). Several interesting findings emerged from

  19. Challenges of Modeling Flood Risk at Large Scales

    Science.gov (United States)

    Guin, J.; Simic, M.; Rowe, J.

    2009-04-01

    Flood risk management is a major concern for many nations and for the insurance sector in places where this peril is insured. A prerequisite for risk management, whether in the public sector or in the private sector, is an accurate estimation of the risk. Mitigation measures and traditional flood management techniques are most successful when the problem is viewed at a large regional scale, such that all inter-dependencies in a river network are well understood. From an insurance perspective, the jury is still out on whether flood is an insurable peril. However, with advances in modeling techniques and computer power it is possible to develop models that allow proper risk quantification at the scale suitable for a viable insurance market for the flood peril. In order to serve the insurance market, a model has to be event-simulation based and has to provide financial risk estimation that forms the basis for risk pricing, risk transfer and risk management at all levels of the insurance industry at large. In short, for a collection of properties, henceforth referred to as a portfolio, the critical output of the model is an annual probability distribution of economic losses from a single flood occurrence (flood event) or from an aggregation of all events in any given year. In this paper, the challenges of developing such a model are discussed in the context of Great Britain, for which a model has been developed. The model comprises several physically motivated components, so that the primary attributes of the phenomenon are accounted for. The first component, the rainfall generator, simulates a continuous series of rainfall events in space and time over thousands of years, which are physically realistic while maintaining the statistical properties of rainfall at all locations over the model domain. A physically based runoff generation module feeds all the rivers in Great Britain, whose total length of stream links amounts to about 60,000 km. A dynamical flow routing

  20. Multi-scale inference of interaction rules in animal groups using Bayesian model selection.

    Directory of Open Access Journals (Sweden)

    Richard P Mann

    Inference of interaction rules of animals moving in groups usually relies on an analysis of large-scale system behaviour. Models are tuned through repeated simulation until they match the observed behaviour. More recent work has used the fine-scale motions of animals to validate and fit the rules of interaction of animals in groups. Here, we use a Bayesian methodology to compare a variety of models to the collective motion of glass prawns (Paratya australiensis). We show that these exhibit a stereotypical 'phase transition', whereby an increase in density leads to the onset of collective motion in one direction. We fit models to this data, which range from: a mean-field model where all prawns interact globally; to a spatial Markovian model where prawns are self-propelled particles influenced only by the current positions and directions of their neighbours; up to non-Markovian models where prawns have 'memory' of previous interactions, integrating their experiences over time when deciding to change behaviour. We show that the mean-field model fits the large-scale behaviour of the system, but does not capture the observed locality of interactions. Traditional self-propelled particle models fail to capture the fine-scale dynamics of the system. The most sophisticated model, the non-Markovian model, provides a good match to the data at both the fine scale and in terms of reproducing global dynamics, while maintaining a biologically plausible perceptual range. We conclude that prawns' movements are influenced by not just the current direction of nearby conspecifics, but also those encountered in the recent past. Given the simplicity of prawns as a study system, our research suggests that self-propelled particle models of collective motion should, if they are to be realistic at multiple biological scales, include memory of previous interactions and other non-Markovian effects.
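
    A minimal sketch of the kind of self-propelled particle model compared above: agents on a one-dimensional ring that align with the majority direction of neighbours within a perception radius, subject to noise. All parameter values are illustrative and are not fitted to the prawn data:

```python
import random

def simulate_ring(n=50, length=10.0, radius=0.5, speed=0.05,
                  flip_noise=0.05, steps=500, seed=1):
    """Vicsek-style alignment on a 1-D ring: each agent adopts the
    majority direction (+1 / -1) of neighbours within `radius`, with a
    small probability of a random flip, then moves at fixed speed."""
    rng = random.Random(seed)
    pos = [rng.uniform(0, length) for _ in range(n)]
    dirn = [rng.choice([-1, 1]) for _ in range(n)]
    for _ in range(steps):
        new_dir = []
        for i in range(n):
            # Sum neighbour directions, respecting periodic boundaries.
            s = sum(dirn[j] for j in range(n)
                    if min(abs(pos[i] - pos[j]),
                           length - abs(pos[i] - pos[j])) <= radius)
            d = 1 if s > 0 else (-1 if s < 0 else dirn[i])
            if rng.random() < flip_noise:
                d = -d
            new_dir.append(d)
        dirn = new_dir
        pos = [(pos[i] + speed * dirn[i]) % length for i in range(n)]
    # Order parameter: 1 = fully aligned motion, 0 = no collective motion.
    return abs(sum(dirn)) / n

print(simulate_ring())
```

    Sweeping the density (n / length) in such a model reproduces the qualitative onset of collective motion described in the abstract; the non-Markovian extension would replace the instantaneous neighbour sum with a time-weighted memory of past neighbour directions.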

  1. Optimal Scaling of Interaction Effects in Generalized Linear Models

    Science.gov (United States)

    van Rosmalen, Joost; Koning, Alex J.; Groenen, Patrick J. F.

    2009-01-01

    Multiplicative interaction models, such as Goodman's (1981) RC(M) association models, can be a useful tool for analyzing the content of interaction effects. However, most models for interaction effects are suitable only for data sets with two or three predictor variables. Here, we discuss an optimal scaling model for analyzing the content of…

  2. Multiple-scale turbulence model in confined swirling jet predictions

    Science.gov (United States)

    Chen, C. P.

    1986-01-01

    A recently developed multiple-scale turbulence model which attempts to circumvent the deficiencies of earlier models by taking nonequilibrium spectral energy transfer into account is presented. The model's validity is tested by predicting the confined swirling coaxial jet flow in a sudden expansion. It is noted that, in order to account for anisotropic turbulence, a full Reynolds stress model is required.

  3. Continental scale modelling of geomagnetically induced currents

    OpenAIRE

    Sakharov Yaroslav; Prácser Ernö; Ádám Antal; Wik Magnus; Pirjola Risto; Viljanen Ari; Katkalov Juri

    2012-01-01

    The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the p...

  4. Multi-scale modeling for sustainable chemical production.

    Science.gov (United States)

    Zhuang, Kai; Bakshi, Bhavik R; Herrgård, Markus J

    2013-09-01

    With recent advances in metabolic engineering, it is now technically possible to produce a wide portfolio of existing petrochemical products from biomass feedstock. In recent years, a number of modeling approaches have been developed to support the engineering and decision-making processes associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow models that explore the ecological impacts of all economic activities. Research efforts that attempt to connect the models at different scales have been limited. Here, we review a number of existing modeling approaches and their applications at the scales of metabolism, bioreactor, overall process, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production. Copyright © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  5. Scaling of musculoskeletal models from static and dynamic trials

    DEFF Research Database (Denmark)

    Lund, Morten Enemark; Andersen, Michael Skipper; de Zee, Mark

    2015-01-01

    Subject-specific scaling of cadaver-based musculoskeletal models is important for accurate musculoskeletal analysis within multiple areas such as ergonomics, orthopaedics and occupational health. We present two procedures to scale 'generic' musculoskeletal models to match segment lengths and joint … parameters to a specific subject and compare the results to a simpler approach based on linear, segment-wise scaling. By incorporating data from functional and standing reference trials, the new scaling approaches reduce the model sensitivity to assumed model marker positions. For validation, we applied all … The presented methods solve part of this problem and rely less on manual identification of anatomical landmarks in the model. The work represents a step towards a more consistent methodology in musculoskeletal modelling.

  6. Exploring nonlinear subgrid-scale models and new characteristic length scales for large-eddy simulation

    NARCIS (Netherlands)

    Silvis, Maurits H.; Trias, F. Xavier; Abkar, M.; Bae, H.J.; Lozano-Duran, A.; Verstappen, R.W.C.P.; Moin, Parviz; Urzay, Javier

    2016-01-01

    We study subgrid-scale modeling for large-eddy simulation of anisotropic turbulent flows on anisotropic grids. In particular, we show how the addition of a velocity-gradient-based nonlinear model term to an eddy viscosity model provides a better representation of energy transfer. This is shown to

  7. Multi-scale modeling for sustainable chemical production

    DEFF Research Database (Denmark)

    Zhuang, Kai; Bakshi, Bhavik R.; Herrgard, Markus

    2013-01-01

    … associated with the development and implementation of a sustainable biochemical industry. The temporal and spatial scales of modeling approaches for sustainable chemical production vary greatly, ranging from metabolic models that aid the design of fermentative microbial strains to material and monetary flow …, chemical industry, economy, and ecosystem. In addition, we propose a multi-scale approach for integrating the existing models into a cohesive framework. The major benefit of this proposed framework is that the design and decision-making at each scale can be informed, guided, and constrained by simulations … and predictions at every other scale. In addition, the development of this multi-scale framework would promote cohesive collaborations across multiple traditionally disconnected modeling disciplines to achieve sustainable chemical production.

  8. Continental scale modelling of geomagnetically induced currents

    Directory of Open Access Journals (Sweden)

    Sakharov Yaroslav

    2012-09-01

    The EURISGIC project (European Risk from Geomagnetically Induced Currents) aims at deriving statistics of geomagnetically induced currents (GIC) in the European high-voltage power grids. Such a continent-wide system of more than 1500 substations and transmission lines requires updates of the previous modelling, which has dealt with national grids in fairly small geographic areas. We present here how GIC modelling can be conveniently performed on a spherical surface with minor changes in the previous technique. We derive the exact formulation to calculate geovoltages on the surface of a sphere and show its practical approximation in a fast vectorised form. Using the model of the old Finnish power grid and a much larger prototype model of European high-voltage power grids, we validate the new technique by comparing it to the old one. We also compare model results to measured data in the following cases: geoelectric field at the Nagycenk observatory, Hungary; GIC at a Russian transformer; GIC along the Finnish natural gas pipeline. In all cases, the new method works reasonably well.
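
    For a uniform geoelectric field, the geovoltage along a line reduces to the path integral of E . dl. A minimal sketch of the spherical-geometry version for one short segment (the small-segment approximation, coordinates and field values are illustrative and are not the EURISGIC formulation itself):

```python
import math

R_EARTH = 6371e3  # mean Earth radius, m

def geovoltage(lat1, lon1, lat2, lon2, e_north, e_east):
    """Voltage (V) induced along a short line segment on a spherical
    Earth by a uniform geoelectric field (V/m), approximating the
    path integral E . dl with northward and eastward arc lengths."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlam = math.radians(lon2 - lon1)
    d_north = R_EARTH * (phi2 - phi1)
    d_east = R_EARTH * math.cos(0.5 * (phi1 + phi2)) * dlam
    return e_north * d_north + e_east * d_east

# A 1 V/km northward field over a line running one degree due north
# (about 111 km) yields roughly 111 V.
print(geovoltage(60.0, 25.0, 61.0, 25.0, e_north=1e-3, e_east=0.0))
```

    Summing such segment voltages along each transmission line, for every line in the network, is what makes a vectorised formulation attractive at continental scale.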

  9. Nonlinear model predictive control based on collective neurodynamic optimization.

    Science.gov (United States)

    Yan, Zheng; Wang, Jun

    2015-04-01

    In general, nonlinear model predictive control (NMPC) entails solving a sequential global optimization problem with a nonconvex cost function or constraints. This paper presents a novel collective neurodynamic optimization approach to NMPC without linearization. Utilizing a group of recurrent neural networks (RNNs), the proposed collective neurodynamic optimization approach searches for optimal solutions to global optimization problems by emulating brainstorming. Each RNN is guaranteed to converge to a candidate solution by performing constrained local search. By exchanging information and iteratively improving the starting and restarting points of each RNN using the information of local and global best known solutions in a framework of particle swarm optimization, the group of RNNs is able to reach global optimal solutions to global optimization problems. The essence of the proposed collective neurodynamic optimization approach lies in the integration of capabilities of global search and precise local search. The simulation results of many cases are discussed to substantiate the effectiveness and the characteristics of the proposed approach.
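
    The combination of precise local search with PSO-style information exchange can be sketched as follows. Here plain gradient descent stands in for each recurrent neural network's constrained local search, and the Rastrigin function is an illustrative nonconvex objective; neither is taken from the paper:

```python
import math
import random

def rastrigin(x):
    """Standard nonconvex test function; global minimum 0 at the origin."""
    return 10 * len(x) + sum(xi * xi - 10 * math.cos(2 * math.pi * xi)
                             for xi in x)

def rastrigin_grad(x):
    return [2 * xi + 20 * math.pi * math.sin(2 * math.pi * xi) for xi in x]

def local_search(x, lr=0.002, iters=300):
    """Stand-in for one RNN's precise local search: gradient descent
    converging to a nearby local minimum."""
    for _ in range(iters):
        g = rastrigin_grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

def collective_search(n_agents=20, dim=2, rounds=5, seed=0):
    """Multiple local searchers share personal/global best solutions and
    restart from PSO-style updated points, emulating 'brainstorming'."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5.12, 5.12) for _ in range(dim)]
          for _ in range(n_agents)]
    vs = [[0.0] * dim for _ in range(n_agents)]
    pbest = [list(x) for x in xs]
    gbest = min(xs, key=rastrigin)
    for _ in range(rounds):
        xs = [local_search(x) for x in xs]          # precise local search
        for i, x in enumerate(xs):                  # update memories
            if rastrigin(x) < rastrigin(pbest[i]):
                pbest[i] = list(x)
        gbest = min(pbest + [gbest], key=rastrigin)
        for i in range(n_agents):                   # PSO-style restart rule
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
    return gbest, rastrigin(gbest)

best, val = collective_search()
print(best, val)
```

    The essential design point mirrors the abstract: each agent alone only reaches a candidate (local) solution, while the shared best-known solutions steer the group's restarting points toward the global optimum.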

  10. Dynamically Scaled Model Experiment of a Mooring Cable

    Directory of Open Access Journals (Sweden)

    Lars Bergdahl

    2016-01-01

    The dynamic response of mooring cables for marine structures is scale-dependent, and perfect dynamic similitude between full-scale prototypes and small-scale physical model tests is difficult to achieve. The best possible scaling is here sought by means of a specific set of dimensionless parameters, and the model accuracy is also evaluated by two alternative sets of dimensionless parameters. A special feature of the presented experiment is that a chain was scaled to have correct propagation celerity for longitudinal elastic waves, thus providing perfect geometrical and dynamic scaling in vacuum, which is unique. The scaling error due to incorrect Reynolds number seemed to be of minor importance. The 33 m experimental chain could then be considered a scaled 76 mm stud chain with the length 1240 m, i.e., at the length scale of 1:37.6. Due to the correct elastic scale, the physical model was able to reproduce the effect of snatch loads giving rise to tensional shock waves propagating along the cable. The results from the experiment were used to validate the newly developed cable-dynamics code, MooDy, which utilises a discontinuous Galerkin FEM formulation. The validation of MooDy proved to be successful for the presented experiments. The experimental data is made available here for validation of other numerical codes by publishing digitised time series of two of the experiments.
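
    The geometric scale implicit above follows directly from the stated lengths, and standard Froude scaling then fixes the other model-to-prototype factors. A short sketch (the 1:37.6 scale comes from the text; the derived factors are the usual Froude relations, shown for illustration rather than taken from the paper):

```python
import math

def froude_scale_factors(lam):
    """Prototype-to-model scale factors under Froude scaling with
    geometric scale 1:lam (gravity and fluid density unscaled)."""
    return {
        "length": lam,
        "time": math.sqrt(lam),
        "velocity": math.sqrt(lam),
        "mass": lam ** 3,
        "force": lam ** 3,
    }

# The 33 m model chain represents a 1240 m prototype chain:
lam = 1240.0 / 33.0
print(round(lam, 1))               # geometric scale, ~37.6
print(froude_scale_factors(lam))
```

    Matching the longitudinal elastic wave celerity, as the experiment does, additionally requires scaling the chain's axial stiffness so that wave speeds follow the same sqrt(lam) velocity factor.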

  11. Flavor Gauge Models Below the Fermi Scale

    Energy Technology Data Exchange (ETDEWEB)

    Babu, K. S. [Oklahoma State U.; Friedland, A. [SLAC; Machado, P. A.N. [Madrid, IFT; Mocioiu, I. [Penn State U.

    2017-05-04

    The mass and weak interaction eigenstates for the quarks of the third generation are very well aligned, an empirical fact for which the Standard Model offers no explanation. We explore the possibility that this alignment is due to an additional gauge symmetry in the third generation. Specifically, we construct and analyze an explicit, renormalizable model with a gauge boson, $X$, corresponding to the $B-L$ symmetry of the third family. Having a relatively light (in the MeV to multi-GeV range), flavor-nonuniversal gauge boson results in a variety of constraints from different sources. By systematically analyzing 20 different constraints, we identify the most sensitive probes: kaon, $D^+$ and Upsilon decays, $D^0-\bar{D}^0$ mixing, atomic parity violation, and neutrino scattering and oscillations. For the new gauge coupling $g_X$ in the range $(10^{-2} - 10^{-4})$ the model is shown to be consistent with the data. Possible ways of testing the model in $b$ physics, top and $Z$ decays, direct collider production and neutrino oscillation experiments, where one can observe nonstandard matter effects, are outlined. The choice of leptons to carry the new force is ambiguous, resulting in additional phenomenological implications, such as non-universality in semileptonic bottom decays. The proposed framework provides interesting connections between neutrino oscillations, flavor and collider physics.

  12. Collective action and technology development: up-scaling of innovation in rice farming communities in Northern Thailand

    NARCIS (Netherlands)

    Limnirankul, B.

    2007-01-01

    Keywords: small-scale rice farmers, collective action, community rice seed, local innovations, green manure crop, contract farming, participatory technology development, up-scaling, technological configuration, grid-group theory,

  13. The Versatility of SpAM: A Fast, Efficient, Spatial Method of Data Collection for Multidimensional Scaling

    Science.gov (United States)

    Hout, Michael C.; Goldinger, Stephen D.; Ferguson, Ryan W.

    2013-01-01

    Although traditional methods to collect similarity data (for multidimensional scaling [MDS]) are robust, they share a key shortcoming. Specifically, the possible pairwise comparisons in any set of objects grow rapidly as a function of set size. This leads to lengthy experimental protocols, or procedures that involve scaling stimulus subsets. We…

  14. Scaling up from field to region for wind erosion prediction using a field-scale wind erosion model and GIS

    Science.gov (United States)

    Zobeck, T.M.; Parker, N.C.; Haskell, S.; Guoding, K.

    2000-01-01

    Factors that affect wind erosion such as surface vegetative and other cover, soil properties and surface roughness usually change spatially and temporally at the field-scale to produce important field-scale variations in wind erosion. Accurate estimation of wind erosion when scaling up from fields to regions, while maintaining meaningful field-scale process details, remains a challenge. The objectives of this study were to evaluate the feasibility of using a field-scale wind erosion model with a geographic information system (GIS) to scale up to regional levels and to quantify the differences in wind erosion estimates produced by different scales of soil mapping used as a data layer in the model. A GIS was used in combination with the revised wind erosion equation (RWEQ), a field-scale wind erosion model, to estimate wind erosion for two 50 km2 areas. Landsat Thematic Mapper satellite imagery from 1993 with 30 m resolution was used as a base map. The GIS database layers included land use, soils, and other features such as roads. The major land use was agricultural fields. Data on 1993 crop management for selected fields of each crop type were collected from local government agency offices and used to 'train' the computer to classify land areas by crop and type of irrigation (agroecosystem) using commercially available software. The land area of the agricultural land uses was overestimated by 6.5% in one region (Lubbock County, TX, USA) and underestimated by about 21% in an adjacent region (Terry County, TX, USA). The total estimated wind erosion potential for Terry County was about four times that estimated for adjacent Lubbock County. The difference in potential erosion among the counties was attributed to regional differences in surface soil texture. In a comparison of different soil map scales in Terry County, the generalised soil map had over 20% more of the land area and over 15% greater erosion potential in loamy sand soils than did the detailed soil map. 

  15. MOUNTAIN-SCALE COUPLED PROCESSES (TH/THC/THM) MODELS

    Energy Technology Data Exchange (ETDEWEB)

    Y.S. Wu

    2005-08-24

    This report documents the development and validation of the mountain-scale thermal-hydrologic (TH), thermal-hydrologic-chemical (THC), and thermal-hydrologic-mechanical (THM) models. These models provide technical support for screening of features, events, and processes (FEPs) related to the effects of coupled TH/THC/THM processes on mountain-scale unsaturated zone (UZ) and saturated zone (SZ) flow at Yucca Mountain, Nevada (BSC 2005 [DIRS 174842], Section 2.1.1.1). The purpose and validation criteria for these models are specified in "Technical Work Plan for: Near-Field Environment and Transport: Coupled Processes (Mountain-Scale TH/THC/THM, Drift-Scale THC Seepage, and Drift-Scale Abstraction) Model Report Integration" (BSC 2005 [DIRS 174842]). Model results are used to support exclusion of certain FEPs from the total system performance assessment for the license application (TSPA-LA) model on the basis of low consequence, consistent with the requirements of 10 CFR 63.342 [DIRS 173273]. Outputs from this report are not direct feeds to the TSPA-LA. All the FEPs related to the effects of coupled TH/THC/THM processes on mountain-scale UZ and SZ flow are discussed in Sections 6 and 7 of this report. The mountain-scale coupled TH/THC/THM processes models numerically simulate the impact of nuclear waste heat release on the natural hydrogeological system, including a representation of heat-driven processes occurring in the far field. The mountain-scale TH simulations provide predictions for thermally affected liquid saturation, gas- and liquid-phase fluxes, and water and rock temperature (together called the flow fields). The main focus of the TH model is to predict the changes in water flux driven by evaporation/condensation processes, and drainage between drifts. The TH model captures mountain-scale three-dimensional flow effects, including lateral diversion and mountain-scale flow patterns. The mountain-scale THC model evaluates TH effects on

  16. Sensitivities in global scale modeling of isoprene

    Directory of Open Access Journals (Sweden)

    R. von Kuhlmann

    2004-01-01

    Full Text Available A sensitivity study of the treatment of isoprene and related parameters in 3D atmospheric models was conducted using the global model of tropospheric chemistry MATCH-MPIC. A total of twelve sensitivity scenarios which can be grouped into four thematic categories were performed. These four categories consist of simulations with different chemical mechanisms, different assumptions concerning the deposition characteristics of intermediate products, assumptions concerning the nitrates from the oxidation of isoprene, and variations of the source strengths. The largest differences in ozone compared to the reference simulation occurred when a different isoprene oxidation scheme was used (up to 30-60%, or about 10 nmol/mol). The largest differences in the abundance of peroxyacetylnitrate (PAN) were found when the isoprene emission strength was reduced by 50% and in tests with increased or decreased efficiency of the deposition of intermediates. The deposition assumptions were also found to have a significant effect on the upper tropospheric HOx production. Different implicit assumptions about the loss of intermediate products were identified as a major reason for the deviations among the tested isoprene oxidation schemes. The total tropospheric burden of O3 calculated in the sensitivity runs is increased compared to the background methane chemistry by 26±9 Tg(O3), from 273 Tg(O3) to an average of 299 Tg(O3) across the sensitivity runs. Thus, there is a spread of ±35% in the overall effect of isoprene in the model among the tested scenarios. This range of uncertainty and the much larger local deviations found in the test runs suggest that the treatment of isoprene in global models can only be seen as a first-order estimate at present, and point towards specific processes in need of focused future work.

  17. Modeling Human Behavior at a Large Scale

    Science.gov (United States)

    2012-01-01

    impacts its recognition performance for both activities. The example we just gave illustrates one type of freeing false positives. The hallucinated freeings... vision have worked on the problem of recognizing events in videos of sporting events, such as impressive recent work on learning models of baseball plays... data can only be disambiguated by considering arbitrarily long temporal sequences. In general, however, both our work and that in machine vision

  18. Modeling coastal upwelling around a small-scale coastline promontory

    Science.gov (United States)

    Haas, K. A.; Cai, D.; Freismuth, T. M.; MacMahan, J.; Di Lorenzo, E.; Suanda, S. H.; Kumar, N.; Miller, A. J.; Edwards, C. A.

    2016-12-01

    On the US west coast, northerly winds drive coastal ocean upwelling, an important process which brings cold, nutrient-rich water to the nearshore. The coastline geometry has been shown to be a significant factor in the strength of the upwelling process. In particular, the upwelling in the lee of major headlands has been shown to be enhanced. Recent observations from the Pt. Sal region on the coast of southern California have shown the presence of cooler water south of a small (350 m) rocky promontory (Mussel Pt.) during upwelling events. The hypothesis is that the small-scale promontory is creating a lee-side enhancement of the upwelling. To shed some light on this process, numerical simulations of the inner shelf region centered about Pt. Sal are conducted with the ROMS module of the COAWST model system. The model system is configured with four nested grids with resolutions ranging from approximately 600 m, to the outer shelf (~200 m), to the inner shelf (~66 m), and finally to the surf zone (~22 m). A solution from a 1 km grid encompassing our domain provides the boundary conditions for the 600 m grid. Barotropic tidal forcing is incorporated at the 600 m grid to provide tidal variability. This model system, with realistic topography and bathymetry, winds, and tides, is able to isolate the forcing mechanisms that explain the emergence of the cold water mass. The simulations focus on the time period of June - July, 2015, corresponding to the pilot study in which observational experiment data were collected. The experiment data consist in part of in situ measurements, including moorings with conductivity, temperature, depth, and flow-velocity sensors. The model simulations are able to reproduce the important flow features, including the cooler water mass south of Mussel Pt. As hypothesized, the strength of the upwelling is enhanced on the lee side of Mussel Pt. In addition, periods of wind relaxation where the upwelling ceases and even begins to transform towards downwelling is

  19. Anomalous scaling in an age-dependent branching model

    OpenAIRE

    Keller-Schmidt, Stephanie; Tugrul, Murat; Eguíluz, Víctor M.; Hernández-García, Emilio; Klemm, Konstantin

    2010-01-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age $\tau$ as $\tau^{-\alpha}$. Depending on the exponent $\alpha$, the scaling of tree depth with tree size $n$ displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition ($\alpha=1$) tree depth grows as $(\log n)^2$. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus p...
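The depth scaling can be explored with a quick simulation. The sketch below is an illustrative reconstruction (not the authors' code): it grows a tree by repeatedly splitting a leaf chosen with probability proportional to age^(-alpha) and reports the mean leaf depth. With alpha = 0 every leaf is equally likely (random trees, logarithmic depth); large alpha favors young branches and yields much deeper, chain-like trees.

```python
import random

def mean_depth(n_leaves, alpha, seed=0):
    """Grow a tree by age-dependent branching; return the mean leaf depth.

    At step t, each leaf born at step b is split with probability
    proportional to (t - b) ** (-alpha); a split replaces the leaf
    with two children one level deeper."""
    rng = random.Random(seed)
    leaves = [(0, 0)]  # (depth, birth step)
    for t in range(1, n_leaves):
        weights = [(t - birth) ** -alpha for _, birth in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, t), (depth + 1, t)]
    return sum(d for d, _ in leaves) / len(leaves)
```

With n = 300 leaves, `mean_depth(300, 0.0)` should land near the ~2 ln n expected for random (Yule-type) trees, while `mean_depth(300, 2.0)` comes out markedly larger, reflecting the algebraic-growth regime.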

  20. Modeling human-flood interactions: Collective action and community resilience.

    Science.gov (United States)

    Yu, D. J.; Sangwan, N.; Sung, K.

    2016-12-01

    Stylized models of socio-hydrology have mainly used social memory aspects such as community awareness or sensitivity to connect hydrologic change and social response. However, social memory alone does not satisfactorily capture the details of how human behavior is translated into collective action for water resources governance. Nor is it the only mechanism by which the two-way feedbacks of socio-hydrology can be operationalized. This study contributes towards bridging this gap by developing a stylized model of a human-flood system that includes two additional drivers of change: (1) institutions for collective action, and (2) connections to an external economic system. Motivated by the case of community-managed flood protection systems (polders) in coastal Bangladesh, we use the model to understand critical general features that affect long-term resilience of human-flood systems. Our findings suggest that occasional adversity can enhance long-term resilience. Allowing some hydrological variability to enter into the polder can increase its adaptive capacity and resilience through the preservation of social memory and institutions for collective action. Further, there are potential tradeoffs associated with optimization of flood resilience through structural measures. By reducing sensitivity to flooding, the system may become more fragile under the double impact of flooding and economic change.

  1. Modeling closure of circular wounds through coordinated collective motion.

    Science.gov (United States)

    Li, David S; Zimmermann, Juliane; Levine, Herbert

    2016-02-12

    Wound healing enables tissues to restore their original states, and is achieved through collective cell migration into the wound space, contraction of the wound edge via an actomyosin filament 'purse-string,' as well as cell division. Recently, experimental techniques have been developed to create wounds with various regular morphologies in epithelial monolayers, and these experiments of circular closed-contour wounds support coordinated lamellipodial cell crawling as the predominant driver of gap closure. Through utilizing a particle-based mechanical tissue simulation, exhibiting long-range coordination of cell motility, we computationally model these closed-contour experiments with a high level of agreement between experimentally observed and simulated wound closure dynamics and tissue velocity profiles. We also determine the sensitivity of wound closure time in the model to changes in cell motility force and division rate. Our simulation results confirm that circular wounds can close due to collective cell migration without the necessity for a purse-string mechanism or for cell division, and show that the alignment mechanism of cellular motility force with velocity, leading to collective motion in the model, may speed up wound closure.
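The closing mechanism rests on motility forces aligning with cell velocity to produce collective motion. A minimal Vicsek-style alignment rule (a standard simplified stand-in, not the particle-based tissue simulation used in the study; all parameter values are illustrative) shows how local alignment plus noise produces coherent migration from random initial headings:

```python
import numpy as np

rng = np.random.default_rng(2)
N, L = 300, 10.0              # number of cells, periodic box size
R, v0, eta = 1.0, 0.5, 0.15   # interaction radius, speed, noise amplitude

pos = rng.uniform(0, L, (N, 2))
theta = rng.uniform(-np.pi, np.pi, N)  # motility directions

def polar_order(theta):
    """Magnitude of the mean heading vector: 0 = disordered, 1 = aligned."""
    return np.abs(np.exp(1j * theta).mean())

for _ in range(200):
    # pairwise displacements with periodic wrapping
    d = pos[:, None, :] - pos[None, :, :]
    d -= L * np.round(d / L)
    neighbors = (d ** 2).sum(-1) < R ** 2  # includes self
    # each cell turns toward the mean heading of its neighbors, plus noise
    sin_m = (neighbors * np.sin(theta)[None, :]).sum(1)
    cos_m = (neighbors * np.cos(theta)[None, :]).sum(1)
    theta = np.arctan2(sin_m, cos_m) + eta * rng.uniform(-np.pi, np.pi, N)
    pos = (pos + v0 * np.stack([np.cos(theta), np.sin(theta)], axis=1)) % L

print(round(polar_order(theta), 2))  # high order emerges from random start
```

At this noise level the group settles into a strongly ordered state; raising `eta` toward its critical value destroys the coherent migration, loosely analogous to losing the coordinated crawling that closes the wound.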

  2. Simultaneous nested modeling from the synoptic scale to the LES scale for wind energy applications

    DEFF Research Database (Denmark)

    Liu, Yubao; Warner, Tom; Liu, Yuewei

    2011-01-01

    This paper describes an advanced multi-scale weather modeling system, WRF–RTFDDA–LES, designed to simulate synoptic scale (~2000 km) to small- and micro-scale (~100 m) circulations of real weather in wind farms on simultaneous nested grids. This modeling system is built upon the National Center...... grids and seamlessly providing realistic mesoscale weather forcing to drive a large eddy simulation (LES) model within the WRF framework. The WRF based RTFDDA LES modeling capability is referred to as WRF–RTFDDA–LES. In this study, WRF–RTFDDA–LES is employed to simulate real weather in a major wind farm...... located in northern Colorado with six nested domains. The grid sizes of the nested domains are 30, 10, 3.3, 1.1, 0.370 and 0.123 km, respectively. The model results are compared with wind–farm anemometer measurements and are found to capture many intra-farm wind features and microscale flows. Additional...

  3. Fractal Modeling and Scaling in Natural Systems - Editorial

    Science.gov (United States)

    The special issue of Ecological complexity journal on Fractal Modeling and Scaling in Natural Systems contains representative examples of the status and evolution of data-driven research into fractals and scaling in complex natural systems. The editorial discusses contributions to understanding rela...

  4. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    ... time scales involved in determining macroscopic properties has been attempted by several workers with varying degrees of success. This paper will review the recently developed quasicontinuum method which is an attempt to bridge the length scales in a single seamless model with the aid of the finite element method.

  5. Scaling Properties of a Hybrid Fermi-Ulam-Bouncer Model

    Directory of Open Access Journals (Sweden)

    Diego F. M. Oliveira

    2009-01-01

    under the framework of scaling description. The model is described by using a two-dimensional nonlinear area preserving mapping. Our results show that the chaotic regime below the lowest energy invariant spanning curve is scaling invariant and the obtained critical exponents are used to find a universal plot for the second momenta of the average velocity.

  6. Ares I Scale Model Acoustic Test Lift-Off Acoustics

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janie D.

    2011-01-01

    The lift-off acoustic (LOA) environment is an important design factor for any launch vehicle. For the Ares I vehicle, the LOA environments were derived by scaling flight data from other launch vehicles. The Ares I LOA predicted environments are compared to the Ares I Scale Model Acoustic Test (ASMAT) preliminary results.

  7. Advances in Modelling of Large Scale Coastal Evolution

    NARCIS (Netherlands)

    Stive, M.J.F.; De Vriend, H.J.

    1995-01-01

    Attention to climate-change impacts on the world's coastlines has established large-scale coastal evolution as a topic of wide interest. Some more recent advances in this field, focusing on the potential of mathematical models for the prediction of large-scale coastal evolution, are discussed.

  8. Large-Scale Data Collection Metadata Management at the National Computation Infrastructure

    Science.gov (United States)

    Wang, J.; Evans, B. J. K.; Bastrakova, I.; Ryder, G.; Martin, J.; Duursma, D.; Gohar, K.; Mackey, T.; Paget, M.; Siddeswara, G.

    2014-12-01

    Data Collection management has become an essential activity at the National Computation Infrastructure (NCI) in Australia. NCI's partners (CSIRO, Bureau of Meteorology, Australian National University, and Geoscience Australia), supported by the Australian Government and Research Data Storage Infrastructure (RDSI), have established a national data resource that is co-located with high-performance computing. This paper addresses the metadata management of these data assets over their lifetime. NCI manages 36 data collections (10+ PB) categorised as earth system sciences, climate and weather model data assets and products, earth and marine observations and products, geosciences, terrestrial ecosystem, water management and hydrology, astronomy, social science and biosciences. The data is largely sourced from NCI partners, the custodians of many of the national scientific records, and major research community organisations. The data is made available in a HPC and data-intensive environment - a ~56000 core supercomputer, virtual labs on a 3000 core cloud system, and data services. By assembling these large national assets, new opportunities have arisen to harmonise the data collections, making a powerful cross-disciplinary resource. To support the overall management, a Data Management Plan (DMP) has been developed to record the workflows, procedures, the key contacts and responsibilities. The DMP has fields that can be exported to the ISO19115 schema and to the collection level catalogue of GeoNetwork. The subset or file level metadata catalogues are linked with the collection level through parent-child relationship definition using UUID. A number of tools have been developed that support interactive metadata management, bulk loading of data, and support for computational workflows or data pipelines. NCI creates persistent identifiers for each of the assets. The data collection is tracked over its lifetime, and the recognition of the data providers, data owners, data

  9. Evaluation of a plot-scale methane emission model using eddy covariance observations and footprint modelling

    Directory of Open Access Journals (Sweden)

    A. Budishchev

    2014-09-01

    Full Text Available Most plot-scale methane emission models – of which many have been developed in the recent past – are validated using data collected with the closed-chamber technique. This method, however, suffers from a low spatial representativeness and a poor temporal resolution. Also, during a chamber-flux measurement the air within a chamber is separated from the ambient atmosphere, which negates the influence of wind on emissions. Additionally, some methane models are validated by upscaling fluxes based on the area-weighted averages of modelled fluxes, and by comparing those to the eddy covariance (EC) flux. This technique is rather inaccurate, as the area of upscaling might be different from the EC tower footprint, therefore introducing a significant mismatch. In this study, we present an approach to validate plot-scale methane models with EC observations using the footprint-weighted average method. Our results show that the fluxes obtained by the footprint-weighted average method are of the same magnitude as the EC flux. More importantly, the temporal dynamics of the EC flux on a daily timescale are also captured (r² = 0.7). In contrast, using the area-weighted average method yielded a low (r² = 0.14) correlation with the EC measurements. This shows that the footprint-weighted average method is preferable when validating methane emission models with EC fluxes for areas with a heterogeneous and irregular vegetation pattern.
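The two upscaling schemes differ only in the weights applied to the per-class modelled fluxes. A toy sketch with hypothetical numbers (three vegetation classes; fluxes and weights are made up):

```python
import numpy as np

# Hypothetical modelled CH4 fluxes for three vegetation classes (mg m-2 d-1)
flux = np.array([12.0, 35.0, 5.0])

# Area fractions of the classes over the whole upscaling domain...
area_frac = np.array([0.5, 0.2, 0.3])
# ...versus footprint weights for one averaging period, i.e. each class's
# contribution to the flux the EC tower actually "sees" (made-up values)
footprint_w = np.array([0.2, 0.6, 0.2])

area_weighted = float((flux * area_frac).sum())         # ≈ 14.5
footprint_weighted = float((flux * footprint_w).sum())  # ≈ 24.4
print(area_weighted, footprint_weighted)
```

When the footprint happens to be dominated by a high-emitting class, the area-weighted average misses what the tower actually measures; this is the mismatch the abstract attributes to the area-weighted method.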

  10. Visualization and modeling of smoke transport over landscape scales

    Science.gov (United States)

    Glenn P. Forney; William Mell

    2007-01-01

    Computational tools have been developed at the National Institute of Standards and Technology (NIST) for modeling fire spread and smoke transport. These tools have been adapted to address fire scenarios that occur in the wildland urban interface (WUI) over kilometer-scale distances. These models include the smoke plume transport model ALOFT (A Large Open Fire plume...

  11. Atomic scale simulations for improved CRUD and fuel performance modeling

    Energy Technology Data Exchange (ETDEWEB)

    Andersson, Anders David Ragnar [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Cooper, Michael William Donald [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-01-06

    A more mechanistic description of fuel performance codes can be achieved by deriving models and parameters from atomistic scale simulations rather than fitting models empirically to experimental data. The same argument applies to modeling deposition of corrosion products on fuel rods (CRUD). Here are some results from publications in 2016 carried out using the CASL allocation at LANL.

  12. Meso-scale modeling of a forested landscape

    DEFF Research Database (Denmark)

    Dellwik, Ebba; Arnqvist, Johan; Bergström, Hans

    2014-01-01

    Meso-scale models are increasingly used for estimating wind resources for wind turbine siting. In this study, we investigate how the Weather Research and Forecasting (WRF) model performs using standard model settings in two different planetary boundary layer schemes for a forested landscape and how...

  13. Genome-scale modeling for metabolic engineering.

    Science.gov (United States)

    Simeonidis, Evangelos; Price, Nathan D

    2015-03-01

    We focus on the application of constraint-based methodologies and, more specifically, flux balance analysis in the field of metabolic engineering, and enumerate recent developments and successes of the field. We also review computational frameworks that have been developed with the express purpose of automatically selecting optimal gene deletions for achieving improved production of a chemical of interest. The application of flux balance analysis methods in rational metabolic engineering requires a metabolic network reconstruction and a corresponding in silico metabolic model for the microorganism in question. For this reason, we additionally present a brief overview of automated reconstruction techniques. Finally, we emphasize the importance of integrating metabolic networks with regulatory information-an area which we expect will become increasingly important for metabolic engineering-and present recent developments in the field of metabolic and regulatory integration.

  14. Modelling of evapotranspiration at field and landscape scales. Abstract

    DEFF Research Database (Denmark)

    Overgaard, Jesper; Butts, M.B.; Rosbjerg, Dan

    2002-01-01

    The overall aim of this project is to couple a non-hydrostatic atmospheric model (ARPS) to an integrated hydrological model (MIKE SHE) to investigate atmospheric and hydrological feedbacks at different scales. To ensure a consistent coupling a new land-surface component based on a modified...... Shuttleworth-Wallace scheme was implemented in MIKE SHE. To validate the new land-surface component at different scales, the hydrological model was applied to an intensively monitored 10 km² agricultural area in Denmark with a resolution of 40 meters. The model is forced with half-hourly meteorological...... observations from a nearby weather station. Detailed land-use and soil maps were used to set up the model. Leaf area index was derived from NDVI (Normalized Difference Vegetation Index) images. To validate the model at field scale the simulated evapotranspiration rates were compared to eddy...

  15. Economic Well-Being and Poverty Among the Elderly : An Analysis Based on a Collective Consumption Model

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2008-01-01

    We apply the collective consumption model of Browning, Chiappori and Lewbel (2006) to analyse economic well-being and poverty among the elderly. The model focuses on individual preferences, a consumption technology that captures the economies of scale of living in a couple, and a sharing rule that

  16. Multi-Scale Computational Models for Electrical Brain Stimulation

    Science.gov (United States)

    Seo, Hyeon; Jun, Sung C.

    2017-01-01

    Electrical brain stimulation (EBS) is an appealing method to treat neurological disorders. To achieve optimal stimulation effects and a better understanding of the underlying brain mechanisms, neuroscientists have proposed computational modeling studies for a decade. Recently, multi-scale models that combine a volume conductor head model and multi-compartmental models of cortical neurons have been developed to predict stimulation effects on the macroscopic and microscopic levels more precisely. As the need for better computational models continues to increase, we overview here recent multi-scale modeling studies; we focused on approaches that coupled a simplified or high-resolution volume conductor head model and multi-compartmental models of cortical neurons, and constructed realistic fiber models using diffusion tensor imaging (DTI). Further implications for achieving better precision in estimating cellular responses are discussed. PMID:29123476

  17. A test of the Circumplex Model of Marital and Family Systems using the Clinical Rating Scale.

    Science.gov (United States)

    Thomas, V; Ozechowski, T J

    2000-10-01

    Most studies of the Olson Circumplex Model of Marital and Family Systems have utilized a version of the Family Adaptability and Cohesion Evaluation Scales (FACES). Because FACES does not appear to operationalize the curvilinear dimension of the Circumplex Model, researchers have been pessimistic about the model's validity. However, the Clinical Rating Scale (CRS) has received some support as a curvilinear measure of the Circumplex Model. Therefore, we used the CRS rather than FACES to test the validity of the Circumplex Model hypotheses. Using a structural equation-modeling analytical approach, we found support for the hypotheses pertaining to the effects of cohesion and communication on family functioning. However, we found no support for the hypotheses pertaining to the concept of adaptability. We discuss these results in the context of previous studies of the Circumplex Model using FACES. Based on the collective findings, we propose a preliminary reformulation of the Circumplex Model.

  18. Predictions of a model of weak scale from dynamical breaking of scale invariance

    Directory of Open Access Journals (Sweden)

    Giulio Maria Pelaggi

    2015-04-01

    Full Text Available We consider a model where the weak and the DM scale arise at one loop from the Coleman–Weinberg mechanism. We perform a precision computation of the model predictions for the production cross section of a new Higgs-like scalar and for the direct-detection cross section of the DM particle candidate.

  19. Scaling Surface Fluxes from Tower Footprint to Global Model Pixel Scale Using Multi-Satellite Data Fusion

    Science.gov (United States)

    Anderson, M. C.; Hain, C.; Gao, F.; Semmens, K. A.; Yang, Y.; Schull, M. A.; Ring, T.; Kustas, W. P.; Alfieri, J. G.

    2014-12-01

    There is a fundamental challenge in evaluating performance of global land-surface and climate modeling systems, given that few in-situ observation sets adequately sample surface conditions representative at the global model pixel scale (10-100km). For example, a typical micrometeorological flux tower samples a relatively small footprint ranging from 100m to 1km, depending on tower height and environmental conditions. There is a clear need for diagnostic tools that can effectively bridge this gap in scale, and serve as a means of benchmarking global prognostic modeling systems under current conditions. This paper discusses a multi-scale energy balance modeling system (the Atmosphere-Land Exchange Inverse model and disaggregation utility: ALEXI/DisALEXI) that fuses flux maps generated with thermal infrared (TIR) imagery collected by multiple satellite platforms to estimate daily surface fluxes from field to global scales. These diagnostic assessments, with land-surface temperature (LST) as the primary indicator of surface moisture status, operate under fundamentally different constraints than prognostic land-surface models based on precipitation and water balance, and therefore can serve as a semi-independent benchmark. Furthermore, LST can be retrieved from TIR imagery over a broad range of spatiotemporal resolutions: from several meters (airborne systems; periodically) to ~100m (Landsat; bi-weekly) to 1km (Moderate Resolution Imaging Spectroradiometer - MODIS; daily) to 3-10km (geostationary; hourly). Applications of ALEXI/DisALEXI to flux sites within the US and internationally are described, evaluating daily evapotranspiration retrievals generated at 30m resolution. Annual timeseries of maps at this scale can be useful for better understanding local heterogeneity in the tower vicinity and dependences of observed fluxes on wind direction. If reasonable multi-year performance is demonstrated at the tower footprint scale for flux networks such as the National

  20. Measurement of returns to scale in radial DEA models

    Science.gov (United States)

    Krivonozhko, V. E.; Lychev, A. V.; Førsund, F. R.

    2017-01-01

    A general approach is proposed in order to measure returns to scale and scale elasticity at projection points in the radial data envelopment analysis (DEA) models. In the first stage, a relative interior point belonging to the optimal face is found using a specially elaborated method. In previous work it was proved that any relative interior point of a face has the same returns to scale as any other interior point of this face. In the second stage, we propose to determine the returns to scale at the relative interior point found in the first stage.
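For background, the radial input-oriented CCR model that such approaches build on is a small linear program, and a common single-stage heuristic reads returns to scale off the sum of the optimal intensity weights λ. The sketch below (illustrative single-input, single-output data; scipy's linprog as the solver) shows that heuristic; its ambiguity under alternative optimal λ is precisely the kind of issue the paper's interior-point construction is designed to avoid.

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical single-input, single-output DMUs
x = np.array([2.0, 4.0, 8.0])  # inputs
y = np.array([2.0, 4.0, 6.0])  # outputs

def ccr_input_oriented(o):
    """Solve min theta s.t. sum(lam*x) <= theta*x[o], sum(lam*y) >= y[o]."""
    n = x.size
    c = np.r_[1.0, np.zeros(n)]         # minimize theta over (theta, lam)
    A_ub = np.vstack([np.r_[-x[o], x],  # sum(lam*x) - theta*x[o] <= 0
                      np.r_[0.0, -y]])  # -sum(lam*y) <= -y[o]
    b_ub = np.array([0.0, -y[o]])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (n + 1))
    return res.x[0], res.x[1:]

theta, lam = ccr_input_oriented(2)  # evaluate the third (inefficient) DMU
# lam.sum() > 1 suggests decreasing returns to scale at the projection,
# subject to the alternative-optima caveat noted above
print(theta, lam.sum())
```

Here the third DMU gets radial efficiency theta = 0.75, and every optimal λ sums to more than one, so the heuristic classifies its projection as exhibiting decreasing returns to scale.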

  1. Modeling and Simulation of Synchronous Threshold in Vent Collective Behavior

    Directory of Open Access Journals (Sweden)

    Yaofeng Zhang

    2014-01-01

    Full Text Available With the strengthening of social contradictions, the outbreak of vent collective behavior tends to be frequent. The essence of vent collective behavior is the emergence of synchronization. In order to explore the threshold of consensus synchronization in vent collective behavior, a mathematical model and a corresponding simulation model based on multi-agent are proposed. The results of analysis by mean-field theory and simulation experiments show the following. (1) There is a threshold Kc for consensus synchronization in a global-coupling and homogeneous group, and when the system parameter K is greater than Kc, consensus synchronization emerges. Otherwise the system cannot achieve synchronization. The conclusion is verified by further study of multi-agent simulation. (2) Compared with the global-coupling situation, the process of synchronization is delayed in a local-coupling and homogeneous group. (3) For a local-coupling and heterogeneous group, consensus dissemination can achieve synchronization only when the effects of the parameters meet the threshold requirements of consensus synchronization.
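The threshold behavior described in (1) matches the classic mean-field picture of Kuramoto-type synchronization (whether the paper's model reduces exactly to Kuramoto is an assumption here). For globally coupled phase oscillators with Cauchy-distributed natural frequencies of half-width gamma, the mean-field threshold is Kc = 2*gamma, and the order parameter jumps from near zero to a large value as K crosses it:

```python
import numpy as np

def order_param(theta):
    """Kuramoto order parameter: magnitude r and mean phase psi."""
    z = np.exp(1j * theta).mean()
    return np.abs(z), np.angle(z)

def simulate(K, omega, steps=2000, dt=0.05, seed=1):
    """Euler integration of the mean-field Kuramoto model."""
    rng = np.random.default_rng(seed)
    theta = rng.uniform(0.0, 2.0 * np.pi, omega.size)
    for _ in range(steps):
        r, psi = order_param(theta)
        theta += dt * (omega + K * r * np.sin(psi - theta))
    return order_param(theta)[0]

rng = np.random.default_rng(42)
gamma = 0.5
# Cauchy(0, gamma) natural frequencies via inverse-CDF sampling
omega = gamma * np.tan(np.pi * (rng.random(500) - 0.5))
Kc = 2.0 * gamma  # mean-field synchronization threshold
r_below = simulate(0.3 * Kc, omega)
r_above = simulate(3.0 * Kc, omega)
print(r_below, r_above)  # small vs. large order parameter
```

Below Kc the final order parameter stays at finite-size noise levels, while above Kc it approaches the mean-field prediction sqrt(1 - Kc/K).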

  2. Phenomenological Aspects of No-Scale Inflation Models

    CERN Document Server

    Ellis, John; Nanopoulos, Dimitri V; Olive, Keith A

    2015-01-01

    We discuss phenomenological aspects of no-scale supergravity inflationary models motivated by compactified string models, in which the inflaton may be identified either as a K\"ahler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index $n_s$ and the tensor-to-scalar ratio $r$ that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type $m_0 = B_0 = A_0 = 0$, of the CMSSM type with universal $A_0$ and $m_0 \

  3. [Modeling continuous scaling of NDVI based on fractal theory].

    Science.gov (United States)

    Luan, Hai-Jun; Tian, Qing-Jiu; Yu, Tao; Hu, Xin-Li; Huang, Yan; Du, Ling-Tong; Zhao, Li-Min; Wei, Xi; Han, Jie; Zhang, Zhou-Wei; Li, Shao-Peng

    2013-07-01

    Scale effect is one of the most important scientific problems in remote sensing. The scale effect of quantitative remote sensing can be used to study the relationship between retrievals from images of different resolutions, and its study has become an effective way to confront challenges such as the validation of quantitative remote sensing products. Traditional up-scaling methods cannot describe the scale-changing features of retrievals over an entire series of scales; meanwhile, they face serious parameter-correction issues because of the variation of imaging parameters among different sensors, such as geometrical correction, spectral correction, etc. Utilizing a single-sensor image, a fractal methodology was adopted to solve these problems. Taking NDVI (computed from land surface radiance) as an example and based on an Enhanced Thematic Mapper Plus (ETM+) image, a scheme was proposed to model continuous scaling of retrievals. The experimental results indicated that: (1) for NDVI, a scale effect exists, and it can be described by a fractal model of continuous scaling; (2) the fractal method is suitable for validation of NDVI. All of this proved that fractal analysis is an effective methodology for studying the scaling of quantitative remote sensing.
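The NDVI scale effect arises because NDVI is a nonlinear (ratio) function of band radiance, so NDVI computed from aggregated radiances differs from pixel-level NDVI. A synthetic sketch (made-up uniform radiance fields, not the paper's ETM+ data) makes the effect visible across a series of aggregation factors:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 30 m red and near-infrared surface-radiance fields
red = rng.uniform(0.05, 0.25, size=(256, 256))
nir = rng.uniform(0.20, 0.60, size=(256, 256))

def ndvi(nir, red):
    return (nir - red) / (nir + red)

def block_mean(band, f):
    """Aggregate a field to a coarser grid by f x f block averaging."""
    h, w = band.shape
    return band.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

fine_mean = ndvi(nir, red).mean()  # mean of pixel-level NDVI
for f in (2, 4, 8, 16):
    # NDVI computed from radiances aggregated to the coarser scale
    coarse_mean = ndvi(block_mean(nir, f), block_mean(red, f)).mean()
    print(f, round(coarse_mean - fine_mean, 4))  # nonzero scale effect
```

Because the ratio does not commute with averaging, the difference grows with the aggregation factor; fitting how a retrieval drifts across such a series of scales is the kind of continuous-scaling behavior the fractal model is meant to capture.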

  4. Ecohydrological modeling for large-scale environmental impact assessment.

    Science.gov (United States)

    Woznicki, Sean A; Nejadhashemi, A Pouyan; Abouali, Mohammad; Herman, Matthew R; Esfahanian, Elaheh; Hamaamin, Yaseen A; Zhang, Zhen

    2016-02-01

    Ecohydrological models are frequently used to assess the biological integrity of unsampled streams. These models vary in complexity and scale, and their utility depends on their final application. Tradeoffs are usually made in model scale: large-scale models are useful for determining broad impacts of human activities on biological conditions, while regional-scale (e.g. watershed or ecoregion) models provide stakeholders greater detail at the individual stream-reach level. Given these tradeoffs, the objective of this study was to develop large-scale stream health models with reach-level accuracy similar to regional-scale models, thereby allowing for impact assessments and improved decision-making capabilities. To accomplish this, four measures of biological integrity (Ephemeroptera, Plecoptera, and Trichoptera taxa (EPT), Family Index of Biotic Integrity (FIBI), Hilsenhoff Biotic Index (HBI), and fish Index of Biotic Integrity (IBI)) were modeled based on four thermal classes (cold, cold-transitional, cool, and warm) of streams that broadly dictate the distribution of aquatic biota in Michigan. The Soil and Water Assessment Tool (SWAT) was used to simulate streamflow and water quality in seven watersheds, and the Hydrologic Index Tool was used to calculate 171 ecologically relevant flow-regime variables. Unique variables were selected for each thermal class using a Bayesian variable selection method. The variables were then used in the development of adaptive neuro-fuzzy inference system (ANFIS) models of EPT, FIBI, HBI, and IBI. ANFIS model accuracy improved when accounting for stream thermal class rather than developing a global model. Copyright © 2015 Elsevier B.V. All rights reserved.
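The benefit of class-specific models over a single global model can be sketched with synthetic data in which the response to a flow variable differs by thermal class. Variable names and effect sizes below are illustrative stand-ins, not the study's SWAT or ANFIS outputs.

```python
import numpy as np

# Synthetic stream-health response that depends on thermal class.
rng = np.random.default_rng(1)
n = 200
flow = rng.uniform(0.0, 1.0, n)                  # a flow-regime predictor
thermal_class = rng.integers(0, 2, n)            # 0 = cold, 1 = warm
true_slope = np.where(thermal_class == 0, 2.0, -1.5)
health = true_slope * flow + rng.normal(0.0, 0.1, n)

def fit_rmse(x, y):
    """RMSE of a least-squares line fitted to (x, y)."""
    a, b = np.polyfit(x, y, 1)
    return float(np.sqrt(np.mean((a * x + b - y) ** 2)))

global_rmse = fit_rmse(flow, health)
class_rmse = np.mean([fit_rmse(flow[thermal_class == c], health[thermal_class == c])
                      for c in (0, 1)])
print(global_rmse, class_rmse)   # the class-specific fits are tighter
```

Because the two classes respond in opposite directions, a single global line fits neither well, mirroring the paper's finding that accounting for thermal class improves accuracy.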

  5. FINAL REPORT: Mechanistically-Base Field Scale Models of Uranium Biogeochemistry from Upscaling Pore-Scale Experiments and Models

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Brian D. [Oregon State Univ., Corvallis, OR (United States)

    2013-11-04

    Biogeochemical reactive transport processes in the subsurface environment are important to many contemporary environmental issues of significance to DOE. Quantification of risks and impacts associated with environmental management options, and design of remediation systems where needed, require that we have at our disposal reliable predictive tools (usually in the form of numerical simulation models). However, it is well known that even the most sophisticated reactive transport models available today have poor predictive power, particularly when applied at the field scale. Although the lack of predictive ability is associated in part with our inability to characterize the subsurface and limitations in computational power, significant advances have been made in both of these areas in recent decades and can be expected to continue. In this research, we examined the upscaling (pore to Darcy, and Darcy to field) of the problem of bioremediation via biofilms in porous media. The principal idea was to start with a conceptual description of the bioremediation process at the pore scale, and apply upscaling methods to formally develop the appropriate upscaled model at the so-called Darcy scale. The purpose was to determine (1) what forms the upscaled models would take, and (2) how one might parameterize such upscaled models for applications to bioremediation in the field. We were able to effectively upscale the bioremediation process to explain how the pore-scale phenomena are linked to the field scale. The end product of this research was a set of upscaled models that can be used to help predict field-scale bioremediation. These models are mechanistic, in the sense that they directly incorporate pore-scale information, but upscaled so that only the essential features of the process are needed to predict the effective parameters that appear in the model. In this way, a direct link between the microscale and the field scale was made, but the upscaling process
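As a minimal illustration of how pore-scale information collapses into an effective upscaled parameter, consider a first-order reaction in a planar biofilm, where diffusion limitation is summarized by the Thiele modulus. This is a textbook sketch of the general idea, not the report's actual upscaled model; all parameter values are illustrative.

```python
import math

def effectiveness_factor(k, D, L):
    """Fraction of the intrinsic first-order rate k realized in a planar
    biofilm of thickness L with diffusivity D (Thiele modulus phi = L*sqrt(k/D))."""
    phi = L * math.sqrt(k / D)
    return math.tanh(phi) / phi

# Thin biofilm: nearly kinetics-limited; thick biofilm: diffusion-limited.
eta_thin = effectiveness_factor(k=1e-3, D=1e-9, L=1e-5)    # phi = 0.01
eta_thick = effectiveness_factor(k=1e-3, D=1e-9, L=1e-2)   # phi = 10
print(eta_thin, eta_thick)
```

The effectiveness factor is exactly the kind of "effective parameter" an upscaled Darcy-scale model carries in place of resolving the pore-scale concentration field.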

  6. Systematic phenotyping of a large-scale Candida glabrata deletion collection reveals novel antifungal tolerance genes.

    Directory of Open Access Journals (Sweden)

    Tobias Schwarzmüller

    2014-06-01

    Full Text Available The opportunistic fungal pathogen Candida glabrata is a frequent cause of candidiasis, causing infections ranging from superficial to life-threatening disseminated disease. The inherent tolerance of C. glabrata to azole drugs makes this pathogen a serious clinical threat. To identify novel genes implicated in antifungal drug tolerance, we have constructed a large-scale C. glabrata deletion library consisting of 619 unique, individually bar-coded mutant strains, each lacking one specific gene, all together representing almost 12% of the genome. Functional analysis of this library in a series of phenotypic and fitness assays identified numerous genes required for growth of C. glabrata under normal or specific stress conditions, as well as a number of novel genes involved in tolerance to clinically important antifungal drugs such as azoles and echinocandins. We identified 38 deletion strains displaying strongly increased susceptibility to caspofungin, 28 of which encode proteins that have not previously been linked to echinocandin tolerance. Our results demonstrate the potential of the C. glabrata mutant collection as a valuable resource in functional genomics studies of this important fungal pathogen of humans, and to facilitate the identification of putative novel antifungal drug targets and virulence genes.

  7. A New Model for Building Digital Science Education Collections

    Science.gov (United States)

    Niepold, F.; McCaffrey, M.; Morrill, C.; Ganse, J.; Weston, T.

    2005-12-01

    The Polar Regions play an integral role in how our Earth system operates. However, the Polar Regions are only marginally studied in the K-12 classroom in the United States. The International Polar Year's (IPY) coordinated campaign of polar observations, research, and analysis, multidisciplinary in scope and international in participation, offers a powerful opportunity for the K-12 classroom. The IPY's scientific objective to better understand the key roles of the Polar Regions in global processes will give students a window into the poles and this unique region's role in the Earth system. IPY will produce careful, useful scientific information that will advance our understanding of the Polar Regions and their connections to the rest of the globe. The IPY is an opportunity to inspire the next generation of very young Earth system scientists. The IPY's draft education & outreach position paper asks a key question that must guide future educational projects: "Why are the polar regions and polar research important to all people on Earth?" In efforts to coordinate educational activities and collaborate with international projects, United States national agencies, and other educational initiatives, the purpose of this session is to explore potential partnerships, while primarily recommending a model for educational product development and review. During such a large international science endeavor, numerous educational activities and opportunities are developed, but these educational programs can suffer from too many unconnected options being available to teachers and students. Additionally, activities are often incompatible with each other, making classroom implementation unnecessarily complex and prohibitively time-consuming for teachers. An educational-activity collection technique newly developed for DLESE offers an effective model for IPY product gap analysis and development.
The Climate Change Collection developed as a pilot project for the Digital Library

  8. Cancer systems biology and modeling: microscopic scale and multiscale approaches.

    Science.gov (United States)

    Masoudi-Nejad, Ali; Bidkhori, Gholamreza; Hosseini Ashtiani, Saman; Najafi, Ali; Bozorgmehr, Joseph H; Wang, Edwin

    2015-02-01

    Cancer has become known as a complex and systematic disease on macroscopic, mesoscopic and microscopic scales. Systems biology employs state-of-the-art computational theories and high-throughput experimental data to model and simulate complex biological processes such as cancer, which involves genetic and epigenetic factors as well as complex intracellular and extracellular interaction networks. In this paper, different systems biology modeling techniques, such as systems of differential equations, stochastic methods, Boolean networks, Petri nets, cellular automata methods and agent-based systems, are concisely discussed. We compare these formalisms and address the span of applicability they can bear on emerging cancer modeling and simulation approaches. Different scales of cancer modeling, namely microscopic, mesoscopic and macroscopic scales, are explained, followed by an illustration of angiogenesis at the microscopic scale of cancer modeling. Then, the modeling of cancer cell proliferation and survival is examined on a microscopic scale, and the modeling of multiscale tumor growth is explained along with its advantages. Copyright © 2014 Elsevier Ltd. All rights reserved.
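Of the formalisms listed in this record, systems of differential equations are the simplest to sketch. The example below integrates logistic tumor growth, dN/dt = rN(1 − N/K), with a plain forward Euler scheme; the rate r, carrying capacity K, and step size are illustrative values, not taken from the paper.

```python
# Logistic tumor growth integrated with a simple forward Euler scheme.
def simulate_logistic(n0=1e3, r=0.2, K=1e9, dt=0.1, steps=1000):
    """Return the cell count after `steps` Euler updates of dN/dt = r*N*(1 - N/K)."""
    n = n0
    for _ in range(steps):
        n += dt * r * n * (1.0 - n / K)
    return n

final = simulate_logistic()
print(final)   # approaches the carrying capacity K from below
```

The same growth term reappears inside agent-based and cellular-automaton tumor models as a local proliferation rule, which is one reason the formalisms are often compared on this problem.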

  9. Nucleon electric dipole moments in high-scale supersymmetric models

    Energy Technology Data Exchange (ETDEWEB)

    Hisano, Junji [Kobayashi-Maskawa Institute for the Origin of Particles and the Universe (KMI),Nagoya University,Nagoya 464-8602 (Japan); Department of Physics, Nagoya University,Nagoya 464-8602 (Japan); Kavli IPMU (WPI), UTIAS, University of Tokyo,Kashiwa, Chiba 277-8584 (Japan); Kobayashi, Daiki; Kuramoto, Wataru; Kuwahara, Takumi [Department of Physics, Nagoya University,Nagoya 464-8602 (Japan)

    2015-11-12

    The electric dipole moments (EDMs) of the electron and nucleons are promising probes of new physics. In generic high-scale supersymmetric (SUSY) scenarios, such as models based on a mixture of anomaly and gauge mediation, the gluino makes an additional contribution to the nucleon EDMs. In this paper, we studied the effect of the CP-violating gluonic Weinberg operator induced by the gluino chromoelectric dipole moment in high-scale SUSY scenarios, and we evaluated the nucleon and electron EDMs in these scenarios. We found that in generic high-scale SUSY models the nucleon EDMs may receive a sizable contribution from the Weinberg operator. Thus, it is important to compare the nucleon EDMs with the electron EDM in order to discriminate among high-scale SUSY models.

  10. Modelling Influence and Opinion Evolution in Online Collective Behaviour.

    Directory of Open Access Journals (Sweden)

    Corentin Vande Kerckhove

    Full Text Available Opinion evolution and judgment revision are mediated through social influence. Based on a large crowdsourced in vitro experiment (n = 861), it is shown how a consensus model can be used to predict opinion evolution in online collective behaviour. It is the first time the predictive power of a quantitative model of opinion dynamics has been tested against a real dataset. Unlike previous research on the topic, the model was validated on data which did not serve to calibrate it. This avoids favoring more complex models over simpler ones and prevents overfitting. The model is parametrized by the influenceability of each individual, a factor representing to what extent individuals incorporate external judgments. The prediction accuracy depends on prior knowledge of the participants' past behaviour. Several situations reflecting data availability are compared. When data on a participant is scarce, data from previous participants is used to predict how the new participant will behave. Judgment revision includes unpredictable variations which limit the potential for prediction. A first measure of unpredictability is proposed, based on a specific control experiment. More than two thirds of the prediction errors are found to be due to the unpredictability of the human judgment revision process rather than to model imperfection.
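A hedged sketch of the consensus-type revision rule the abstract describes: each individual moves toward the group judgment by an individual influenceability weight. The weights and opinions below are illustrative, not the experiment's fitted estimates.

```python
import numpy as np

# Consensus-type judgment revision: each individual moves toward the group
# mean by an individual influenceability weight alpha_i.
def revise(opinions, alphas):
    """One round of judgment revision toward the group mean."""
    group = opinions.mean()
    return (1.0 - alphas) * opinions + alphas * group

opinions = np.array([10.0, 40.0, 70.0])
alphas = np.array([0.0, 0.5, 1.0])    # immune, partially, fully influenceable
revised = revise(opinions, alphas)
print(revised)   # [10. 40. 40.]
```

Fitting alpha per participant from observed revisions is the calibration step; the paper's contribution is validating such a model on revisions it was not calibrated on.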

  11. A Bayesian subgroup analysis using collections of ANOVA models.

    Science.gov (United States)

    Liu, Jinzhong; Sivaganesan, Siva; Laud, Purushottam W; Müller, Peter

    2017-07-01

    We develop a Bayesian approach to subgroup analysis using ANOVA models with multiple covariates, extending an earlier work. We assume a two-arm clinical trial with normally distributed response variable. We also assume that the covariates for subgroup finding are categorical and are a priori specified, and parsimonious easy-to-interpret subgroups are preferable. We represent the subgroups of interest by a collection of models and use a model selection approach to finding subgroups with heterogeneous effects. We develop suitable priors for the model space and use an objective Bayesian approach that yields multiplicity adjusted posterior probabilities for the models. We use a structured algorithm based on the posterior probabilities of the models to determine which subgroup effects to report. Frequentist operating characteristics of the approach are evaluated using simulation. While our approach is applicable in more general cases, we mainly focus on the 2 × 2 case of two covariates each at two levels for ease of presentation. The approach is illustrated using a real data example. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
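The model-selection idea can be sketched for the 2 × 2 case with simulated trial data: enumerate a few candidate treatment-effect structures and score them. BIC weights are used here as a crude stand-in for the paper's multiplicity-adjusted posterior probabilities; the data, effect size, and model list are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 400
a = rng.integers(0, 2, n)            # binary covariate A
b = rng.integers(0, 2, n)            # binary covariate B
trt = rng.integers(0, 2, n)          # treatment arm
# Simulated truth: the treatment only works in the A = 1 subgroup.
y = 2.0 * trt * (a == 1) + rng.normal(0.0, 1.0, n)

def bic(design, y):
    """BIC of an ordinary least-squares fit of y on the design matrix."""
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    rss = np.sum((y - design @ beta) ** 2)
    return n * np.log(rss / n) + design.shape[1] * np.log(n)

ones = np.ones(n)
models = {
    "no effect":      np.column_stack([ones]),
    "overall effect": np.column_stack([ones, trt]),
    "effect in A=1":  np.column_stack([ones, trt * (a == 1)]),
    "effect in B=1":  np.column_stack([ones, trt * (b == 1)]),
}
scores = {name: bic(X, y) for name, X in models.items()}
best = min(scores, key=scores.get)
print(best)
```

Reporting the subgroup corresponding to the best-supported model, rather than testing every subgroup separately, is what keeps the procedure parsimonious and multiplicity-aware.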

  12. Measuring and modeling behavioral decision dynamics in collective evacuation.

    Directory of Open Access Journals (Sweden)

    Jean M Carlson

    Full Text Available Identifying and quantifying factors influencing human decision making remains an outstanding challenge, impacting the performance and predictability of social and technological systems. In many cases, system failures are traced to human factors including congestion, overload, miscommunication, and delays. Here we report results of a behavioral network science experiment, targeting decision making in a natural disaster. In a controlled laboratory setting, our results quantify several key factors influencing individual evacuation decision making. The experiment includes tensions between broadcast and peer-to-peer information, and contrasts the effects of temporal urgency associated with the imminence of the disaster with the effects of limited shelter capacity for evacuees. Based on empirical measurements of the cumulative rate of evacuations as a function of the instantaneous disaster likelihood, we develop a quantitative model for decision making that captures remarkably well the main features of observed collective behavior across many different scenarios. Moreover, this model captures the sensitivity of individual- and population-level decision behaviors to external pressures, and systematic deviations from the model provide meaningful estimates of variability in the collective response. Identification of robust methods for quantifying human decisions in the face of risk has implications for policy in disasters and other threat scenarios, specifically the development and testing of robust strategies for training and control of evacuations that account for human behavior and network topologies.
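A minimal stand-in for the kind of decision model described, with an individual's evacuation probability rising with the instantaneous disaster likelihood. The logistic form and its parameters are assumptions for illustration, not the fitted model from the experiment.

```python
import math

def evac_prob(likelihood, threshold=0.5, steepness=10.0):
    """Probability of choosing to evacuate at a given disaster likelihood,
    modeled as a logistic function of likelihood (illustrative parameters)."""
    return 1.0 / (1.0 + math.exp(-steepness * (likelihood - threshold)))

probs = [evac_prob(L) for L in (0.1, 0.5, 0.9)]
print(probs)   # low likelihood -> rarely evacuate; high -> almost always
```

In the paper's setting, shelter capacity and peer information would shift or sharpen such a response curve; comparing fitted curves across scenarios is how the external pressures are quantified.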

  13. Standard model with spontaneously broken quantum scale invariance

    Science.gov (United States)

    Ghilencea, D. M.; Lalak, Z.; Olszewski, P.

    2017-09-01

    We explore the possibility that scale symmetry is a quantum symmetry that is broken only spontaneously, and apply this idea to the standard model. We compute the quantum corrections to the potential of the Higgs field (ϕ) in the classically scale-invariant version of the standard model (m_ϕ = 0 at tree level) extended by the dilaton (σ). The tree-level potential of ϕ and σ, dictated by scale invariance, may contain nonpolynomial effective operators, e.g., ϕ⁶/σ², ϕ⁸/σ⁴, ϕ¹⁰/σ⁶, etc. The one-loop scalar potential is scale invariant, since the loop calculations manifestly preserve the scale symmetry, with the dimensional regularization subtraction scale μ generated spontaneously by the dilaton vacuum expectation value, μ ∼ ⟨σ⟩. The Callan-Symanzik equation of the potential is verified in the presence of the gauge, Yukawa, and nonpolynomial operators. The couplings of the nonpolynomial operators have nonzero beta functions that we can actually compute from the quantum potential. At the quantum level, the Higgs mass is protected by spontaneously broken scale symmetry, even though the theory is nonrenormalizable. We compare the one-loop potential to its counterpart computed in the "traditional" dimensional regularization scheme that breaks scale symmetry explicitly (μ = constant) in the presence of the nonpolynomial operators at tree level.
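Schematically, the nonpolynomial operators mentioned in the abstract enter the tree-level scale-invariant potential as terms of total mass dimension four; the couplings λₙ below are just illustrative labels, not the paper's notation.

```latex
V(\phi,\sigma)\;\supset\;
\lambda_{6}\,\frac{\phi^{6}}{\sigma^{2}}
\;+\;\lambda_{8}\,\frac{\phi^{8}}{\sigma^{4}}
\;+\;\lambda_{10}\,\frac{\phi^{10}}{\sigma^{6}}
\;+\;\cdots,
\qquad
\mu \sim \langle\sigma\rangle .
```

Each term is dimension four, so classical scale invariance is preserved, and the subtraction scale μ arises only through the dilaton expectation value rather than being put in by hand.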

  14. Description of Muzzle Blast by Modified Ideal Scaling Models

    Directory of Open Access Journals (Sweden)

    Kevin S. Fansler

    1998-01-01

    Full Text Available Gun blast data from a large variety of weapons are scaled and presented for both the instantaneous energy release and the constant energy deposition rate models. For both ideal explosion models, similar amounts of data scatter occur for the peak overpressure but the instantaneous energy release model correlated the impulse data significantly better, particularly for the region in front of the gun. Two parameters that characterize gun blast are used in conjunction with the ideal scaling models to improve the data correlation. The gun-emptying parameter works particularly well with the instantaneous energy release model to improve data correlation. In particular, the impulse, especially in the forward direction of the gun, is correlated significantly better using the instantaneous energy release model coupled with the use of the gun-emptying parameter. The use of the Mach disc location parameter improves the correlation only marginally. A predictive model is obtained from the modified instantaneous energy release correlation.
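The instantaneous-energy-release correlation rests on classical blast (cube-root) scaling: distances are scaled by the cube root of the released energy, so shots of different energy collapse onto one curve. This is a generic sketch of the scaling rule; the numbers are illustrative, not the paper's data.

```python
def scaled_distance(r_m, energy_j):
    """Blast-scaled distance: r / E^(1/3) (cube-root energy scaling)."""
    return r_m / energy_j ** (1.0 / 3.0)

# A shot with 8x the energy reaches the same scaled distance at 2x the range,
# so both locations should see comparable peak overpressure.
z1 = scaled_distance(10.0, 1.0e6)
z2 = scaled_distance(20.0, 8.0e6)
print(z1, z2)
```

The paper's gun-emptying and Mach-disc-location parameters are corrections layered on top of this ideal scaling to tighten the data collapse, particularly for impulse in front of the gun.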

  15. On Scaling Modes and Balancing Stochastic, Discretization, and Modeling Error

    Science.gov (United States)

    Brown, J.

    2015-12-01

    We consider accuracy-cost tradeoffs and the problem of finding Pareto optimal configurations for stochastic forward and inverse problems. As the target accuracy is changed, we should use different physical models, stochastic models, discretizations, and solution algorithms. In this spectrum, we see different scientifically-relevant scaling modes, thus different opportunities and limitations on parallel computers and emerging architectures.
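The accuracy-cost tradeoff described above can be sketched as a Pareto-frontier selection over candidate configurations (model, discretization, solver), where lower is better in both cost and error. The points below are hypothetical configurations, not results from the abstract.

```python
def pareto_front(points):
    """Keep (cost, error) points not dominated by any other point
    (lower is better in both coordinates)."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

# Hypothetical configurations: (compute cost, solution error).
configs = [(1.0, 0.50), (2.0, 0.20), (3.0, 0.25), (4.0, 0.05)]
front = pareto_front(configs)
print(front)   # (3.0, 0.25) is dominated by (2.0, 0.20)
```

Changing the target accuracy moves you along this frontier, which is where the different "scaling modes" (different physics, discretizations, and algorithms) become optimal.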

  16. A Scale Model of Cation Exchange for Classroom Demonstration.

    Science.gov (United States)

    Guertal, E. A.; Hattey, J. A.

    1996-01-01

    Describes a project that developed a scale model of cation exchange that can be used for a classroom demonstration. The model uses kaolinite clay, nails, plywood, and foam balls to enable students to gain a better understanding of the exchange complex of soil clays. (DDR)

  17. Modeling nano-scale grain growth of intermetallics

    Indian Academy of Sciences (India)

    Administrator

    Abstract. The Monte Carlo simulation is utilized to model the nano-scale grain growth of two nano- crystalline materials, Pd81Zr19 and RuAl. In this regard, the relationship between the real time and the time unit of simulation, i.e. Monte Carlo step (MCS), is determined. The results of modeling show that with increasing time ...
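The Monte Carlo grain-growth approach can be sketched with a minimal 2-D Potts model, where one Monte Carlo step (MCS) attempts one reorientation per lattice site and the grain-boundary energy decreases as grains coarsen. The lattice size, number of orientations Q, and the zero-temperature acceptance rule are simplifications for illustration, not the paper's calibrated model.

```python
import random

random.seed(0)
N, Q = 32, 10                        # lattice size, number of grain orientations
grid = [[random.randrange(Q) for _ in range(N)] for _ in range(N)]

def unlike_neighbours(i, j, s):
    """Number of nearest neighbours (periodic boundaries) differing from s."""
    nbrs = (grid[(i - 1) % N][j], grid[(i + 1) % N][j],
            grid[i][(j - 1) % N], grid[i][(j + 1) % N])
    return sum(1 for q in nbrs if q != s)

def mcs():
    """One Monte Carlo step: N*N attempted flips, accepted when dE <= 0."""
    for _ in range(N * N):
        i, j = random.randrange(N), random.randrange(N)
        new = random.randrange(Q)
        if unlike_neighbours(i, j, new) <= unlike_neighbours(i, j, grid[i][j]):
            grid[i][j] = new

def boundary_fraction():
    """Fraction of neighbour bonds lying on a grain boundary."""
    total = sum(unlike_neighbours(i, j, grid[i][j])
                for i in range(N) for j in range(N))
    return total / (4 * N * N)

f_before = boundary_fraction()
for _ in range(20):
    mcs()
f_after = boundary_fraction()
print(f_before, f_after)   # boundary fraction drops as grains grow
```

Mapping elapsed MCS to real time, as the abstract describes, requires calibrating the simulated coarsening rate against experimental grain-size measurements.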

  18. Multi-scale modeling strategies in materials science—The ...

    Indian Academy of Sciences (India)

    Unknown

    modeling strategies that bridge the length-scales. The quasicontinuum method pivots on a strategy which attempts to take advantage of both conventional atomistic simulations and continuum mechanics to develop a seamless methodology for the modeling of defects such as dislocations, grain boundaries and cracks, and ...

  19. Role of scaling in the statistical modelling of finance

    Indian Academy of Sciences (India)

    Modelling the evolution of a financial index as a stochastic process is a problem awaiting a full, satisfactory solution since it was first formulated by Bachelier in 1900. Here it is shown that the scaling with time of the return probability density function sampled from the historical series suggests a successful model.

  1. A first large-scale flood inundation forecasting model

    Science.gov (United States)

    Schumann, G. J.-P.; Neal, J. C.; Voisin, N.; Andreadis, K. M.; Pappenberger, F.; Phanthuwongpakdee, N.; Hall, A. C.; Bates, P. D.

    2013-10-01

    At present, continental- to global-scale flood forecasting predicts discharge at a point, with little attention to the detail and accuracy of local-scale inundation predictions. Yet, inundation variables are of interest and all flood impacts are inherently local in nature. This paper proposes a large-scale flood inundation ensemble forecasting model that uses the best available data and modeling approaches in data-scarce areas. The model was built for the Lower Zambezi River to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. ECMWF ensemble forecast (ENS) data were used to force the VIC (Variable Infiltration Capacity) hydrologic model, which simulated and routed daily flows to the input boundary locations of a 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of channels that play a key role in flood wave propagation. We therefore employed a novel subgrid channel scheme to describe the river network in detail while representing the floodplain at an appropriate scale. The modeling system was calibrated using channel water levels from satellite laser altimetry and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of between one and two model resolutions of an observed flood edge, and inundation area agreement was on average 86%. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2.

  2. Transdisciplinary application of the cross-scale resilience model

    Science.gov (United States)

    Sundstrom, Shana M.; Angeler, David G.; Garmestani, Ahjond S.; Garcia, Jorge H.; Allen, Craig R.

    2014-01-01

    The cross-scale resilience model was developed in ecology to explain the emergence of resilience from the distribution of ecological functions within and across scales, and as a tool to assess resilience. We propose that the model and the underlying discontinuity hypothesis are relevant to other complex adaptive systems, and can be used to identify and track changes in system parameters related to resilience. We explain the theory behind the cross-scale resilience model, review the cases where it has been applied to non-ecological systems, and discuss some examples of social-ecological, archaeological/anthropological, and economic systems where a cross-scale resilience analysis could add a quantitative dimension to our current understanding of system dynamics and resilience. We argue that the scaling and diversity parameters suitable for a resilience analysis of ecological systems are appropriate for a broad suite of systems where non-normative quantitative assessments of resilience are desired. Our planet is currently characterized by fast environmental and social change, and the cross-scale resilience model has the potential to quantify resilience across many types of complex adaptive systems.

  3. Ionocovalency and Applications 1. Ionocovalency Model and Orbital Hybrid Scales

    Directory of Open Access Journals (Sweden)

    Yonghe Zhang

    2010-11-01

    Full Text Available Ionocovalency (IC), a quantitative dual nature of the atom, is defined and correlated with quantum-mechanical potential to describe quantitatively the dual properties of the bond. An orbital hybrid IC model scale, IC, and an IC electronegativity scale, XIC, are proposed, wherein the ionicity and the covalent radius are determined by spectroscopy. Being composed of the ionic function I and the covalent function C, the model describes quantitatively the dual properties of bond strengths, charge density and ionic potential. Based on the atomic electron configuration and various quantum-mechanically built-up dual parameters, the model forms a dual method of multiple-functional prediction, which has far more versatile applications to molecular properties than traditional electronegativity scales. Hydrogen has unconventional values of IC and XIC, lower than those of boron. The IC model agrees fairly well with bond-property data and satisfactorily explains chemical observations of elements throughout the Periodic Table.

  4. Traffic assignment models in large-scale applications

    DEFF Research Database (Denmark)

    Rasmussen, Thomas Kjær

    focuses on large-scale applications and contributes with methods to actualise the true potential of disaggregate models. To achieve this target, contributions are given to several components of traffic assignment modelling, by (i) enabling the utilisation of the increasingly available data sources...... on individual behaviour in the model specification, (ii) proposing a method to use disaggregate Revealed Preference (RP) data to estimate utility functions and provide evidence on the value of congestion and the value of reliability, (iii) providing a method to account for individual mis...... is essential in the development and validation of realistic models for large-scale applications. Nowadays, modern technology facilitates easy access to RP data and allows large-scale surveys. The resulting datasets are, however, usually very large and hence data processing is necessary to extract the pieces...

  5. A first large-scale flood inundation forecasting model

    Energy Technology Data Exchange (ETDEWEB)

    Schumann, Guy J-P; Neal, Jeffrey C.; Voisin, Nathalie; Andreadis, Konstantinos M.; Pappenberger, Florian; Phanthuwongpakdee, Kay; Hall, Amanda C.; Bates, Paul D.

    2013-11-04

    At present continental to global scale flood forecasting focusses on predicting at a point discharge, with little attention to the detail and accuracy of local scale inundation predictions. Yet, inundation is actually the variable of interest and all flood impacts are inherently local in nature. This paper proposes a first large scale flood inundation ensemble forecasting model that uses best available data and modeling approaches in data scarce areas and at continental scales. The model was built for the Lower Zambezi River in southeast Africa to demonstrate current flood inundation forecasting capabilities in large data-scarce regions. The inundation model domain has a surface area of approximately 170k km2. ECMWF meteorological data were used to force the VIC (Variable Infiltration Capacity) macro-scale hydrological model which simulated and routed daily flows to the input boundary locations of the 2-D hydrodynamic model. Efficient hydrodynamic modeling over large areas still requires model grid resolutions that are typically larger than the width of many river channels that play a key role in flood wave propagation. We therefore employed a novel sub-grid channel scheme to describe the river network in detail whilst at the same time representing the floodplain at an appropriate and efficient scale. The modeling system was first calibrated using water levels on the main channel from the ICESat (Ice, Cloud, and land Elevation Satellite) laser altimeter and then applied to predict the February 2007 Mozambique floods. Model evaluation showed that simulated flood edge cells were within a distance of about 1 km (one model resolution) compared to an observed flood edge of the event. Our study highlights that physically plausible parameter values and satisfactory performance can be achieved at spatial scales ranging from tens to several hundreds of thousands of km2 and at model grid resolutions up to several km2. However, initial model test runs in forecast mode

  6. Observed Scaling in Clouds and Precipitation and Scale Incognizance in Regional to Global Atmospheric Models

    Energy Technology Data Exchange (ETDEWEB)

    O'Brien, Travis A.; Li, Fuyu; Collins, William D.; Rauscher, Sara; Ringler, Todd; Taylor, Mark; Hagos, Samson M.; Leung, Lai-Yung R.

    2013-12-01

    We use observations of robust scaling behavior in clouds and precipitation to derive constraints on how partitioning of precipitation should change with model resolution. Our analysis indicates that 90-99% of stratiform precipitation should occur in clouds that are resolvable by contemporary climate models (e.g., with 200 km or finer grid spacing). Furthermore, this resolved fraction of stratiform precipitation should increase sharply with resolution, such that effectively all stratiform precipitation should be resolvable above scales of ~50 km. We show that the Community Atmosphere Model (CAM) and the Weather Research and Forecasting (WRF) model also exhibit the robust cloud and precipitation scaling behavior that is present in observations, yet the resolved fraction of stratiform precipitation actually decreases with increasing model resolution. A suite of experiments with multiple dynamical cores provides strong evidence that this `scale-incognizant' behavior originates in one of the CAM4 parameterizations. An additional set of sensitivity experiments rules out both convection parameterizations, and by a process of elimination these results implicate the stratiform cloud and precipitation parameterization. Tests with the CAM5 physics package show improvements in the resolution-dependence of resolved cloud fraction and resolved stratiform precipitation fraction.
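The resolvability argument can be sketched numerically: given a power-law distribution of cloud sizes, compute the fraction of stratiform precipitation carried by clouds larger than the model grid spacing. Everything below (the exponent, the size range, and precipitation taken proportional to cloud area) is an assumption for illustration, not the observed scaling the paper derives.

```python
import numpy as np

# Cloud sizes from 1 to 1000 km with an assumed power-law number density.
sizes = np.logspace(0, 3, 2000)            # cloud size, km (log-spaced)
number = sizes ** -1.0                     # assumed power-law number density
precip = number * sizes ** 2               # precipitation weight ~ cloud area

def resolved_fraction(dx_km):
    """Fraction of total precipitation in clouds at least dx_km across."""
    mask = sizes >= dx_km
    return float(precip[mask].sum() / precip.sum())

f200 = resolved_fraction(200.0)            # coarse climate-model grid
f50 = resolved_fraction(50.0)              # refined grid
print(f200, f50)   # the resolved fraction rises as the grid is refined
```

Because large clouds dominate the precipitation-weighted distribution, the resolved fraction grows sharply as the grid is refined, which is the qualitative behavior the parameterizations are shown to violate.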

  7. Multi-scale atmospheric composition modelling for the Balkan region

    Science.gov (United States)

    Ganev, Kostadin; Syrakov, Dimiter; Todorova, Angelina; Prodanova, Maria; Atanasov, Emanouil; Gurov, Todor; Karaivanova, Aneta; Miloshev, Nikolai; Gadzhev, Georgi; Jordanov, Georgi

    2010-05-01

    Overview: The present work describes the progress in developing an integrated, multi-scale, Balkan-region-oriented modelling system. The main activities and achievements at this stage of the work are: creating, enriching and updating the necessary physiographic, emission and meteorological databases; installation of the models for GRID application, model tuning and validation; and extensive numerical simulations on regional (Balkan Peninsula) and local (Bulgaria) scales. Objectives: The present work describes the progress of an application developed by the Environmental VO of the FP7 project SEE-GRID eInfrastructure for regional eScience. The application aims at developing an integrated, multi-scale, Balkan-region-oriented modelling system, which would be able to: -Study the atmospheric pollution transport and transformation processes (accounting also for heterogeneous chemistry and the importance of aerosols for air quality and climate) from urban to local to regional (Balkan) scales; -Track and characterize the main pathways and processes that lead to atmospheric composition formation at different scales; -Account for the biosphere-atmosphere exchange as a source and receptor of atmospheric chemical species; -Provide high-quality, scientifically robust assessments of the air quality and its origin, thus facilitating formulation of pollution mitigation strategies at national and Balkan level. The application is based on the US EPA Models-3 system. Description of work: The main activities and achievements at this still preparatory stage of the work are: 1.) Creating, enriching and updating the necessary physiographic, emission and meteorological databases; 2.) Installation of the models for GRID application, model tuning and validation, numerical experiments and interpretation of the results: the US EPA Models-3 system is installed; software for emission speciation and for introducing emission temporal profiles is created; a procedure for calculating biogenic VOC

  8. Drift-Scale Coupled Processes (DST and THC Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    P. Dixon

    2004-04-05

    The purpose of this Model Report (REV02) is to document the unsaturated zone (UZ) models used to evaluate the potential effects of coupled thermal-hydrological-chemical (THC) processes on UZ flow and transport. This Model Report has been developed in accordance with the ''Technical Work Plan for: Performance Assessment Unsaturated Zone'' (Bechtel SAIC Company, LLC (BSC) 2002 [160819]). The technical work plan (TWP) describes planning information pertaining to the technical scope, content, and management of this Model Report in Section 1.12, Work Package AUZM08, ''Coupled Effects on Flow and Seepage''. The plan for validation of the models documented in this Model Report is given in Attachment I, Model Validation Plans, Section I-3-4, of the TWP. Except for variations in acceptance criteria (Section 4.2), there were no deviations from this TWP. This report was developed in accordance with AP-SIII.10Q, ''Models''. This Model Report documents the THC Seepage Model and the Drift Scale Test (DST) THC Model. The THC Seepage Model is a drift-scale process model for predicting the composition of gas and water that could enter waste emplacement drifts and the effects of mineral alteration on flow in rocks surrounding drifts. The DST THC model is a drift-scale process model relying on the same conceptual model and much of the same input data (i.e., physical, hydrological, thermodynamic, and kinetic) as the THC Seepage Model. The DST THC Model is the primary method for validating the THC Seepage Model. The DST THC Model compares predicted water and gas compositions, as well as mineral alteration patterns, with observed data from the DST. These models provide the framework to evaluate THC coupled processes at the drift scale, predict flow and transport behavior for specified thermal-loading conditions, and predict the evolution of mineral alteration and fluid chemistry around potential waste emplacement drifts. The

  9. Empirical spatial econometric modelling of small scale neighbourhood

    Science.gov (United States)

    Gerkman, Linda

    2012-07-01

    The aim of the paper is to model small scale neighbourhood in a house price model by implementing the newest methodology in spatial econometrics. A common problem when modelling house prices is that in practice it is seldom possible to obtain all the desired variables; variables capturing small scale neighbourhood conditions are especially hard to find. If important explanatory variables are missing from the model, the omitted variables are spatially autocorrelated, and they are correlated with the explanatory variables included in the model, it can be shown that a spatial Durbin model is motivated. In the empirical application on new house price data from Helsinki, Finland, we find motivation for a spatial Durbin model, estimate it, and interpret the estimates of the summary measures of impacts. Our analysis shows that the model structure makes it possible to capture small scale neighbourhood effects when we know that they exist but lack proper variables to measure them.
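
As background to the record above: the spatial Durbin model it refers to is conventionally written as (standard notation, not taken from the abstract)

```latex
y = \rho W y + X\beta + W X\theta + \varepsilon,
\qquad \varepsilon \sim N(0, \sigma^{2} I_n),
```

where $W$ is the spatial weight matrix, $Wy$ is the spatial lag of house prices, and the spatially lagged covariates $WX\theta$ are what distinguish the Durbin specification from a plain spatial lag model; it is the $WX\theta$ term that absorbs omitted, spatially autocorrelated neighbourhood variables.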

  10. Model Scaling of Hydrokinetic Ocean Renewable Energy Systems

    Science.gov (United States)

    von Ellenrieder, Karl; Valentine, William

    2013-11-01

    Numerical simulations are performed to validate a non-dimensional dynamic scaling procedure that can be applied to subsurface and deeply moored systems, such as hydrokinetic ocean renewable energy devices. The prototype systems are moored in water 400 m deep and include: subsurface spherical buoys moored in a shear current and excited by waves; an ocean current turbine excited by waves; and a deeply submerged spherical buoy in a shear current excited by strong current fluctuations. The corresponding model systems, which are scaled based on relative water depths of 10 m and 40 m, are also studied. For each case examined, the response of the model system closely matches the scaled response of the corresponding full-sized prototype system. The results suggest that laboratory-scale testing of complete ocean current renewable energy systems moored in a current is possible. This work was supported by the U.S. Southeast National Marine Renewable Energy Center (SNMREC).
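
For reference, dynamic scaling of moored systems such as these is commonly based on Froude similarity (standard relations; the abstract does not state the exact non-dimensionalization used): equating the Froude number $Fr = U/\sqrt{gL}$ in model and prototype with geometric scale ratio $\lambda = L_p/L_m$ gives

```latex
\frac{U_p}{U_m} = \sqrt{\lambda}, \qquad
\frac{T_p}{T_m} = \sqrt{\lambda}, \qquad
\frac{m_p}{m_m} = \lambda^{3}.
```

Under this convention the 40 m and 10 m model depths quoted above correspond to $\lambda = 10$ and $\lambda = 40$ relative to the 400 m prototype depth.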

  11. Intermediate time scaling in classical continuous-spin models

    CERN Document Server

    Oh, S K; Chung, J S

    1999-01-01

    The time-dependent total spin correlation functions of the two- and three-dimensional classical XY models seem to have a very narrow first dynamic scaling interval, after which a much broader anomalous second dynamic scaling interval appears. In this paper, this intriguing feature found in our previous work is re-examined. By introducing a phenomenological characteristic time for this intermediate time interval, the second dynamic scaling behavior can be explained. Moreover, the dynamic critical exponent found from this novel characteristic time is identical to that found from the usual dynamic scaling theory developed in the wave-vector and frequency domain. For continuous-spin models, in which the spin variable related to a long-range order parameter is not a constant of motion, our method yields the dynamic critical exponent with less computational effort.

  12. Hydrological Modelling of Small Scale Processes in a Wetland Habitat

    DEFF Research Database (Denmark)

    Johansen, Ole; Jensen, Jacob Birk; Pedersen, Morten Lauge

    2009-01-01

    Numerical modelling of the hydrology in a Danish rich fen area has been conducted. By collecting various data in the field the model has been successfully calibrated and the flow paths as well as the groundwater discharge distribution have been simulated in details. The results of this work have ...

  13. Scale genesis and gravitational wave in a classically scale invariant extension of the standard model

    Energy Technology Data Exchange (ETDEWEB)

    Kubo, Jisuke [Institute for Theoretical Physics, Kanazawa University,Kanazawa 920-1192 (Japan); Yamada, Masatoshi [Department of Physics, Kyoto University,Kyoto 606-8502 (Japan); Institut für Theoretische Physik, Universität Heidelberg,Philosophenweg 16, 69120 Heidelberg (Germany)

    2016-12-01

    We assume that the origin of the electroweak (EW) scale is a gauge-invariant scalar-bilinear condensation in a strongly interacting non-abelian gauge sector, which is connected to the standard model via a Higgs portal coupling. The dynamical scale genesis appears as a phase transition at finite temperature, and it can produce a gravitational wave (GW) background in the early Universe. We find that the critical temperature of the scale phase transition lies above that of the EW phase transition and below a few hundred GeV, and that the transition is strongly first-order. We calculate the spectrum of the GW background and find that the scale phase transition is strong enough for the GW background to be observable by DECIGO.

  14. Large scale stochastic spatio-temporal modelling with PCRaster

    Science.gov (United States)

    Karssenberg, Derek; Drost, Niels; Schmitz, Oliver; de Jong, Kor; Bierkens, Marc F. P.

    2013-04-01

    software from the eScience Technology Platform (eSTeP), developed at the Netherlands eScience Center. This will allow us to scale up to hundreds of machines, with thousands of compute cores. A key requirement is not to change the user experience of the software. PCRaster operations and the use of the Python framework classes should work in a similar manner on machines ranging from a laptop to a supercomputer. This enables a seamless transfer of models from small machines, where model development is done, to large machines used for large-scale model runs. Domain specialists from a large range of disciplines, including hydrology, ecology, sedimentology, and land use change studies, currently use the PCRaster Python software within research projects. Applications include global scale hydrological modelling and error propagation in large-scale land use change models. The software runs on MS Windows, Linux operating systems, and OS X.

  15. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Science.gov (United States)

    Nahum, Adam; Chalker, J. T.; Serna, P.; Ortuño, M.; Somoza, A. M.

    2015-10-01

    Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel-valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L = 512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_VBS and η_Néel yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^(n-1) field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an anomalously weak first

  16. Mapping condition-dependent regulation of metabolism in yeast through genome-scale modeling

    DEFF Research Database (Denmark)

    Österlund, Tobias; Nookaew, Intawat; Bordel, Sergio

    2013-01-01

    ABSTRACT: BACKGROUND: The genome-scale metabolic model of Saccharomyces cerevisiae, first presented in 2003, was the first genome-scale network reconstruction for a eukaryotic organism. Since then continuous efforts have been made in order to improve and expand the yeast metabolic network. RESULTS......-filling methods and by introducing new reactions and pathways based on studies of the literature and databases. The model was shown to perform well both for growth simulations in different media and gene essentiality analysis for single and double knock-outs. Further, the model was used as a scaffold......-to-date collection of knowledge on yeast metabolism. The model was used for simulating the yeast metabolism under four different growth conditions and experimental data from these four conditions was integrated to the model. The model together with experimental data is a useful tool to identify condition...

  17. Anomalous scaling in an age-dependent branching model.

    Science.gov (United States)

    Keller-Schmidt, Stephanie; Tuğrul, Murat; Eguíluz, Víctor M; Hernández-García, Emilio; Klemm, Konstantin

    2015-02-01

    We introduce a one-parametric family of tree growth models, in which branching probabilities decrease with branch age τ as τ^(-α). Depending on the exponent α, the scaling of tree depth with tree size n displays a transition between the logarithmic scaling of random trees and an algebraic growth. At the transition (α = 1) tree depth grows as (log n)^2. This anomalous scaling is in good agreement with the trend observed in evolution of biological species, thus providing theoretical support for age-dependent speciation and associating it with the occurrence of a critical point.
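
As an illustration of the growth rule just described, here is a minimal simulation sketch (our own construction, not code from the paper; the rule of branching exactly one leaf per step, chosen with weight age^(-α), is an assumption about the model's implementation):

```python
import random

def grow_tree(n_leaves, alpha, seed=0):
    """Grow a random tree in which branching slows with branch age.

    Each leaf is a (depth, birth_step) pair. At every step one leaf is
    chosen with probability proportional to age**(-alpha) and replaced
    by two children, until the tree has n_leaves leaves.
    """
    rng = random.Random(seed)
    leaves = [(0, 0)]  # the root
    step = 0
    while len(leaves) < n_leaves:
        step += 1
        # a leaf's weight decays polynomially with its age
        weights = [(step - birth + 1) ** (-alpha) for _, birth in leaves]
        i = rng.choices(range(len(leaves)), weights=weights)[0]
        depth, _ = leaves.pop(i)
        leaves += [(depth + 1, step), (depth + 1, step)]
    return leaves

def mean_depth(leaves):
    return sum(depth for depth, _ in leaves) / len(leaves)

# alpha = 0 recovers uniform branching (logarithmic depth); large alpha
# concentrates branching on young branches and the tree grows much deeper.
print(mean_depth(grow_tree(200, alpha=0.0)),
      mean_depth(grow_tree(200, alpha=2.0)))
```

With α well above the transition the mean depth grows far faster than logarithmically, consistent with the algebraic regime reported in the abstract.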

  18. Criticality in the scale invariant standard model (squared)

    Directory of Open Access Journals (Sweden)

    Robert Foot

    2015-07-01

    Full Text Available We consider first the standard model Lagrangian with the μ_h^2 Higgs potential term set to zero. We point out that this classically scale invariant theory potentially exhibits radiative electroweak/scale symmetry breaking with a very high vacuum expectation value (VEV) for the Higgs field, 〈ϕ〉 ≈ 10^(17-18) GeV. Furthermore, if such a vacuum were realized, then cancellation of vacuum energy automatically implies that this nontrivial vacuum is degenerate with the trivial unbroken vacuum. Such a theory would therefore be critical, with the Higgs self-coupling and its beta function nearly vanishing at the symmetry breaking minimum, λ(μ = 〈ϕ〉) ≈ β_λ(μ = 〈ϕ〉) ≈ 0. A phenomenologically viable model that predicts this criticality property arises if we consider two copies of the standard model Lagrangian, with an exact Z2 symmetry swapping each ordinary particle with a partner. The spontaneously broken vacuum can then arise where one sector gains the high scale VEV, while the other gains the electroweak scale VEV. The low scale VEV is perturbed away from zero due to a Higgs portal coupling, or via the usual small Higgs mass terms μ_h^2, which softly break the scale invariance. In either case, the cancellation of vacuum energy requires M_t = (171.53 ± 0.42) GeV, which is close to its measured value of (173.34 ± 0.76) GeV.

  19. Computational Modelling of Cancer Development and Growth: Modelling at Multiple Scales and Multiscale Modelling.

    Science.gov (United States)

    Szymańska, Zuzanna; Cytowski, Maciej; Mitchell, Elaine; Macnamara, Cicely K; Chaplain, Mark A J

    2017-06-20

    In this paper, we present two mathematical models related to different aspects and scales of cancer growth. The first model is a stochastic spatiotemporal model of both a synthetic gene regulatory network (the example of a three-gene repressilator is given) and an actual gene regulatory network, the NF-κB pathway. The second model is a force-based individual-based model of the development of a solid avascular tumour with specific application to tumour cords, i.e. a mass of cancer cells growing around a central blood vessel. In each case, we compare our computational simulation results with experimental data. In the final discussion section, we outline how to take the work forward through the development of a multiscale model focussed at the cell level. This would incorporate key intracellular signalling pathways associated with cancer within each cell (e.g. p53-Mdm2, NF-κB) and through the use of high-performance computing be capable of simulating up to [Formula: see text] cells, i.e. the tissue scale. In this way, mathematical models at multiple scales would be combined to formulate a multiscale computational model.

  20. ScaleNet: A literature-based model of scale insect biology and systematics

    Science.gov (United States)

    Scale insects (Hemiptera: Coccoidea) are small herbivorous insects found in all continents except Antarctica. They are extremely invasive, and many species are serious agricultural pests. They are also emerging models for studies of the evolution of genetic systems, endosymbiosis, and plant-insect i...

  1. From Field- to Landscape-Scale Vadose Zone Processes: Scale Issues, Modeling, and Monitoring

    NARCIS (Netherlands)

    Corwin, D.L.; Hopmans, J.; Rooij, de G.H.

    2006-01-01

    Modeling and monitoring vadose zone processes across multiple scales is a fundamental component of many environmental and natural resource issues including nonpoint source (NPS) pollution, watershed management, and nutrient management, to mention just a few. In this special section in Vadose Zone

  2. Scaling of Core Material in Rubble Mound Breakwater Model Tests

    DEFF Research Database (Denmark)

    Burcharth, H. F.; Liu, Z.; Troch, P.

    1999-01-01

    The permeability of the core material influences armour stability, wave run-up and wave overtopping. The main problem related to the scaling of core materials in models is that the hydraulic gradient and the pore velocity are varying in space and time. This makes it impossible to arrive at a fully...... correct scaling. The paper presents an empirical formula for the estimation of the wave induced pressure gradient in the core, based on measurements in models and a prototype. The formula, together with the Forchheimer equation can be used for the estimation of pore velocities in cores. The paper proposes...... that the diameter of the core material in models is chosen in such a way that the Froude scale law holds for a characteristic pore velocity. The characteristic pore velocity is chosen as the average velocity of a most critical area in the core with respect to porous flow. Finally the method is demonstrated...
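
The Forchheimer equation mentioned in the abstract relates the hydraulic gradient $I$ in the porous core to the pore velocity $u$ through a linear (laminar) and a quadratic (turbulent) resistance term; in its common steady form (the coefficients $a$ and $b$ depend on grain diameter, porosity, and fluid viscosity, and their exact parameterization is not given in the record):

```latex
I = a\,u + b\,u\,\lvert u \rvert .
```

Given an estimated pressure gradient from the empirical formula, this relation can be inverted for the characteristic pore velocity to which the Froude scale law is then applied.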

  3. Multi Scale Models for Flexure Deformation in Sheet Metal Forming

    Directory of Open Access Journals (Sweden)

    Di Pasquale Edmondo

    2016-01-01

    Full Text Available This paper presents the application of multi-scale techniques to the simulation of sheet metal forming using the one-step method. When a blank flows over the die radius, it undergoes a complex cycle of bending and unbending. First, we describe an original model for the prediction of residual plastic deformation and stresses in the blank section. This model, working on a scale about one hundred times smaller than the element size, has been implemented in SIMEX, a one-step sheet metal forming simulation code. The use of this multi-scale modelling technique greatly improves the accuracy of the solution. Finally, we discuss the implications of this analysis for the prediction of springback in metal forming.

  4. Including investment risk in large-scale power market models

    DEFF Research Database (Denmark)

    Lemming, Jørgen Kjærgaard; Meibom, P.

    2003-01-01

    can be included in large-scale partial equilibrium models of the power market. The analyses are divided into a part about risk measures appropriate for power market investors and a more technical part about the combination of a risk-adjustment model and a partial-equilibrium model. To illustrate......Long-term energy market models can be used to examine investments in production technologies, however, with market liberalisation it is crucial that such models include investment risks and investor behaviour. This paper analyses how the effect of investment risk on production technology selection...

  5. Multiple time scales in multi-state models.

    Science.gov (United States)

    Iacobelli, Simona; Carstensen, Bendix

    2013-12-30

    In multi-state models, it has been the tradition to model all transition intensities on one time scale, usually the time since entry into the study ('clock-forward' approach). The effect of time since an intermediate event has been accommodated either by changing the time scale to time since entry to the new state ('clock-back' approach) or by including the time at entry to the new state as a covariate. In this paper, we argue that the choice of time scale for the various transitions in a multi-state model should be dealt with as an empirical question, as also the question of whether a single time scale is sufficient. We illustrate that these questions are best addressed by using parametric models for the transition rates, as opposed to the traditional Cox-model-based approaches. Specific advantages are that dependence of failure rates on multiple time scales can be made explicit and described in informative graphical displays. Using a single common time scale for all transitions greatly facilitates computations of probabilities of being in a particular state at a given time, because the machinery from the theory of Markov chains can be applied. However, a realistic model for transition rates is preferable, especially when the focus is not on prediction of final outcomes from start but on the analysis of instantaneous risk or on dynamic prediction. We illustrate the various approaches using a data set from stem cell transplant in leukemia and provide supplementary online material in R. Copyright © 2013 John Wiley & Sons, Ltd.

  6. Collective I/O Tuning Using Analytical and Machine-Learning Models

    Energy Technology Data Exchange (ETDEWEB)

    Isaila, Florin; Balaprakash, Prasanna; Wild, Stefan M.; Kimpe, Dries; Latham, Rob; Ross, Rob; Hovland, Paul

    2015-01-01

    The ever larger demand of scientific applications for computation and data is currently driving a continuous increase in the scale of parallel computers. The inherent complexity of scaling up a computing system, in terms of both hardware and software stack, exposes an increasing number of factors impacting performance and complicating the process of optimization. In particular, the optimization of parallel I/O has become increasingly challenging due to deepening storage hierarchies and the well-known performance variability of shared storage systems. This paper focuses on model-based autotuning of the two-phase collective I/O algorithm from a popular MPI distribution on the Blue Gene/Q architecture. We propose a novel hybrid model, constructed as a composition of analytical models for communication and storage operations and black-box models for the performance of the individual operations. We perform an in-depth study of the complexity involved in performance modeling, including architecture, software stack, and noise. In particular, we address the challenges of modeling the performance of shared storage systems by building a benchmark that helps synthesize factors such as topology, file caching, and noise. The experimental results show that the hybrid approach produces significantly better results than state-of-the-art machine learning approaches and shows a higher robustness to noise, at the cost of a higher modeling complexity.

  7. A Network Contention Model for the Extreme-scale Simulator

    Energy Technology Data Exchange (ETDEWEB)

    Engelmann, Christian [ORNL; Naughton III, Thomas J [ORNL

    2015-01-01

    The Extreme-scale Simulator (xSim) is a performance investigation toolkit for high-performance computing (HPC) hardware/software co-design. It permits running an HPC application with millions of concurrent execution threads while observing its performance in a simulated extreme-scale system. This paper details a newly developed network modeling feature for xSim that eliminates the shortcomings of the existing network modeling capabilities. The approach takes a different path to implementing network contention and bandwidth capacity modeling, using a less synchronous but sufficiently accurate model design. With the new network modeling feature, xSim is able to simulate on-chip and on-node networks with reasonable accuracy and overhead.

  8. Wind Farm Wake Models From Full Scale Data

    DEFF Research Database (Denmark)

    Knudsen, Torben; Bak, Thomas

    2012-01-01

    This investigation is part of the EU FP7 project “Distributed Control of Large-Scale Offshore Wind Farms”. The overall goal in this project is to develop wind farm controllers giving power set points to individual turbines in the farm in order to minimise mechanical loads and optimise power. One...... on real full-scale data. The modelling is based on so-called effective wind speed. It is shown that there is a wake for a wind direction range of up to 20 degrees. Further, when accounting for the wind direction, it is shown that the two model structures considered can both fit the experimental data...

  9. Scalar dark matter in scale invariant standard model

    Energy Technology Data Exchange (ETDEWEB)

    Ghorbani, Karim [Physics Department, Faculty of Sciences,Arak University, Arak 38156-8-8349 (Iran, Islamic Republic of); Ghorbani, Hossein [Institute for Research in Fundamental Sciences (IPM),School of Particles and Accelerators, P.O. Box 19395-5531, Tehran (Iran, Islamic Republic of)

    2016-04-05

    We investigate single and two-component scalar dark matter scenarios in classically scale invariant standard model which is free of the hierarchy problem in the Higgs sector. We show that despite the very restricted space of parameters imposed by the scale invariance symmetry, both single and two-component scalar dark matter models overcome the direct and indirect constraints provided by the Planck/WMAP observational data and the LUX/Xenon100 experiment. We comment also on the radiative mass corrections of the classically massless scalon that plays a crucial role in our study.

  10. Use of genome-scale microbial models for metabolic engineering

    DEFF Research Database (Denmark)

    Patil, Kiran Raosaheb; Åkesson, M.; Nielsen, Jens

    2004-01-01

    network structures. The major challenge for metabolic engineering in the post-genomic era is to broaden its design methodologies to incorporate genome-scale biological data. Genome-scale stoichiometric models of microorganisms represent a first step in this direction.......Metabolic engineering serves as an integrated approach to design new cell factories by providing rational design procedures and valuable mathematical and experimental tools. Mathematical models have an important role for phenotypic analysis, but can also be used for the design of optimal metabolic...
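
Genome-scale stoichiometric models of the kind described are typically analyzed with flux balance analysis (a standard formulation, not spelled out in the abstract): with stoichiometric matrix $S$ and flux vector $v$, a linear cellular objective such as biomass yield is maximized over the steady-state flux cone,

```latex
\max_{v}\; c^{\mathsf{T}} v
\quad \text{subject to} \quad
S v = 0, \qquad v_{\min} \le v \le v_{\max}.
```

The bounds $v_{\min}, v_{\max}$ encode reaction reversibility and measured uptake rates, which is where genome-scale biological data enter the design procedure.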

  11. Ergonomics-inspired Reshaping and Exploration of Collections of Models

    KAUST Repository

    Zheng, Youyi

    2015-06-22

    This paper examines the following question: given a collection of man-made shapes, e.g., chairs, can we effectively explore and rank the shapes with respect to a given human body, in terms of how well a candidate shape fits the specified human body? Answering this question requires identifying which shapes are more suitable for a prescribed body and how to alter the input geometry to better fit the shapes to a given human body. The problem links the physical proportions of the human body and its interaction with object geometry, which is often expressed as ergonomics guidelines. We present an interactive system that allows users to explore shapes using different avatar poses while, at the same time, providing interactive previews of how to alter the shapes to fit the user-specified body and pose. We achieve this by first constructing a fuzzy shape-to-body map from the ergonomic guidelines to multi-contact geometric constraints, and then proposing a novel contact-preserving deformation paradigm to reshape and adapt the input shape. We evaluate our method on collections of models from different categories and validate the results through a user study.

  12. Scour around Support Structures of Scaled Model Marine Hydrokinetic Devices

    Science.gov (United States)

    Volpe, M. A.; Beninati, M. L.; Krane, M.; Fontaine, A.

    2013-12-01

    Experiments are presented to explore scour due to flows around support structures of marine hydrokinetic (MHK) devices. Three related studies were performed to understand how submergence, scour condition, and the presence of an MHK device impact scour around the support structure (cylinder). The first study focuses on clear-water scour conditions for a cylinder of varying submergence: surface-piercing and fully submerged. The second study centers on three separate scour conditions (clear-water, transitional and live-bed) around the fully submerged cylinder. Lastly, the third study emphasizes the impact of an MHK turbine on scour around the support structure, in live-bed conditions. Small-scale laboratory testing of model devices can be used to help predict the behavior of MHK devices at full-scale. Extensive studies have been performed on single cylinders, modeling bridge piers, though few have focused on fully submerged structures. Many of the devices being used to harness marine hydrokinetic energy are fully submerged in the flow. Additionally, scour hole dimensions and scour rates have not been addressed. Thus, these three studies address the effect of structure blockage/drag, and the ambient scour conditions on scour around the support structure. The experiments were performed in the small-scale testing platform in the hydraulic flume facility (9.8 m long, 1.2 m wide and 0.4 m deep) at Bucknell University. The support structure diameter (D = 2.54 cm) was held constant for all tests. The submerged cylinder (l/D = 5) and sediment size (d50 = 790 microns) were held constant for all three studies. The MHK device (Dturbine = 10.2 cm) is a two-bladed horizontal axis turbine and the rotating shaft is friction-loaded using a metal brush motor. For each study, bed form topology was measured after a three-hour time interval using a traversing two-dimensional bed profiler. During the experiments, scour hole depth measurements at the front face of the support structure

  13. Bridging scales through multiscale modeling: a case study on protein kinase A.

    Science.gov (United States)

    Boras, Britton W; Hirakis, Sophia P; Votapka, Lane W; Malmstrom, Robert D; Amaro, Rommie E; McCulloch, Andrew D

    2015-01-01

    The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSM), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.

  14. Bridging scales through multiscale modeling: A case study on Protein Kinase A

    Directory of Open Access Journals (Sweden)

    Sophia P Hirakis

    2015-09-01

    Full Text Available The goal of multiscale modeling in biology is to use structurally based physico-chemical models to integrate across temporal and spatial scales of biology and thereby improve mechanistic understanding of, for example, how a single mutation can alter organism-scale phenotypes. This approach may also inform therapeutic strategies or identify candidate drug targets that might otherwise have been overlooked. However, in many cases, it remains unclear how best to synthesize information obtained from various scales and analysis approaches, such as atomistic molecular models, Markov state models (MSMs), subcellular network models, and whole cell models. In this paper, we use protein kinase A (PKA) activation as a case study to explore how computational methods that model different physical scales can complement each other and integrate into an improved multiscale representation of the biological mechanisms. Using measured crystal structures, we show how molecular dynamics (MD) simulations coupled with atomic-scale MSMs can provide conformations for Brownian dynamics (BD) simulations to feed transitional states and kinetic parameters into protein-scale MSMs. We discuss how milestoning can give reaction probabilities and forward-rate constants of cAMP association events by seamlessly integrating MD and BD simulation scales. These rate constants coupled with MSMs provide a robust representation of the free energy landscape, enabling access to kinetic and thermodynamic parameters unavailable from current experimental data. These approaches have helped to illuminate the cooperative nature of PKA activation in response to distinct cAMP binding events. Collectively, this approach exemplifies a general strategy for multiscale model development that is applicable to a wide range of biological problems.

  15. Low-scale inflation and supersymmetry breaking in racetrack models

    Science.gov (United States)

    Allahverdi, Rouzbeh; Dutta, Bhaskar; Sinha, Kuver

    2010-04-01

    In many moduli stabilization schemes in string theory, the scale of inflation appears to be of the same order as the scale of supersymmetry breaking. For low-scale supersymmetry breaking, therefore, the scale of inflation should also be low, unless this correlation is avoided in specific models. We explore such a low-scale inflationary scenario in a racetrack model with a single modulus in type IIB string theory. Inflation occurs near a point of inflection in the Kähler modulus potential. Obtaining acceptable cosmological density perturbations leads to the introduction of magnetized D7-branes sourcing nonperturbative superpotentials. The gravitino mass, m3/2, is chosen to be around 30 TeV, so that gravitinos that are produced in the inflaton decay do not affect big-bang nucleosynthesis. Supersymmetry is communicated to the visible sector by a mixture of anomaly and modulus mediation. We find that the two sources contribute equally to the gaugino masses, while scalar masses are decided mainly by anomaly contribution. This happens as a result of the low scale of inflation and can be probed at the LHC.

  16. eDNAoccupancy: An R package for multi-scale occupancy modeling of environmental DNA data

    Science.gov (United States)

    Dorazio, Robert; Erickson, Richard A.

    2017-01-01

    In this article we describe eDNAoccupancy, an R package for fitting Bayesian, multi-scale occupancy models. These models are appropriate for occupancy surveys that include three, nested levels of sampling: primary sample units within a study area, secondary sample units collected from each primary unit, and replicates of each secondary sample unit. This design is commonly used in occupancy surveys of environmental DNA (eDNA). eDNAoccupancy allows users to specify and fit multi-scale occupancy models with or without covariates, to estimate posterior summaries of occurrence and detection probabilities, and to compare different models using Bayesian model-selection criteria. We illustrate these features by analyzing two published data sets: eDNA surveys of a fungal pathogen of amphibians and eDNA surveys of an endangered fish species.
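    The three nested sampling levels described above can be sketched as a simple forward simulation. The parameter values and function below are hypothetical illustrations, not output of the eDNAoccupancy package (which is written in R):

```python
import random

random.seed(1)

# Hypothetical parameters (not estimates from the package's example data):
psi   = 0.7   # probability a primary unit (site) is occupied
theta = 0.6   # probability eDNA occurs in a secondary sample, given occupancy
p     = 0.8   # probability a replicate detects eDNA, given occurrence

def simulate_site(n_samples=5, n_reps=3):
    """Simulate one site of a three-level eDNA occupancy design."""
    z = random.random() < psi                    # site occupancy state
    site = []
    for _ in range(n_samples):
        a = z and (random.random() < theta)      # sample-level occurrence
        site.append([a and (random.random() < p) for _ in range(n_reps)])
    return z, site

z, detections = simulate_site()
print(z, detections)
```

    Fitting the model inverts this process: detection histories like `detections` are observed, and psi, theta, and p are estimated with Bayesian methods.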

  17. Multilevel method for modeling large-scale networks.

    Energy Technology Data Exchange (ETDEWEB)

    Safro, I. M. (Mathematics and Computer Science)

    2012-02-24

    Understanding the behavior of real complex networks is of great theoretical and practical significance. It includes developing accurate artificial models whose topological properties are similar to those of real networks, generating artificial networks at different scales under special conditions, investigating network dynamics, reconstructing missing data, predicting network response, detecting anomalies, and other tasks. Network generation, reconstruction, and prediction of future topology are central issues of this field. In this project, we address questions related to understanding network modeling, investigating its structure and properties, and generating artificial networks. Most modern network generation methods are based either on various random graph models (reinforced by a set of properties such as a power-law distribution of node degrees, graph diameter, and number of triangles) or on the principle of replicating an existing model with elements of randomization, such as the R-MAT generator and Kronecker product modeling. Hierarchical models operate at different levels of the network hierarchy but with the same finest elements of the network. However, in many cases methods that include randomization and replication elements at the finest relationships between network nodes, and modeling that addresses the problem of preserving a set of simplified properties, do not fit the real networks accurately enough. Among the unsatisfactory features are numerically inadequate results, instability of algorithms on real (artificial) data that have been tested on artificial (real) data, and incorrect behavior at different scales. One reason is that randomization and replication of existing structures can create conflicts between fine and coarse scales of the real network geometry. Moreover, randomization and satisfying some attribute at the same time can abolish those topological attributes that have been undefined or hidden from
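    As a minimal illustration of the Kronecker-product generation approach mentioned above (our own sketch, with an arbitrary initiator matrix, not the report's method):

```python
import random

random.seed(0)

# Stochastic Kronecker graph sketch: a 2x2 initiator matrix of edge
# probabilities is Kronecker-powered k times, then edges are sampled.
P1 = [[0.9, 0.5],
      [0.5, 0.1]]

def kron(A, B):
    """Kronecker product of two square matrices given as nested lists."""
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def kronecker_power(P, k):
    out = P
    for _ in range(k - 1):
        out = kron(out, P)
    return out

P = kronecker_power(P1, 3)          # 8x8 matrix of edge probabilities
edges = [(i, j) for i in range(len(P)) for j in range(len(P))
         if random.random() < P[i][j]]
print(len(P), len(edges))
```

    The recursive structure of the powered initiator is what produces the heavy-tailed degree distributions such generators are known for.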

  18. No specimen left behind: industrial scale digitization of natural history collections.

    Science.gov (United States)

    Blagoderov, Vladimir; Kitching, Ian J; Livermore, Laurence; Simonsen, Thomas J; Smith, Vincent S

    2012-01-01

    Traditional approaches for digitizing natural history collections, which include both imaging and metadata capture, are both labour- and time-intensive. Mass-digitization can only be completed if the resource-intensive steps, such as specimen selection and databasing of associated information, are minimized. Digitization of larger collections should employ an "industrial" approach, using the principles of automation and crowdsourcing, with minimal initial metadata collection including a mandatory persistent identifier. A new workflow for the mass-digitization of natural history museum collections based on these principles, and using the SatScan® tray scanning system, is described.

  19. 75 FR 9157 - Proposed Information Collection; Comment Request; Alaska Region Scale and Catch Weighing...

    Science.gov (United States)

    2010-03-01

    ... catch weight, species composition, and location data for every delivery by a catcher vessel or every pot by a catcher/processor. Second, all catch must be weighed accurately using NMFS-approved scales to... Region Scale and Catch Weighing Requirements AGENCY: National Oceanic and Atmospheric Administration...

  20. Ares I Scale Model Acoustic Test Overpressure Results

    Science.gov (United States)

    Casiano, M. J.; Alvord, D. A.; McDaniels, D. M.

    2011-01-01

    A summary of the overpressure environment from the 5% Ares I Scale Model Acoustic Test (ASMAT) and the implications to the full-scale Ares I are presented in this Technical Memorandum. These include the scaled environment that would be used for assessing the full-scale Ares I configuration, observations, and team recommendations. The ignition transient is first characterized and described, the overpressure suppression system configuration is then examined, and the final environment characteristics are detailed. The recommendation for Ares I is to keep the space shuttle heritage ignition overpressure (IOP) suppression system (below-deck IOP water in the launch mount and mobile launcher and also the crest water on the main flame deflector) and the water bags.

  1. Transforming Monograph Collections with a Model of Collections as a Service

    Science.gov (United States)

    Way, Doug

    2017-01-01

    Financial pressures, changes in scholarly communications, the rise of online content, and the ability to easily share materials have provided libraries the opportunity to rethink their collections practices. This article provides an overview of these changes and outlines a framework for viewing collections as a service. It describes how libraries…

  2. Modeling the uncertainty associated with the observation scale of space/time natural processes

    Science.gov (United States)

    Lee, S.; Serre, M.

    2005-12-01

    In many mapping applications of spatiotemporally distributed hydrological processes, traditional space/time geostatistics approaches have played a significant role in estimating a variable of interest at unsampled locations. Measured values are usually sparsely located over space and time due to the difficulty and cost of obtaining data. In some cases, the data for the hydrological variable of interest may have been collected at different temporal or spatial observation scales. Even though mixing data measured at different space/time scales may alleviate the problem of data sparsity, it essentially disregards the scale effect on estimation results. The importance of the scale effect must be recognized, since a variable displays different physical properties depending on the spatial or temporal scale at which it is observed. In this study we develop a mathematical framework to derive the conditional Probability Density Function (PDF) of a variable at the local scale given an observation of that variable at a larger spatial or temporal scale, which properly models the uncertainty associated with the different observation scales of space/time natural processes. The developed framework allows data observed at a variety of scales to be mixed efficiently by accounting for the data uncertainty associated with each observation scale present, and therefore generates soft data that are rigorously assimilated in the Bayesian Maximum Entropy (BME) method of modern geostatistics to increase the accuracy of the map at the scale of interest. We investigate the proposed approach with synthetic case studies involving observations of a space/time process at a variety of temporal and spatial scales. These case studies demonstrate the power of the proposed approach by leading to a set of maps with a noticeable increase of mapping accuracy over classical approaches that do not account for scale effects. Hence the proposed approach will be useful for a wide variety of
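    For intuition, a toy Gaussian version of such a scale-conditional PDF (our own illustration, not the paper's framework): if n local values are iid N(mu, s^2) and the coarse observation y is their block mean, the conditional distribution of one local value given y is N(y, s^2(1 - 1/n)).

```python
# Toy conditional PDF of a local-scale value given a block-mean observation,
# under an iid Gaussian assumption (illustrative only).
def local_given_block_mean(y, mu, s, n):
    cov_xy = s ** 2 / n                      # Cov(x_i, block mean)
    var_y  = s ** 2 / n                      # Var(block mean)
    m = mu + cov_xy / var_y * (y - mu)       # conditional mean = y
    v = s ** 2 - cov_xy ** 2 / var_y         # conditional var = s^2 (1 - 1/n)
    return m, v

m, v = local_given_block_mean(y=3.0, mu=1.0, s=2.0, n=4)
print(m, v)   # 3.0 3.0
```

    The residual variance s^2(1 - 1/n) is exactly the uncertainty that would be encoded as soft data at the local scale.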

  3. Dynamic Modeling, Optimization, and Advanced Control for Large Scale Biorefineries

    DEFF Research Database (Denmark)

    Prunescu, Remus Mihail

    plant [3]. The goal of the project is to utilize realtime data extracted from the large scale facility to formulate and validate first principle dynamic models of the plant. These models are then further exploited to derive model-based tools for process optimization, advanced control and real...... with building a plantwide model-based optimization layer, which searches for optimal values regarding the pretreatment temperature, enzyme dosage in liquefaction, and yeast seed in fermentation such that profit is maximized [7]. When biomass is pretreated, by-products are also created that affect the downstream...

  4. Scale-up considerations for surface collecting agent assisted in-situ burn crude oil spill response experiments in the Arctic: Laboratory to field-scale investigations.

    Science.gov (United States)

    Bullock, Robin J; Aggarwal, Srijan; Perkins, Robert A; Schnabel, William

    2017-04-01

    In the event of a marine oil spill in the Arctic, government agencies, industry, and the public have a stake in the successful implementation of oil spill response. Because large spills are rare events, oil spill response techniques are often evaluated with laboratory and meso-scale experiments. The experiments must yield scalable information sufficient to understand the operability and effectiveness of a response technique under actual field conditions. Since in-situ burning augmented with surface collecting agents ("herders") is one of the few viable response options in ice infested waters, a series of oil spill response experiments were conducted in Fairbanks, Alaska, in 2014 and 2015 to evaluate the use of herders to assist in-situ burning and the role of experimental scale. This study compares burn efficiency and herder application for three experimental designs for in-situ burning of Alaska North Slope crude oil in cold, fresh waters with ∼10% ice cover. The experiments were conducted in three project-specific constructed venues with varying scales (surface areas of approximately 0.09 square meters, 9 square meters and 8100 square meters). The results from the herder assisted in-situ burn experiments performed at these three different scales showed good experimental scale correlation and no negative impact due to the presence of ice cover on burn efficiency. Experimental conclusions are predominantly associated with application of the herder material and usability for a given experiment scale to make response decisions. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. Design and Modelling of Small Scale Low Temperature Power Cycles

    DEFF Research Database (Denmark)

    Wronski, Jorrit

    The work presented in this report contributes to the state of the art within design and modelling of small scale low temperature power cycles. The study is divided into three main parts: (i) fluid property evaluation, (ii) expansion device investigations and (iii) heat exchanger performance. The t...... scale plate heat exchanger. Working towards a validation of heat transfer correlations for ORC conditions, a new test rig was designed and built. The test facility can be used to study heat transfer in both ORC and high temperature heat pump systems.

  6. A scale-free neural network for modelling neurogenesis

    Science.gov (United States)

    Perotti, Juan I.; Tamarit, Francisco A.; Cannas, Sergio A.

    2006-11-01

    In this work we introduce a neural network model for associative memory based on a diluted Hopfield model, which grows through a neurogenesis algorithm that guarantees that the final network is a small-world and scale-free one. We also analyze the storage capacity of the network and prove that its performance is larger than that measured in a randomly diluted network with the same connectivity.
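    The growth mechanism can be sketched with a generic preferential-attachment rule (a stand-in illustration for producing a scale-free topology, not the paper's specific neurogenesis algorithm):

```python
import random

random.seed(42)

def grow_network(n, m=2):
    """Grow a network where each new node attaches to m existing nodes
    with probability proportional to their degree (preferential attachment)."""
    edges = [(0, 1)]                 # start from a connected pair
    stubs = [0, 1]                   # degree-weighted list of endpoints
    for v in range(2, n):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(stubs))   # degree-biased choice
        for u in chosen:
            edges.append((v, u))
            stubs += [v, u]
    return edges

edges = grow_network(200)
degree = {}
for a, b in edges:
    degree[a] = degree.get(a, 0) + 1
    degree[b] = degree.get(b, 0) + 1
print(len(edges), max(degree.values()))
```

    Repeatedly favoring high-degree nodes produces the hubs and power-law degree tail characteristic of scale-free networks.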

  7. Multi-scale Modeling of Plasticity in Tantalum.

    Energy Technology Data Exchange (ETDEWEB)

    Lim, Hojun [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Carroll, Jay [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Buchheit, Thomas E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Boyce, Brad [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Weinberger, Christopher [Drexel Univ., Philadelphia, PA (United States)

    2015-12-01

    In this report, we present a multi-scale computational model to simulate plastic deformation of tantalum, together with validating experiments. At the atomistic/dislocation level, dislocation kink-pair theory is used to formulate temperature- and strain-rate-dependent constitutive equations. The kink-pair theory is calibrated to available data from single crystal experiments to produce accurate and convenient constitutive laws. The model is then implemented into a BCC crystal plasticity finite element method (CP-FEM) model to predict temperature and strain rate dependent yield stresses of single and polycrystalline tantalum, and compared with existing experimental data from the literature. Furthermore, classical continuum constitutive models describing temperature and strain rate dependent flow behaviors are fit to the yield stresses obtained from the CP-FEM polycrystal predictions. The model is then used to conduct hydrodynamic simulations of the Taylor cylinder impact test and compared with experiments. In order to validate the proposed tantalum CP-FEM model with experiments, we introduce a method for quantitative comparison of CP-FEM models with various experimental techniques. To mitigate the effects of unknown subsurface microstructure, tantalum tensile specimens with a pseudo-two-dimensional grain structure and grain sizes on the order of millimeters are used. A technique combining electron backscatter diffraction (EBSD) and high-resolution digital image correlation (HR-DIC) is used to measure the texture and sub-grain strain fields upon uniaxial tensile loading at various applied strains. Deformed specimens are also analyzed with optical profilometry measurements to obtain out-of-plane strain fields. These high resolution measurements are directly compared with large-scale CP-FEM predictions. This computational method directly links fundamental dislocation physics to plastic deformations at the grain scale and to engineering-scale applications. Furthermore, direct

  8. Collective design in 3D printing: A large scale empirical study of designs, designers and evolution

    DEFF Research Database (Denmark)

    Özkil, Ali Gürcan

    2017-01-01

    This paper provides an empirical study of a collective design platform (Thingiverse); with the aim of understanding the phenomenon and investigating how designs concurrently evolve through the large and complex network of designers. The case study is based on the meta-data collected from 158 489 ...

  9. Modelling Collective Opinion Formation by Means of Active Brownian Particles

    CERN Document Server

    Schweitzer, F; Schweitzer, Frank; Holyst, Janusz

    1999-01-01

    The concept of active Brownian particles is used to model a collective opinion formation process. It is assumed that individuals in a community create a two-component communication field that influences the change of opinions of other persons and/or can induce their migration. The communication field is described by a reaction-diffusion equation, meaning that it has a certain lifetime, which models memory effects; further, it can spread out in the community. Within our stochastic approach, the opinion change of the individuals is described by a master equation, while the migration is described by a set of Langevin equations, coupled by the communication field. In the mean-field limit, which holds for fast communication, we derive a critical population size, above which the community separates into a majority and a minority with opposite opinions. The existence of external support (e.g. from mass media) can change the ratio between minority and majority, until above a critical external support the supported subpop...
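    A heavily simplified, well-mixed sketch in the spirit of the model (our own toy dynamics, not the authors' master/Langevin equations): each opinion deposits into a decaying communication field, and agents stochastically adopt the opinion whose field component is stronger.

```python
import random

random.seed(3)

N, steps, decay, deposit = 200, 300, 0.9, 1.0
opinions = [random.choice([-1, 1]) for _ in range(N)]
field = {-1: 0.0, 1: 0.0}            # well-mixed (mean-field) two-component field

for _ in range(steps):
    # Field update: decay models finite lifetime (memory), deposit models
    # each individual's contribution to its own opinion component.
    for o in (-1, 1):
        field[o] = decay * field[o] + deposit * opinions.count(o)
    # One randomly chosen agent re-draws its opinion, biased by the field.
    i = random.randrange(N)
    total = field[-1] + field[1]
    opinions[i] = 1 if random.random() < field[1] / total else -1

majority = max((-1, 1), key=opinions.count)
print(majority, opinions.count(majority))
```

    Even this crude version shows the symmetry-breaking tendency: the positive feedback between opinion counts and field strength pushes the population toward a majority/minority split.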

  10. Large scale solar district heating. Evaluation, modelling and designing - Appendices

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The appendices present the following: A) Cad-drawing of the Marstal CSHP design. B) Key values - large-scale solar heating in Denmark. C) Monitoring - a system description. D) WMO-classification of pyranometers (solarimeters). E) The computer simulation model in TRNSYS. F) Selected papers from the author. (EHS)

  11. Phenomenological aspects of no-scale inflation models

    Energy Technology Data Exchange (ETDEWEB)

    Ellis, John [Theoretical Particle Physics and Cosmology Group, Department of Physics,King’s College London,WC2R 2LS London (United Kingdom); Theory Division, CERN,CH-1211 Geneva 23 (Switzerland); Garcia, Marcos A.G. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States); Nanopoulos, Dimitri V. [George P. and Cynthia W. Mitchell Institute for Fundamental Physics andAstronomy, Texas A& M University,College Station, 77843 Texas (United States); Astroparticle Physics Group, Houston Advanced Research Center (HARC), Mitchell Campus, Woodlands, 77381 Texas (United States); Academy of Athens, Division of Natural Sciences, 28 Panepistimiou Avenue, 10679 Athens (Greece); Olive, Keith A. [William I. Fine Theoretical Physics Institute, School of Physics and Astronomy,University of Minnesota,116 Church Street SE, Minneapolis, MN 55455 (United States)

    2015-10-01

    We discuss phenomenological aspects of inflationary models with a no-scale supergravity Kähler potential motivated by compactified string models, in which the inflaton may be identified either as a Kähler modulus or an untwisted matter field, focusing on models that make predictions for the scalar spectral index n_s and the tensor-to-scalar ratio r that are similar to the Starobinsky model. We discuss possible patterns of soft supersymmetry breaking, exhibiting examples of the pure no-scale type m_0 = B_0 = A_0 = 0, of the CMSSM type with universal A_0 and m_0 ≠ 0 at a high scale, and of the mSUGRA type with A_0 = B_0 + m_0 boundary conditions at the high input scale. These may be combined with a non-trivial gauge kinetic function that generates gaugino masses m_{1/2} ≠ 0, or one may have a pure gravity mediation scenario where trilinear terms and gaugino masses are generated through anomalies. We also discuss inflaton decays and reheating, showing possible decay channels for the inflaton when it is either an untwisted matter field or a Kähler modulus. Reheating is very efficient if a matter field inflaton is directly coupled to MSSM fields, and both candidates lead to sufficient reheating in the presence of a non-trivial gauge kinetic function.

  12. Large-Scale Modeling of Wordform Learning and Representation

    Science.gov (United States)

    Sibley, Daragh E.; Kello, Christopher T.; Plaut, David C.; Elman, Jeffrey L.

    2008-01-01

    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the "sequence encoder" is used to learn…

  13. Small-Scale Helicopter Automatic Autorotation : Modeling, Guidance, and Control

    NARCIS (Netherlands)

    Taamallah, S.

    2015-01-01

    Our research objective consists of developing a model-based, automatic safety recovery system for a small-scale helicopter Unmanned Aerial Vehicle (UAV) in autorotation, i.e. an engine-OFF flight condition, that safely flies and lands the helicopter at a pre-specified ground location. In pursuit

  14. Vegetable parenting practices scale: Item response modeling analyses

    Science.gov (United States)

    Our objective was to evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We al...

  15. Effect of Logarithmic and Linear Frequency Scales on Parametric Modelling of Tissue Dielectric Data.

    Science.gov (United States)

    Salahuddin, Saqib; Porter, Emily; Meaney, Paul M; O'Halloran, Martin

    2017-02-01

    The dielectric properties of biological tissues have been studied widely over the past half-century. These properties are used in a vast array of applications, from determining the safety of wireless telecommunication devices to the design and optimisation of medical devices. The frequency-dependent dielectric properties are represented in closed-form parametric models, such as the Cole-Cole model, for use in numerical simulations which examine the interaction of electromagnetic (EM) fields with the human body. In general, the accuracy of EM simulations depends upon the accuracy of the tissue dielectric models. Typically, dielectric properties are measured using a linear frequency scale; however, use of the logarithmic scale has been suggested historically to be more biologically descriptive. Thus, the aim of this paper is to quantitatively compare the Cole-Cole fitting of broadband tissue dielectric measurements collected with both linear and logarithmic frequency scales. In this way, we can determine if appropriate choice of scale can minimise the fit error and thus reduce the overall error in simulations. Using a well-established fundamental statistical framework, the results of the fitting for both scales are quantified. It is found that commonly used performance metrics, such as the average fractional error, are unable to examine the effect of frequency scale on the fitting results due to the averaging effect that obscures large localised errors. This work demonstrates that the broadband fit for these tissues is quantitatively improved when the given data is measured with a logarithmic frequency scale rather than a linear scale, underscoring the importance of frequency scale selection in accurate wideband dielectric modelling of human tissues.
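    The single-pole Cole-Cole model referred to above has a standard closed form, ε(ω) = ε∞ + Δε / (1 + (jωτ)^(1−α)) + σs/(jωε0). The sketch below evaluates it on both linear and logarithmic frequency grids; the parameter values are illustrative, not fitted tissue values from the paper.

```python
import numpy as np

EPS0 = 8.854187817e-12  # vacuum permittivity, F/m

def cole_cole(f, eps_inf, d_eps, tau, alpha, sigma_s):
    """Complex relative permittivity of a single-pole Cole-Cole model."""
    w = 2 * np.pi * f
    return (eps_inf
            + d_eps / (1 + (1j * w * tau) ** (1 - alpha))
            + sigma_s / (1j * w * EPS0))

# Linear vs. logarithmic frequency grids over 0.5-20 GHz:
f_lin = np.linspace(0.5e9, 20e9, 101)
f_log = np.logspace(np.log10(0.5e9), np.log10(20e9), 101)

eps_lin = cole_cole(f_lin, eps_inf=4.0, d_eps=36.0, tau=7e-12, alpha=0.1, sigma_s=0.7)
eps_log = cole_cole(f_log, eps_inf=4.0, d_eps=36.0, tau=7e-12, alpha=0.1, sigma_s=0.7)
print(eps_lin[0], eps_log[0])
```

    The log grid concentrates samples at low frequencies, where the dispersion changes fastest on a relative basis, which is the intuition behind the paper's finding that log-scale measurements improve the fit.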

  16. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    DEFF Research Database (Denmark)

    King, Zachary A.; Lu, Justin; Dräger, Andreas

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized...... redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases....... Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource...
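    A hedged sketch of programmatic access along the lines described above. The endpoint pattern follows the BiGG Models web API's documented style, but the exact paths should be verified against http://bigg.ucsd.edu/data_access before use; `fetch_model` requires network access and is not called here.

```python
import json
from urllib.request import urlopen

# Assumed base URL for the BiGG Models web API (check the data-access docs).
BASE = "http://bigg.ucsd.edu/api/v2"

def model_url(model_id):
    """Build the URL for one genome-scale model's metadata resource."""
    return f"{BASE}/models/{model_id}"

def fetch_model(model_id):
    """Download one model's metadata as a dict (network access required)."""
    with urlopen(model_url(model_id)) as resp:
        return json.load(resp)

print(model_url("e_coli_core"))
```

    Standardized identifiers across models are what make this kind of programmatic cross-model comparison practical.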

  17. Disappearing scales in carps: Re-visiting Kirpichnikov's model on the genetics of scale pattern formation

    KAUST Repository

    Casas, Laura

    2013-12-30

    The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene, called 'N', has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype, due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude x nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation. 2013 Casas et al.

  18. Disappearing scales in carps: re-visiting Kirpichnikov's model on the genetics of scale pattern formation.

    Directory of Open Access Journals (Sweden)

    Laura Casas

    Full Text Available The body of most fishes is fully covered by scales that typically form tight, partially overlapping rows. While some of the genes controlling the formation and growth of fish scales have been studied, very little is known about the genetic mechanisms regulating scale pattern formation. Although the existence of two genes with two pairs of alleles (S&s and N&n) regulating scale coverage in cyprinids has been predicted by Kirpichnikov and colleagues nearly eighty years ago, their identity was unknown until recently. In 2009, the 'S' gene was found to be a paralog of fibroblast growth factor receptor 1, fgfr1a1, while the second gene called 'N' has not yet been identified. We re-visited the original model of Kirpichnikov that proposed four major scale pattern types and observed a high degree of variation within the so-called scattered phenotype due to which this group was divided into two sub-types: classical mirror and irregular. We also analyzed the survival rates of offspring groups and found a distinct difference between Asian and European crosses. Whereas nude × nude crosses involving at least one parent of Asian origin or hybrid with Asian parent(s) showed the 25% early lethality predicted by Kirpichnikov (due to the lethality of the NN genotype), those with two Hungarian nude parents did not. We further extended Kirpichnikov's work by correlating changes in phenotype (scale-pattern) to the deformations of fins and losses of pharyngeal teeth. We observed phenotypic changes which were not restricted to nudes, as described by Kirpichnikov, but were also present in mirrors (and presumably in linears as well; not analyzed in detail here). We propose that the gradation of phenotypes observed within the scattered group is caused by a gradually decreasing level of signaling (a dose-dependent effect), probably due to a concerted action of multiple pathways involved in scale formation.
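    The 25% early-lethality figure follows from a simple monohybrid cross at the N locus (nude carp carry the Nn genotype at this locus, and NN is lethal in Kirpichnikov's model), which can be checked by enumeration:

```python
from itertools import product

def cross(parent1, parent2):
    """Enumerate the equally likely offspring genotypes of a one-locus cross,
    with each genotype written as a sorted allele pair."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

# Nude x nude at the N locus: Nn x Nn.
offspring = cross("Nn", "Nn")
lethal_fraction = offspring.count("NN") / len(offspring)
print(offspring, lethal_fraction)   # one quarter of offspring are NN
```

    The interesting empirical result of the paper is that this Mendelian expectation held for crosses with Asian ancestry but not for crosses between two Hungarian nude parents.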

  19. Programmatic access to logical models in the Cell Collective modeling environment via a REST API.

    Science.gov (United States)

    Kowal, Bryan M; Schreier, Travis R; Dauer, Joseph T; Helikar, Tomáš

    2016-01-01

    Cell Collective (www.cellcollective.org) is a web-based interactive environment for constructing, simulating and analyzing logical models of biological systems. Herein, we present a Web service to access models, annotations, and simulation data in the Cell Collective platform through a Representational State Transfer (REST) Application Programming Interface (API). The REST API provides a convenient method for obtaining Cell Collective data through almost any programming language. To ensure easy processing of the retrieved data, the request output from the API is available in a standard JSON format. The Cell Collective REST API is freely available at http://thecellcollective.org/tccapi. All public models in Cell Collective are available through the REST API. Users interested in creating and accessing their own models through the REST API first need to create an account in Cell Collective (http://thecellcollective.org). thelikar2@unl.edu. Technical user documentation: https://goo.gl/U52GWo. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
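    A minimal consumer sketch for the REST API described above. The base URL comes from the abstract; the `/model/2035` path is a hypothetical placeholder and should be replaced with real endpoint names from the technical documentation. `get_json` requires network access and is not called here.

```python
import json
from urllib.request import urlopen

# Base URL taken from the abstract; endpoint paths below are hypothetical.
BASE = "http://thecellcollective.org/tccapi"

def endpoint(path):
    """Join a resource path onto the API base URL."""
    return f"{BASE}/{path.lstrip('/')}"

def get_json(path):
    """Fetch an API resource and decode its JSON body (network required)."""
    with urlopen(endpoint(path)) as resp:
        return json.load(resp)

print(endpoint("/model/2035"))   # hypothetical model resource
```

    Because the responses are plain JSON, the same pattern ports directly to any language with an HTTP client.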

  20. Lightweight Vertical Take-Off & Landing Unmanned Aerial Systems For Local-Scale Forestry and Agriculture Remote Sensing Data Collection

    Science.gov (United States)

    Putman, E.; Sheridan, R.; Popescu, S. C.

    2015-12-01

    The evolution of lightweight Vertical Take-Off and Landing (VTOL) rotary Unmanned Aerial Vehicles (UAVs) and remote sensor technologies has provided researchers with the ability to integrate compact remote sensing systems with UAVs to create Unmanned Aerial Systems (UASs) capable of collecting high-resolution airborne remote sensing data. UASs offer a myriad of benefits. Some of the most notable include: (1) reduced operational cost; (2) reduced lead-time for mission planning; (3) high-resolution and high-density data collection; and (4) customization of data collection intervals to fit the needs of a specific project (i.e. acquiring data at hourly, daily, or weekly intervals). Such benefits allow researchers and natural resource managers to acquire airborne remote sensing data on local-scale phenomena in ways that were previously cost-prohibitive. VTOL UASs also offer a stable platform capable of low-speed, low-altitude flight over small spatial scales that does not require a dedicated runway. Such flight characteristics allow VTOL UASs to collect high-resolution data at very high densities, enabling the use of structure from motion (SfM) techniques to generate three-dimensional datasets from photographs. When combined, these characteristics make VTOL UASs ideal for collecting data over agricultural or forested research areas. The goal of this study is to provide an overview of several lightweight eight-rotor VTOL UASs designed for small-scale forest remote sensing data collection. Specific objectives include: (1) the independent integration of a lightweight multispectral camera and a lightweight scanning lidar sensor, with required components (i.e. IMU, GPS, data logger), and the UAV; (2) comparison of UAS-collected data to terrestrial lidar data and airborne multispectral and lidar data; (3) comparison of UAS SfM techniques to terrestrial lidar data; and (4) multi-temporal assessment of tree decay using terrestrial lidar and UAS SfM techniques.

  1. New Models and Methods for the Electroweak Scale

    Energy Technology Data Exchange (ETDEWEB)

    Carpenter, Linda [The Ohio State Univ., Columbus, OH (United States). Dept. of Physics

    2017-09-26

    This is the Final Technical Report to the US Department of Energy for grant DE-SC0013529, New Models and Methods for the Electroweak Scale, covering the time period April 1, 2015 to March 31, 2017. The goal of this project was to maximize the understanding of fundamental weak scale physics in light of current experiments, mainly the ongoing run of the Large Hadron Collider and the space-based satellite experiments searching for signals of Dark Matter annihilation or decay. This research program focused on the phenomenology of supersymmetry, Higgs physics, and Dark Matter. The properties of the Higgs boson are currently being measured by the Large Hadron Collider, and could be a sensitive window into new physics at the weak scale. Supersymmetry is the leading theoretical candidate to explain the naturalness of the electroweak theory; however, new model space must be explored, as the Large Hadron Collider has disfavored much of the minimal model parameter space. In addition, the nature of Dark Matter, the mysterious particle that makes up 25% of the mass of the universe, is still unknown. This project sought to address measurements of the Higgs boson couplings to the Standard Model particles, new LHC discovery scenarios for supersymmetric particles, and new measurements of Dark Matter interactions with the Standard Model, both in collider production and annihilation in space. Accomplishments include creating new tools for analyses of models in which Dark Matter annihilates into multiple Standard Model particles, including new visualizations of bounds for models with various Dark Matter branching ratios; benchmark studies for new discovery scenarios of Dark Matter at the Large Hadron Collider for Higgs-Dark Matter and gauge boson-Dark Matter interactions; new target analyses to detect direct decays of the Higgs boson into challenging final states like pairs of light jets; and new phenomenological analysis of non-minimal supersymmetric models, namely the set of Dirac

  2. Quartic oscillator potential in the γ-rigid regime of the collective geometrical model

    Energy Technology Data Exchange (ETDEWEB)

    Budaca, R. [Horia Hulubei National Institute of Physics and Nuclear Engineering, Bucharest-Magurele (Romania)

    2014-05-15

    A prolate γ-rigid version of the Bohr-Mottelson Hamiltonian with a quartic anharmonic oscillator potential in the β collective shape variable is used to describe the spectra of a variety of vibrational-like nuclei. Exploiting the exact separation between the two Euler angles and the β variable, one arrives at a differential Schrödinger equation with a quartic anharmonic oscillator potential and a centrifugal-like barrier. The corresponding eigenvalue is approximated by an analytical formula depending only on a single parameter, up to an overall scaling factor. The applicability of the model is discussed in connection with the existence interval of the free parameter, which is limited by the accuracy of the approximation, and by comparison with the predictions of the related X(3) and X(3)-β² models. The model is applied to qualitatively describe the spectra of nine nuclei which exhibit near-vibrational features. (orig.)
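
    The separated β equation underlying this record can be sketched as follows (our reconstruction from the standard X(3)-type construction; the normalization of the potential and the role of the single free parameter g are assumptions, not taken from the paper):

```latex
\left[-\frac{1}{\beta^{2}}\frac{d}{d\beta}\,\beta^{2}\frac{d}{d\beta}
      +\frac{L(L+1)}{3\beta^{2}}
      +\beta^{2}+g\,\beta^{4}\right]\xi_{L}(\beta)
  =\varepsilon\,\xi_{L}(\beta)
```

    Here the L(L+1)/(3β²) term is the centrifugal-like barrier produced by the exact separation of the Euler angles from β, and ε is measured in units of the overall scaling factor mentioned in the abstract.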

  3. Genome-scale constraint-based modeling of Geobacter metallireducens

    Directory of Open Access Journals (Sweden)

    Famili Iman

    2009-01-01

    Full Text Available Abstract Background Geobacter metallireducens was the first organism shown to grow in pure culture while completely oxidizing organic compounds with Fe(III) oxide serving as the electron acceptor. Geobacter species, including G. sulfurreducens and G. metallireducens, are used for bioremediation and electricity generation from waste organic matter and renewable biomass. The constraint-based modeling approach enables the development of genome-scale in silico models that can predict the behavior of complex biological systems and their responses to the environment. Such a modeling approach was applied to provide physiological and ecological insights into the metabolism of G. metallireducens. Results The genome-scale metabolic model of G. metallireducens was constructed to include 747 genes and 697 reactions. Compared to the G. sulfurreducens model, the G. metallireducens metabolic model contains 118 unique reactions that reflect many of G. metallireducens' specific metabolic capabilities. Detailed examination of the G. metallireducens model suggests that its central metabolism contains several energy-inefficient reactions that are not present in the G. sulfurreducens model. The experimental biomass yield of G. metallireducens growing on pyruvate was lower than the predicted optimal biomass yield. Microarray data of G. metallireducens growing with benzoate and acetate indicated that genes encoding these energy-inefficient reactions were up-regulated by benzoate. These results suggested that the energy-inefficient reactions were likely turned off during G. metallireducens growth with acetate for optimal biomass yield, but were up-regulated during growth with complex electron donors such as benzoate for rapid energy generation. Furthermore, several computational modeling approaches were applied to accelerate G. metallireducens research. For example, growth of G. metallireducens with different electron donors and electron acceptors was studied using the genome-scale
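
    The constraint-based (flux balance) approach described in this record reduces to a linear program: maximize a biomass flux subject to steady-state mass balance S·v = 0 and flux bounds. A toy sketch with a hypothetical three-reaction network (not the published 697-reaction model), using scipy:

```python
import numpy as np
from scipy.optimize import linprog

# Toy stoichiometric matrix (hypothetical network, for illustration only):
# metabolites A, B; reactions: R1 uptake -> A, R2 A -> B, R3 B -> biomass
S = np.array([
    [1, -1,  0],   # A: produced by R1, consumed by R2
    [0,  1, -1],   # B: produced by R2, consumed by R3
])
# Maximize flux through R3 (biomass); linprog minimizes, so negate c.
c = np.array([0.0, 0.0, -1.0])
bounds = [(0, 10), (0, 1000), (0, 1000)]  # substrate uptake capped at 10 units
res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print(res.x)  # optimal flux distribution; biomass flux equals the uptake cap
```

    Predictions such as optimal biomass yield on pyruvate come from exactly this kind of optimization, with the real model's stoichiometry and exchange bounds in place of the toy values.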

  4. Comparing the Hydrologic and Watershed Processes between a Full Scale Stochastic Model Versus a Scaled Physical Model of Bell Canyon

    Science.gov (United States)

    Hernandez, K. F.; Shah-Fairbank, S.

    2016-12-01

    The San Dimas Experimental Forest has been designated as a research area by the United States Forest Service for use as a hydrologic testing facility since 1933 to investigate the watershed hydrology of its 27 square miles of land. Incorporation of a computer model provides validity to the testing of the physical model. This study focuses on the San Dimas Experimental Forest's Bell Canyon, one of the triad of watersheds contained within the Big Dalton watershed of the San Dimas Experimental Forest. A scaled physical model of Bell Canyon was constructed to highlight watershed characteristics and their individual effects on runoff. The physical model offers a comprehensive visualization of a natural watershed and can vary rainfall intensity, slope, and roughness through interchangeable parts and adjustments to the system. The scaled physical model is validated and calibrated through a HEC-HMS model to assure similitude of the system. Preliminary results of the physical model suggest that a 50-year storm event can be represented by a peak discharge of 2.2 × 10⁻³ cfs. When comparing the results to HEC-HMS, this equates to a flow relationship of approximately 1:160,000, which can be used to model other return periods. The completed Bell Canyon physical model can be used for educational instruction in the classroom, outreach in the community, and further research as an accurate representation of the watershed present in the San Dimas Experimental Forest.
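
    The reported flow relationship lets model measurements be rescaled to prototype discharges directly; a minimal sketch using only the two numbers quoted in the abstract:

```python
# Numbers from the abstract: model peak discharge for the 50-year event and
# the approximate prototype-to-model flow ratio derived via HEC-HMS.
q_model = 2.2e-3       # cfs, measured in the scaled physical model
flow_ratio = 160_000   # approximate prototype:model discharge ratio
q_prototype = q_model * flow_ratio
print(q_prototype)     # prototype-scale 50-year peak, in cfs
```

    The same multiplication, applied to model runs for other storm intensities, is what allows the physical model to represent other return periods.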

  5. User-experience surveys with maternity services: a randomized comparison of two data collection models.

    Science.gov (United States)

    Bjertnaes, Oyvind Andresen; Iversen, Hilde Hestad

    2012-08-01

    To compare two ways of combining postal and electronic data collection for a maternity services user-experience survey. Cross-sectional survey. Maternity services in Norway. All women who gave birth at a university hospital in Norway between 1 June and 27 July 2010. Patients were randomized into the following groups (n = 752): Group A, who were posted questionnaires with both electronic and paper response options for both the initial and reminder postal requests; and Group B, who were posted questionnaires with an electronic response option for the initial request, and both electronic and paper response options for the reminder postal request. Outcome measures were the response rate, the amount of difference in background variables between respondents and non-respondents, the main study results and estimated cost-effectiveness. The final response rate was significantly higher in Group A (51.9%) than in Group B (41.1%). None of the background variables differed significantly between respondents and non-respondents in Group A, while two variables differed significantly between respondents and non-respondents in Group B. None of the 11 user-experience scales differed significantly between Groups A and B. The estimated costs per response for the forthcoming national survey were €11.7 for data collection Model A and €9.0 for Model B. The model with an electronic-only response option in the first request had the lower response rate. However, this model performed equally to the other model on non-response bias and better on estimated cost-effectiveness, and is the better of the two models for large-scale user-experience surveys with maternity services.
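
    The significance of the response-rate difference (51.9% vs 41.1%) can be checked with a standard two-proportion z-test. The sketch below assumes an even 376/376 split of the 752 randomized women, which is our assumption; the abstract reports only the total:

```python
from math import sqrt, erf

# Two-proportion z-test on the reported response rates.
n_a = n_b = 376              # assumed even split of the 752 randomized women
p_a, p_b = 0.519, 0.411      # response rates from the abstract
p_pool = (p_a * n_a + p_b * n_b) / (n_a + n_b)
se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_a - p_b) / se
# Two-sided p-value from the standard normal CDF, Phi(x) = (1 + erf(x/sqrt 2))/2
p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
print(round(z, 2), p_value < 0.05)
```

    With these inputs z is close to 3, consistent with the abstract's claim that the difference is significant.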

  6. Large scale modelling of catastrophic floods in Italy

    Science.gov (United States)

    Azemar, Frédéric; Nicótina, Ludovico; Sassi, Maximiliano; Savina, Maurizio; Hilberts, Arno

    2017-04-01

    The RMS European Flood HD model® is a suite of country-scale flood catastrophe models covering 13 countries throughout continental Europe and the UK. The models are developed with the goal of supporting risk assessment analyses for the insurance industry. Within this framework RMS is developing a hydrologic and inundation model for Italy. The model aims at reproducing the hydrologic and hydraulic properties across the domain through a modeling chain. A semi-distributed hydrologic model that captures the spatial variability of the runoff formation processes is coupled with a one-dimensional river routing algorithm and a two-dimensional (depth-averaged) inundation model. This model setup captures the flood risk from both pluvial (overland flow) and fluvial flooding. Here we describe the calibration and validation methodologies for this modelling suite applied to the Italian river basins. The variability that characterizes the domain (in terms of meteorology, topography and hydrologic regimes) requires a modeling approach able to represent a broad range of meteo-hydrologic regimes. The calibration of the rainfall-runoff and river routing models is performed by means of a genetic algorithm that identifies the set of best-performing parameters within the search space over the last 50 years. We first establish the quality of the calibrated parameters on the full hydrologic balance and on individual discharge peaks by comparing extreme statistics to observations over the calibration period at several stations. The model is then used to analyze the major floods in the country; we discuss the different meteorological setups leading to the historical events and the physical mechanisms that induced these floods. We can thus assess the performance of RMS' hydrological model in view of the physical mechanisms leading to floods and highlight the main controls on flood risk modelling throughout the country. The model's ability to accurately simulate antecedent

  7. Large Scale Computing for the Modelling of Whole Brain Connectivity

    DEFF Research Database (Denmark)

    Albers, Kristoffer Jon

    of nodes with a shared connectivity pattern. Modelling the brain in great detail on a whole-brain scale is essential to fully understand the underlying organization of the brain and reveal the relations between structure and function that allow sophisticated cognitive behaviour to emerge from ensembles...... of neurons. Relying on Markov Chain Monte Carlo (MCMC) simulations as the workhorse in Bayesian inference, however, poses significant computational challenges, especially when modelling networks at the scale and complexity supported by high-resolution whole-brain MRI. In this thesis, we present how to overcome...... these computational limitations and apply Bayesian stochastic block models for unsupervised data-driven clustering of whole-brain connectivity in full image resolution. We implement high-performance software that allows us to efficiently apply stochastic block modelling with MCMC sampling on large complex networks...

  8. Modeling fast and slow earthquakes at various scales.

    Science.gov (United States)

    Ide, Satoshi

    2014-01-01

    Earthquake sources represent dynamic rupture within rocky materials at depth and often can be modeled as propagating shear slip controlled by friction laws. These laws provide boundary conditions on fault planes embedded in elastic media. Recent developments in observation networks, laboratory experiments, and methods of data analysis have expanded our knowledge of the physics of earthquakes. Newly discovered slow earthquakes are qualitatively different phenomena from ordinary fast earthquakes and provide independent information on slow deformation at depth. Many numerical simulations have been carried out to model both fast and slow earthquakes, but problems remain, especially with scaling laws. Some mechanisms are required to explain the power-law nature of earthquake rupture and the lack of characteristic length. Conceptual models that include a hierarchical structure over a wide range of scales would be helpful for characterizing diverse behavior in different seismic regions and for improving probabilistic forecasts of earthquakes.
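
    The power-law statistics the record refers to are conventionally summarized by the Gutenberg-Richter law, log₁₀ N(≥M) = a − b·M. A small illustrative sketch (ours, not from the paper) showing the scale-free character of such a magnitude distribution:

```python
import math
import random

# If magnitudes obey Gutenberg-Richter with b-value b, magnitudes above a
# cutoff m_min are exponentially distributed with rate b*ln(10).
random.seed(0)
b, m_min = 1.0, 2.0
mags = [m_min + random.expovariate(b * math.log(10)) for _ in range(100_000)]
# Empirical check: with b = 1, raising the threshold by one magnitude unit
# should cut the count of events roughly tenfold, at any threshold --
# the "lack of characteristic length" in statistical form.
n3 = sum(m >= 3.0 for m in mags)
n4 = sum(m >= 4.0 for m in mags)
print(n3 / n4)  # close to 10
```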

  9. The legacy effects of keystone individuals on collective behaviour scale to how long they remain within a group.

    Science.gov (United States)

    Pruitt, Jonathan N; Pinter-Wollman, Noa

    2015-09-07

    The collective behaviour of social groups is often strongly influenced by one or few individuals, termed here 'keystone individuals'. We examined whether the influence of keystone individuals on collective behaviour lingers after their departure and whether these lingering effects scale with their tenure in the group. In the social spider, Stegodyphus dumicola, colonies' boldest individuals wield a disproportionately large influence over colony behaviour. We experimentally manipulated keystones' tenure in laboratory-housed colonies and tracked their legacy effects on collective prey capture following their removal. We found that bolder keystones caused more aggressive collective foraging behaviour and catalysed greater inter-individual variation in boldness within their colonies. The longer keystones remained in a colony, the longer both of these effects lingered after their departure. Our data demonstrate that, long after their disappearance, keystones have large and lasting effects on social dynamics at both the individual and colony levels. © 2015 The Authors.

  10. Classical scale invariance in the inert doublet model

    Energy Technology Data Exchange (ETDEWEB)

    Plascencia, Alexis D. [Institute for Particle Physics Phenomenology, Department of Physics,Durham University, Durham DH1 3LE (United Kingdom)

    2015-09-04

    The inert doublet model (IDM) is a minimal extension of the Standard Model (SM) that can account for the dark matter in the universe. Naturalness arguments motivate us to study whether the model can be embedded into a theory with dynamically generated scales. In this work we study a classically scale invariant version of the IDM with a minimal hidden sector, which has a U(1){sub CW} gauge symmetry and a complex scalar Φ. The mass scale is generated in the hidden sector via the Coleman-Weinberg (CW) mechanism and communicated to the two Higgs doublets via portal couplings. Since the CW scalar remains light, acquires a vacuum expectation value and mixes with the SM Higgs boson, the phenomenology of this construction can be modified with respect to the traditional IDM. We analyze the impact of adding this CW scalar and the Z{sup ′} gauge boson on the calculation of the dark matter relic density and on the spin-independent nucleon cross section for direct detection experiments. Finally, by studying the RG equations we find regions in parameter space which remain valid all the way up to the Planck scale.

  11. Utilization of Large Scale Surface Models for Detailed Visibility Analyses

    Science.gov (United States)

    Caha, J.; Kačmařík, M.

    2017-11-01

    This article demonstrates the utilization of large-scale surface models with small spatial resolution and high accuracy, acquired from Unmanned Aerial Vehicle scanning, for visibility analyses. The importance of large-scale data for visibility analyses on the local scale, where the detail of the surface model is the most defining factor, is described. The focus is not only on the classic Boolean visibility that is usually determined within GIS, but also on so-called extended viewsheds that aim to provide more information about visibility. The case study with examples of visibility analyses was performed on the river Opava, near the city of Ostrava (Czech Republic). A multiple Boolean viewshed analysis and a global horizon viewshed were calculated to determine the most prominent features and visibility barriers of the surface. Besides that, an extended viewshed showing the angle difference above the local horizon, which describes the angular height of the target area above the barrier, is shown. The case study proved that large-scale models are an appropriate data source for visibility analyses on the local level. The discussion summarizes possible future applications and further development directions of visibility analyses.
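
    The Boolean visibility at the core of such analyses is a line-of-sight test against the surface model. A minimal sketch on a 1-D terrain profile (a toy stand-in for a full 2-D viewshed; the function and numbers are ours, not from the article):

```python
import numpy as np

# A target cell is visible when it rises above the steepest sight line
# to any intermediate cell (the running-maximum-slope formulation).
def visible(profile, observer=0, observer_height=1.8):
    z0 = profile[observer] + observer_height
    vis = np.zeros(len(profile), dtype=bool)
    vis[observer] = True
    max_slope = -np.inf
    for i in range(observer + 1, len(profile)):
        slope = (profile[i] - z0) / (i - observer)
        if slope > max_slope:      # cell pokes above every barrier so far
            vis[i] = True
            max_slope = slope
        # cells at or below the sight line of an earlier barrier stay hidden
    return vis

profile = np.array([100.0, 101, 105, 102, 103, 110, 101])
print(visible(profile))  # the 105 m and 110 m cells act as barriers
```

    The extended viewsheds discussed in the article keep the computed slope (angle) differences instead of reducing them to a True/False flag.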

  12. Deconfined Quantum Criticality, Scaling Violations, and Classical Loop Models

    Directory of Open Access Journals (Sweden)

    Adam Nahum

    2015-12-01

    Full Text Available Numerical studies of the transition between Néel and valence bond solid phases in two-dimensional quantum antiferromagnets give strong evidence for the remarkable scenario of deconfined criticality, but display strong violations of finite-size scaling that are not yet understood. We show how to realize the universal physics of the Néel–valence-bond-solid (VBS) transition in a three-dimensional classical loop model (this model includes the subtle interference effect that suppresses hedgehog defects in the Néel order parameter). We use the loop model for simulations of unprecedentedly large systems (up to linear size L=512). Our results are compatible with a continuous transition at which both Néel and VBS order parameters are critical, and we do not see conventional signs of first-order behavior. However, we show that the scaling violations are stronger than previously realized and are incompatible with conventional finite-size scaling, even if allowance is made for a weakly or marginally irrelevant scaling variable. In particular, different approaches to determining the anomalous dimensions η_{VBS} and η_{Néel} yield very different results. The assumption of conventional finite-size scaling leads to estimates that drift to negative values at large sizes, in violation of the unitarity bounds. In contrast, the decay with distance of critical correlators on scales much smaller than system size is consistent with large positive anomalous dimensions. Barring an unexpected reversal in behavior at still larger sizes, this implies that the transition, if continuous, must show unconventional finite-size scaling, for example, from an additional dangerously irrelevant scaling variable. Another possibility is an anomalously weak first-order transition. By analyzing the renormalization group flows for the noncompact CP^{n-1} field theory (the n-component Abelian Higgs model) between two and four dimensions, we give the simplest scenario by which an

  13. Modelling galaxy merger time-scales and tidal destruction

    Science.gov (United States)

    Simha, Vimal; Cole, Shaun

    2017-12-01

    We present a model for the dynamical evolution of subhaloes based on an approach combining numerical and analytical methods. Our method is based on tracking each subhalo in an N-body simulation up to the latest epoch at which it can be resolved, and then applying an analytic prescription for its merger time-scale that takes dynamical friction and tidal disruption into account. When applied to cosmological N-body simulations with mass resolutions that differ by two orders of magnitude, the technique produces halo occupation distributions that agree to within 3 per cent. This model has now been implemented in the GALFORM semi-analytic model of galaxy formation.
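
    For orientation, analytic merger time-scales of this kind are often expressed with a dynamical-friction fitting formula; the sketch below uses the widely cited Boylan-Kolchin et al. (2008) form, which is illustrative only and not necessarily the prescription adopted in this paper (the paper's version also folds in tidal disruption):

```python
import math

def t_merge_over_tdyn(mass_ratio, circularity=0.5, r_circ_over_rvir=1.0):
    """Merger time in units of the halo dynamical time.

    mass_ratio = M_host / M_satellite (> 1); circularity is the orbital
    circularity eta; fit coefficients from Boylan-Kolchin et al. (2008).
    """
    A, b, c, d = 0.216, 1.3, 1.9, 1.0
    return (A * mass_ratio**b / math.log(1 + mass_ratio)
            * math.exp(c * circularity) * r_circ_over_rvir**d)

print(t_merge_over_tdyn(10))   # a 10:1 merger takes several dynamical times
```

    The key qualitative behaviour is that minor mergers (large mass ratios) take many dynamical times, which is why unresolved subhaloes must be carried analytically rather than simply dropped.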

  14. Toward multi-scale computational modeling in developmental disability research.

    Science.gov (United States)

    Dammann, O; Follett, P

    2011-06-01

    The field of theoretical neuroscience is gaining increasing recognition. Virtually all areas of neuroscience offer potential linkage points for computational work. In developmental neuroscience, the main areas of research are neural development and connectivity, and connectionist modeling of cognitive development. In this paper, we suggest that computational models can be helpful tools for understanding the pathogenesis and consequences of perinatal brain damage and subsequent developmental disability. In particular, designing multi-scale computational models should be considered by developmental neuroscientists interested in helping reduce the risk for developmental disabilities. © Georg Thieme Verlag Stuttgart · New York.

  15. Modeling Habitat Associations for the Common Loon (Gavia immer) at Multiple Scales in Northeastern North America

    Directory of Open Access Journals (Sweden)

    Anne Kuhn

    2011-06-01

    Full Text Available Common Loon (Gavia immer) is considered an emblematic and ecologically important example of aquatic-dependent wildlife in North America. The northern breeding range of Common Loon has contracted over the last century as a result of habitat degradation from human disturbance and lakeshore development. We focused on the state of New Hampshire, USA, where a long-term monitoring program conducted by the Loon Preservation Committee has been collecting biological data on Common Loon since 1976. The Common Loon population in New Hampshire is distributed throughout the state across a wide range of lake-specific habitats, water quality conditions, and levels of human disturbance. We used a multiscale approach to evaluate the association of Common Loon and breeding habitat within three natural physiographic ecoregions of New Hampshire. These multiple scales reflect Common Loon-specific extents such as territories, home ranges, and lake-landscape influences. We developed ecoregional multiscale models and compared them to single-scale models to evaluate model performance in distinguishing Common Loon breeding habitat. Based on information-theoretic criteria, there is empirical support for both multiscale and single-scale models across all three ecoregions, warranting a model-averaging approach. Our results suggest that the Common Loon responds to both ecological and anthropogenic factors at multiple scales when selecting breeding sites. These multiscale models can be used to identify and prioritize the conservation of preferred nesting habitat for Common Loon populations.

  16. Atmospheric CO2 modeling at the regional scale: an intercomparison of 5 meso-scale atmospheric models

    Directory of Open Access Journals (Sweden)

    G. Pérez-Landa

    2007-12-01

    Full Text Available Atmospheric CO2 modeling, in interaction with the surface fluxes at the regional scale, is developed within the framework of the European project CarboEurope-IP and its Regional Experiment component. In this context, five meso-scale meteorological models at 2 km resolution participate in an intercomparison exercise. Using a common experimental protocol that imposes a large number of rules, two days of the CarboEurope Regional Experiment Strategy (CERES campaign are simulated. A systematic evaluation of the models against the observations is carried out using statistical tools and direct comparisons. Thus, comparisons between observations and simulations of temperature and relative humidity at 2 m, wind direction, surface energy and CO2 fluxes, vertical profiles of potential temperature, as well as in-situ CO2 concentrations are examined. These comparisons reveal a cold bias in the simulated temperature at 2 m, and the latent heat flux is often underestimated. Nevertheless, the CO2 concentration heterogeneities are well captured by most of the models. This intercomparison exercise also shows the models' ability to represent the meteorology and carbon cycling at the synoptic and regional scales in the boundary layer, but also points out some of the major shortcomings of the models.
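
    The statistical evaluation step in such an intercomparison typically reduces to bias and RMSE between simulated and observed series. A minimal sketch with synthetic numbers (not CERES data):

```python
import numpy as np

def bias(sim, obs):
    """Mean error; negative values indicate the model runs too cold/low."""
    return float(np.mean(sim - obs))

def rmse(sim, obs):
    """Root-mean-square error between simulation and observation."""
    return float(np.sqrt(np.mean((sim - obs) ** 2)))

obs = np.array([15.2, 16.8, 18.1, 19.0, 17.5])   # e.g. 2 m temperature, deg C
sim = np.array([14.1, 15.9, 17.2, 18.4, 16.3])   # a model run with a cold bias
print(bias(sim, obs), rmse(sim, obs))             # negative bias = too cold
```

    A systematically negative bias across stations is exactly the kind of signal behind the "cold bias" finding reported above.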

  17. Increasing process integrity in global scale water balance models

    Science.gov (United States)

    Plöger, Lisa; Mewes, Benjamin; Oppel, Henning; Schumann, Andreas

    2017-04-01

    Hydrological models on a global or continental scale are often used to model human impact on the water balance in data-scarce regions. Therefore, they are not validated against time series of runoff measured at gauges but against long-term estimates. The simplistic model GlobWat was introduced by the FAO to predict irrigation water demand for continental catchments based on open-source data. Originally, the model was not designed to process time series, but to estimate the water demand from long-term averages of precipitation and evapotranspiration. The emphasis of detail in GlobWat was therefore focused on crop evapotranspiration and water availability in agricultural regions. In our study we wanted to enhance the modelling detail for forest evapotranspiration on the one hand and for time series simulation on the other, while trying to keep the amount of input data as small as possible, or at least limited to open-source data. Our objectives derived from case studies in the forest-dominated catchments of the Danube and Mississippi. With the Penman-Monteith equation as the fundamental equation of the original GlobWat model, evapotranspiration losses in these regions could not be simulated adequately. As a consequence, the water availability of downstream regions dominated by agriculture might be overestimated and hence estimates of irrigation demand biased. We therefore implemented both a Shuttleworth & Calder and a Priestley-Taylor approach for the evapotranspiration calculation of forested areas. Both models are compared and evaluated based on a monthly time-series validation of the model against runoff series provided by the GRDC (Global Runoff Data Center). As an additional extension of the model we added a simple one-parameter snow routine. In our presentation we compare the different stages of modelling to demonstrate the options to extend and validate these models with observed data at an appropriate scale.
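
    The Priestley-Taylor approach mentioned above estimates potential evapotranspiration from the energy balance alone, PET = α · Δ/(Δ+γ) · (Rn − G)/λ. The sketch below uses standard textbook constants (FAO-56 values), not the study's configuration:

```python
# Priestley-Taylor potential evapotranspiration.
def priestley_taylor(net_radiation, soil_heat_flux, delta,
                     gamma=0.066, alpha=1.26, latent_heat=2.45):
    """net_radiation, soil_heat_flux in MJ m-2 day-1; delta, gamma in kPa K-1;
    latent_heat in MJ kg-1. Returns evapotranspiration in mm day-1."""
    return (alpha * delta / (delta + gamma)
            * (net_radiation - soil_heat_flux) / latent_heat)

# Example: a warm day; the slope of the saturation vapour pressure curve
# (delta) at ~20 deg C is about 0.145 kPa per K.
print(priestley_taylor(net_radiation=12.0, soil_heat_flux=1.0, delta=0.145))
```

    Because it needs only radiation and temperature-dependent terms, Priestley-Taylor suits the open-source-data constraint the authors set; Shuttleworth & Calder additionally accounts for interception losses in forests.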

  18. Comparing turbulent mixing of biogenic VOC across model scale

    Science.gov (United States)

    Li, Y.; Barth, M. C.; Steiner, A. L.

    2016-12-01

    Vertical mixing of biogenic volatile organic compounds (BVOC) in the planetary boundary layer (PBL) is very important in simulating the formation of ozone, secondary organic aerosols (SOA), and climate feedbacks. To assess the representation of vertical mixing in the atmosphere for the Baltimore-Washington DISCOVER-AQ 2011 campaign, we use two models of different scale and turbulence representation: (1) the National Center for Atmospheric Research's Large Eddy Simulation (LES) model, and (2) the Weather Research and Forecasting-Chemistry (WRF-Chem) model to simulate regional meteorology and chemistry. For WRF-Chem, we evaluate the boundary layer schemes in the model at convection-permitting scales (4 km). WRF-Chem simulated vertical profiles are compared with the results from the turbulence-resolving LES model under a similar meteorological and chemical environment. The influence of clouds on gas-phase and aqueous species and the impact of cloud processing at both scales are evaluated. The temporal evolution of a surface-to-cloud concentration ratio is calculated to determine how well WRF-Chem captures BVOC vertical mixing.

  19. Large-scale model of mammalian thalamocortical systems.

    Science.gov (United States)

    Izhikevich, Eugene M; Edelman, Gerald M

    2008-03-04

    The understanding of the structural and dynamic complexity of mammalian brains is greatly facilitated by computer simulations. We present here a detailed large-scale thalamocortical model based on experimental measures in several mammalian species. The model spans three anatomical scales. (i) It is based on global (white-matter) thalamocortical anatomy obtained by means of diffusion tensor imaging (DTI) of a human brain. (ii) It includes multiple thalamic nuclei and six-layered cortical microcircuitry based on in vitro labeling and three-dimensional reconstruction of single neurons of cat visual cortex. (iii) It has 22 basic types of neurons with appropriate laminar distribution of their branching dendritic trees. The model simulates one million multicompartmental spiking neurons calibrated to reproduce known types of responses recorded in vitro in rats. It has almost half a billion synapses with appropriate receptor kinetics, short-term plasticity, and long-term dendritic spike-timing-dependent synaptic plasticity (dendritic STDP). The model exhibits behavioral regimes of normal brain activity that were not explicitly built-in but emerged spontaneously as the result of interactions among anatomical and dynamic processes. We describe spontaneous activity, sensitivity to changes in individual neurons, emergence of waves and rhythms, and functional connectivity on different scales.
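
    The model's neurons are built on Izhikevich's simple spiking dynamics. This single-compartment sketch (illustrative only; the paper's neurons are multicompartmental) uses the classic regular-spiking parameters from Izhikevich (2003) with forward-Euler integration:

```python
# Izhikevich simple model: v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a(b v - u),
# with reset v <- c, u <- u + d whenever v reaches the 30 mV spike cutoff.
def izhikevich(I=10.0, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Integrate for T ms with step dt ms; return the number of spikes."""
    v, u, spikes = -65.0, -65.0 * b, 0
    for _ in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:          # spike: reset membrane and recovery variable
            v, u = c, u + d
            spikes += 1
    return spikes

print(izhikevich())  # tonic spiking under constant input current
```

    The other 21 basic neuron types in the model correspond to different (a, b, c, d) choices, which is what makes this formulation cheap enough for million-neuron simulations.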

  20. Bed form dynamics in distorted lightweight scale models

    Science.gov (United States)

    Aberle, Jochen; Henning, Martin; Ettmer, Bernd

    2016-04-01

    The adequate prediction of flow and sediment transport over bed forms presents a major obstacle for the solution of sedimentation problems in alluvial channels because bed forms affect hydraulic resistance, sediment transport, and channel morphodynamics. Moreover, bed forms can affect hydraulic habitat for biota, may introduce severe restrictions to navigation, and present a major problem for engineering structures such as water intakes and groynes. The main body of knowledge on the geometry and dynamics of bed forms such as dunes originates from laboratory and field investigations focusing on bed forms in sand bed rivers. Such investigations enable insight into the physics of the transport processes, but do not allow for the long term simulation of morphodynamic development as required to assess, for example, the effects of climate change on river morphology. On the other hand, this can be achieved through studies with distorted lightweight scale models allowing for the modification of the time scale. However, our understanding of how well bed form geometry and dynamics, and hence sediment transport mechanics, are reproduced in such models is limited. Within this contribution we explore this issue using data from investigations carried out at the Federal Waterways and Research Institute in Karlsruhe, Germany in a distorted lightweight scale model of the river Oder. The model had a vertical scale of 1:40 and a horizontal scale of 1:100, the bed material consisted of polystyrene particles, and the resulting dune geometry and dynamics were measured with a high spatial and temporal resolution using photogrammetric methods. Parameters describing both the directly measured and up-scaled dune geometry were determined using the random field approach. These parameters (e.g., standard deviation, skewness, kurtosis) will be compared to prototype observations as well as to results from the literature. 
Similarly, parameters describing the lightweight bed form dynamics, which
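
The conversion between model and prototype dimensions in such a distorted model is simple arithmetic. The sketch below uses the scale factors stated in the abstract (vertical 1:40, horizontal 1:100); the Froude-similarity time scale is a standard assumption of this example, not a statement from the source.

```python
# Up-scaling sketch for a distorted lightweight scale model.
Z_R = 40.0    # vertical scale factor (model 1:40)
X_R = 100.0   # horizontal scale factor (model 1:100)

def to_prototype(height_m, length_m):
    """Map a model dune (height, length) in metres to prototype dimensions."""
    return height_m * Z_R, length_m * X_R

# Distortion factor: model bed forms are this much steeper than prototype ones.
distortion = X_R / Z_R               # 2.5

# Froude similarity: velocities scale with sqrt(Z_R), so the time needed to
# cover horizontal distances scales as X_R / sqrt(Z_R).
time_ratio = X_R / Z_R ** 0.5        # ~15.8 prototype time units per model unit
```

The time-scale modification mentioned in the abstract is exactly this last ratio: one hour in the model corresponds to roughly 16 prototype hours under these (assumed) similarity rules.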

  1. MODEL OF GEOMEDIA CONTAINING DEFECTS: COLLECTIVE EFFECTS OF DEFECTS EVOLUTION DURING FORMATION OF POTENTIAL EARTHQUAKE FOCI

    Directory of Open Access Journals (Sweden)

    I. A. Panteleev

    2015-09-01

    transition is manifested by localized cataclastic deformation (i.e. a set of weak earthquakes), which migrates in space at a velocity several orders of magnitude lower than the speed of sound, as a ‘slow’ deformation wave (Fig. 3). Further reduction of the structural scaling parameter leads to degeneracy of the orientation meta-stability and formation of localized dissipative defect structures in the medium. Once the critical stress is reached, such structures develop in the blow-up regime, i.e. the mode of avalanche-unstable growth of defects in a localized area that eventually shrinks. At the scale of observation, this process is manifested as brittle fracturing that causes formation of a deformation zone, whose size is proportional to the scale of observation, and corresponds to the occurrence of a strong earthquake. On the basis of the proposed model of the behavior of a geomedium containing defects in a field of external stresses, it is possible to describe the main modes of stress relaxation in rock masses – brittle large-scale destruction and cataclastic deformation – as consequences of the collective behavior of defects, which is determined by the structural scaling parameter. Results of this study may prove useful for estimating critical stresses and assessing the state of the geomedium in seismically active regions, and may be viewed as model representations of the physical hypothesis of a uniform nature of the development of discontinuities/defects across a wide range of spatial scales.

  2. MODEL OF GEOMEDIA CONTAINING DEFECTS: COLLECTIVE EFFECTS OF DEFECTS EVOLUTION DURING FORMATION OF POTENTIAL EARTHQUAKE FOCI

    Directory of Open Access Journals (Sweden)

    I. A. Panteleev

    2013-01-01

    transition is manifested by localized cataclastic deformation (i.e. a set of weak earthquakes), which migrates in space at a velocity several orders of magnitude lower than the speed of sound, as a ‘slow’ deformation wave (Fig. 3). Further reduction of the structural scaling parameter leads to degeneracy of the orientation meta-stability and formation of localized dissipative defect structures in the medium. Once the critical stress is reached, such structures develop in the blow-up regime, i.e. the mode of avalanche-unstable growth of defects in a localized area that eventually shrinks. At the scale of observation, this process is manifested as brittle fracturing that causes formation of a deformation zone, whose size is proportional to the scale of observation, and corresponds to the occurrence of a strong earthquake. On the basis of the proposed model of the behavior of a geomedium containing defects in a field of external stresses, it is possible to describe the main modes of stress relaxation in rock masses – brittle large-scale destruction and cataclastic deformation – as consequences of the collective behavior of defects, which is determined by the structural scaling parameter. Results of this study may prove useful for estimating critical stresses and assessing the state of the geomedium in seismically active regions, and may be viewed as model representations of the physical hypothesis of a uniform nature of the development of discontinuities/defects across a wide range of spatial scales.

  3. Exploiting major trends in subject hierarchies for large-scale collection visualization

    Science.gov (United States)

    Julien, Charles-Antoine; Tirilly, Pierre; Leide, John E.; Guastavino, Catherine

    2012-01-01

    Many large digital collections are currently organized by subject; however, these useful information organization structures are large and complex, making them difficult to browse. Current online tools and visualization prototypes show small localized subsets and do not provide the ability to explore the predominant patterns of the overall subject structure. This research addresses this issue by simplifying the subject structure using two techniques that exploit the highly uneven distribution of real-world collections: level compression and child pruning. The approach is demonstrated using a sample of 130K records organized by the Library of Congress Subject Headings (LCSH). Promising results show that the subject hierarchy can be reduced to 42% of its initial size while maintaining access to 81% of the collection. The visual impact is demonstrated using a traditional outline view that allows searchers to dynamically change the amount of complexity they feel is necessary for the task at hand.
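
The two simplification steps can be sketched on a toy subject tree: "child pruning" drops subtrees holding less than a cutoff share of the records, and "level compression" splices out single-child intermediate levels. This is an illustrative sketch only; the paper's actual LCSH algorithm may differ in its details.

```python
# Toy hierarchy simplification: child pruning + level compression.

def prune_children(node, total, cutoff=0.01):
    """Drop children whose record count is below `cutoff` of the collection."""
    node["children"] = [c for c in node.get("children", [])
                        if c["count"] / total >= cutoff]
    for c in node["children"]:
        prune_children(c, total, cutoff)
    return node

def compress_levels(node):
    """Splice out single-child links by lifting grandchildren one level."""
    while len(node.get("children", [])) == 1 and node["children"][0].get("children"):
        node["children"] = node["children"][0]["children"]
    for c in node.get("children", []):
        compress_levels(c)
    return node

tree = {"name": "root", "count": 100, "children": [
    {"name": "A", "count": 90, "children": [
        {"name": "A1", "count": 60, "children": []},
        {"name": "A2", "count": 30, "children": []}]},
    {"name": "B", "count": 5, "children": []}]}

prune_children(tree, total=100, cutoff=0.1)   # drops B (5% < 10%)
compress_levels(tree)                          # lifts A1/A2 past the lone A
```

After both passes, the root points directly at the two substantial headings, which is the kind of size reduction the abstract reports at collection scale.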

  4. Compare pilot-scale and industry-scale models of pulverized coal combustion in an ironmaking blast furnace

    Science.gov (United States)

    Shen, Yansong; Yu, Aibing; Zulli, Paul

    2013-07-01

    In order to understand the complex phenomena of the pulverized coal injection (PCI) process in the blast furnace (BF), mathematical models have been developed at different scales: a pilot-scale model of coal combustion and an industry-scale model (in-furnace model) of coal/coke combustion in a real BF. This paper compares these PCI models in terms of model development and model capability. Model development is discussed with respect to formulation, new features, and the geometry/regions considered. Model capability is then discussed in terms of the main findings, followed by an evaluation of the models' advantages and limitations. Both PCI models are able to describe the PCI operation qualitatively; the in-furnace model is more reliable for simulating in-furnace phenomena of the PCI operation both qualitatively and quantitatively. These models are useful for understanding the flow-thermo-chemical behaviors and thus optimizing the PCI operation in practice.

  5. Multi-scale modeling of the CD8 immune response

    Science.gov (United States)

    Barbarroux, Loic; Michel, Philippe; Adimy, Mostafa; Crauste, Fabien

    2016-06-01

    During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to fight the infection and memorize the corresponding antigen. Once the pathogen is cleared, the only CD8 T-Cells left are antigen-specific memory cells, whose role is to respond more strongly and faster if they are presented with this same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself if it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-Cell immune response to vaccination, based on a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a system of delay differential equations, which gives the maturation speed of each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and the secondary responses.
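
The coupling described, an intracellular delay system driving the maturation speed of a structured population equation, has the following generic form. The notation here is assumed for illustration and may differ from the paper's own:

```latex
% n(t,m): density of CD8 T-Cells of maturity m at time t
% X(t):   intracellular protein levels; tau: regulatory delay
% v:      maturation speed, set by the intracellular state
\begin{align*}
  \dot{X}(t) &= F\bigl(X(t),\, X(t-\tau)\bigr)
      && \text{(intracellular delay system)}\\
  \partial_t n(t,m) + \partial_m\bigl(v(m)\,n(t,m)\bigr)
             &= \bigl(p(m) - d(m)\bigr)\, n(t,m)
      && \text{(maturity-structured transport)}
\end{align*}
```

Here p and d stand for maturity-dependent proliferation and death rates; the two scales communicate only through the speed v.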

  6. Multi-scale modeling of the CD8 immune response

    Energy Technology Data Exchange (ETDEWEB)

    Barbarroux, Loic, E-mail: loic.barbarroux@doctorant.ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Michel, Philippe, E-mail: philippe.michel@ec-lyon.fr [Inria, Université de Lyon, UMR 5208, Institut Camille Jordan (France); Ecole Centrale de Lyon, 36 avenue Guy de Collongue, 69134 Ecully (France); Adimy, Mostafa, E-mail: mostafa.adimy@inria.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France); Crauste, Fabien, E-mail: crauste@math.univ-lyon1.fr [Inria, Université de Lyon, UMR 5208, Université Lyon 1, Institut Camille Jordan, 43 Bd. du 11 novembre 1918, F-69200 Villeurbanne Cedex (France)

    2016-06-08

    During the primary CD8 T-Cell immune response to an intracellular pathogen, CD8 T-Cells undergo exponential proliferation and continuous differentiation, acquiring cytotoxic capabilities to fight the infection and memorize the corresponding antigen. Once the pathogen is cleared, the only CD8 T-Cells left are antigen-specific memory cells, whose role is to respond more strongly and faster if they are presented with this same antigen again. That is how vaccines work: a small quantity of a weakened pathogen is introduced into the organism to trigger the primary response, generating the corresponding memory cells in the process and giving the organism a way to defend itself if it encounters the same pathogen again. To investigate this process, we propose a nonlinear, multi-scale mathematical model of the CD8 T-Cell immune response to vaccination, based on a maturity-structured partial differential equation. At the intracellular scale, the level of expression of key proteins is modeled by a system of delay differential equations, which gives the maturation speed of each cell. The population of cells is modeled by a maturity-structured equation whose speeds are given by the intracellular model. We focus here on building the model as well as on its asymptotic study. Finally, we display numerical simulations showing that the model can reproduce the biological dynamics of the cell population for both the primary and the secondary responses.

  7. Multi-scale Modeling of the Evolution of a Large-Scale Nourishment

    Science.gov (United States)

    Luijendijk, A.; Hoonhout, B.

    2016-12-01

    Morphological predictions are often computed using a single morphological model, commonly forced with schematized boundary conditions representing the time scale of the prediction. Recent model developments now allow us to think and act differently. This study presents some recent developments in coastal morphological modeling, focusing on flexible meshes, flexible coupling between models operating at different time scales, and a recently developed morphodynamic model for the intertidal and dry beach. This integrated modeling approach is applied to the Sand Engine mega-nourishment in The Netherlands to illustrate the added value of the integrated approach in terms of both accuracy and computational efficiency. The state-of-the-art Delft3D Flexible Mesh (FM) model is applied at the study site under moderate wave conditions. One of its advantages is that the flexible mesh structure allows a better representation of the water exchange with the lagoon, and of the corresponding morphological behavior, than the curvilinear grid used in the previous version of Delft3D. The XBeach model is applied to compute the morphodynamic response to storm events in detail, incorporating long-wave effects on bed level changes. The recently developed aeolian transport and bed change model AeoLiS is used to compute bed changes in the intertidal and dry beach area. To enable flexible couplings between the three abovementioned models, a component-based environment has been developed using the BMI method. This allows a serial coupling of Delft3D FM and XBeach, steered by a control module that uses a hydrodynamic time series as input (see figure). In addition, a parallel online coupling, with information exchange at each time step, is made with the AeoLiS model, which predicts the bed level changes in the intertidal and dry beach area. This study presents the first years of evolution of the Sand Engine computed with the integrated modelling approach. 
Detailed comparisons
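
A serial model coupling of the kind described can be sketched as a control loop over BMI-style components. The method names below follow the Basic Model Interface convention (initialize / update / finalize), but the three model stubs are hypothetical stand-ins, not the actual Delft3D FM, XBeach, or AeoLiS bindings.

```python
# Sketch of a BMI-style coupling loop: pick the wave-driven model per step
# (serial coupling), advance the aeolian model every step (parallel coupling).

class ToyBMIModel:
    def __init__(self, name): self.name, self.t = name, 0.0
    def initialize(self, config=None): self.t = 0.0
    def update(self, dt): self.t += dt          # advance one model time step
    def finalize(self): pass

def run_coupled(storm_flags, dt=1.0):
    """Control loop driven by a per-step storm/calm classification."""
    calm, storm, aeolian = ToyBMIModel("FM"), ToyBMIModel("XBeach"), ToyBMIModel("AeoLiS")
    for m in (calm, storm, aeolian):
        m.initialize()
    log = []
    for is_storm in storm_flags:
        driver = storm if is_storm else calm    # serial: one driver at a time
        driver.update(dt)
        aeolian.update(dt)                      # parallel online coupling
        log.append(driver.name)
    for m in (calm, storm, aeolian):
        m.finalize()
    return log

sequence = run_coupled([False, True, False])    # calm, storm, calm
```

The control module in the study plays the role of `run_coupled` here: it decides, from the hydrodynamic time series, which model advances the morphology in each interval.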

  8. Modelling hydrological processes at different scales across Russian permafrost domain

    Science.gov (United States)

    Makarieva, Olga; Lebedeva, Lyudmila; Nesterova, Natalia; Vinogradova, Tatyana

    2017-04-01

    The project aims to study the interactions between permafrost and runoff generation processes across the Russian Arctic domain based on hydrological modelling. The uniqueness of the approach is a unified modelling framework which allows for coupled simulations of upper permafrost dynamics and streamflow generation at different scales (from the soil column to large watersheds). The basis of the project is the hydrological model Hydrograph (Vinogradov et al. 2011; Semenova et al. 2013, 2015; Lebedeva et al. 2015). The model algorithms combine physically based and conceptual approaches to the description of land hydrological cycle processes, which allows a balance to be maintained between the complexity of the model design and the use of limited input information. A method for modeling heat dynamics in soil is integrated into the model. The main parameters of the model are the physical properties of landscapes that can be measured (observed) in nature and are classified according to types of soil, vegetation, and other characteristics. A set of parameters specified for the studied catchments (analog basins) can be transferred to ungauged basins with similar types of underlying surface without calibration. Results of modelling from small research watersheds to large, poorly gauged river basins in different climate and landscape settings of the Russian Arctic (within the Yenisey, Lena, Yana, Indigirka and Kolyma river basins) will be presented. Based on the experience gained, methodological aspects of hydrological modelling approaches in permafrost environments will be discussed. The study is partially supported by the Russian Foundation for Basic Research, projects 16-35-50151 and 17-05-01138.

  9. Site-scale groundwater flow modelling of Aberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1998-12-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Aberg, which adopts input parameters from the Aespoe Hard Rock Laboratory in southern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the boundary conditions. The study uses HYDRASTAR, the SKB stochastic continuum groundwater modelling program, to compute the heads, the Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The nested modelling approach and the scale dependency of hydraulic conductivity raise a number of questions regarding the regional to site-scale mass balance and the method's self-consistency. The transfer of regional heads via constant-head boundaries preserves the regional pattern of recharge and discharge in the site-scale model, and the regional to site-scale mass balance is thought to be adequate. The upscaling method appears to be approximately self-consistent with respect to the median performance measures at various grid scales. A series of variant cases indicates that the study results are insensitive to alternative methods of transferring boundary conditions from the regional model to the site-scale model. The flow paths, travel times and simulated heads appear to be consistent with on-site observations and simple scoping calculations. The variabilities of the performance measures are quite high for the Base Case, but the
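
The Monte Carlo propagation step described above can be sketched in miniature: sample a lognormal hydraulic conductivity, derive an advective travel time per realization, and examine the spread of the resulting performance measure. All numbers below are generic illustrative values, not parameters from the SR 97 study.

```python
# Minimal Monte Carlo sketch: conductivity variability -> travel-time spread.
import random
import statistics

random.seed(1)
GRADIENT, POROSITY, PATH_LENGTH = 1e-3, 1e-4, 500.0  # generic assumed values

def travel_time(log10_k_mean=-8.0, log10_k_sd=1.0):
    k = 10 ** random.gauss(log10_k_mean, log10_k_sd)  # conductivity, m/s
    darcy_v = k * GRADIENT                            # Darcy velocity, m/s
    return PATH_LENGTH * POROSITY / darcy_v           # advective travel time, s

times = [travel_time() for _ in range(2000)]
median_t = statistics.median(times)
```

Because conductivity enters the travel time as a reciprocal, an order-of-magnitude spread in k produces an order-of-magnitude spread in travel times, which is why the study reports high variability for the Base Case performance measures.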

  10. Experimental exploration of diffusion panel labyrinth in scale model

    Science.gov (United States)

    Vance, Mandi M.

    Small rehearsal and performance venues often lack the rich reverberation found in larger spaces. Higini Arau-Puchades has designed and implemented a system of diffusion panels in the Orchestra Rehearsal Room at the Great Theatre Liceu and the Tonhalle St. Gallen that lengthen the reverberation time. These panels defy traditional room acoustics theory which holds that adding material to a room will shorten the reverberation time. This work explores several versions of Arau-Puchades' panels and room characteristics in scale model. Reverberation times are taken from room impulse response measurements in order to better understand the unusual phenomenon. Scale modeling enables many tests but has limitations in its accuracy due to the higher frequency range involved. Further investigations are necessary to establish how the sound energy interacts with the diffusion panels and confirm their validity in a range of applications.
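
Reverberation times in such studies are typically recovered from the measured impulse responses via Schroeder backward integration, with a frequency/time shift between model and full scale (in a 1:n model, measurement frequencies are n times higher and decays n times shorter). The sketch below applies the method to a synthetic exponential decay; the 1:10 scale factor is an assumption of this example, not the study's.

```python
# Schroeder backward integration on a synthetic impulse-response envelope.
import math

FS = 8000                 # sample rate, Hz (model scale)
T60_TRUE = 0.5            # decay time of the synthetic response, s

# Synthetic squared impulse response: pure exponential energy decay.
decay_db_per_s = 60.0 / T60_TRUE
energy = [10 ** (-decay_db_per_s * (i / FS) / 10) for i in range(FS)]

# Backward cumulative energy from the tail, expressed in dB re. the total.
tail, sch = 0.0, [0.0] * len(energy)
for i in range(len(energy) - 1, -1, -1):
    tail += energy[i]
    sch[i] = tail
sch_db = [10 * math.log10(s / sch[0]) for s in sch]

def first_below(level_db):
    return next(i for i, v in enumerate(sch_db) if v <= level_db)

# T20: decay time from -5 dB to -25 dB, extrapolated to the full 60 dB range.
t20 = (first_below(-25) - first_below(-5)) / FS
rt60_model = 3.0 * t20               # recovers ~0.5 s
rt60_full = rt60_model * 10          # for an assumed 1:10 scale model
```

The higher-frequency range the abstract mentions as a limitation follows from the same scaling: air absorption at the shifted frequencies does not scale like the geometry, which biases scale-model decay estimates.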

  11. A Rasch Model Analysis of the Mindful Attention Awareness Scale.

    Science.gov (United States)

    Goh, Hong Eng; Marais, Ida; Ireland, Michael James

    2017-04-01

    The Mindful Attention Awareness Scale was developed to measure individual differences in the tendency to be mindful. The current study examined the psychometric properties of the Mindful Attention Awareness Scale in a heterogeneous sample of 565 nonmeditators and 612 meditators using the polytomous Rasch model. The results showed that some items did not function the same way for these two groups. Overall, meditators had higher mean estimates than nonmeditators. The analysis identified a group of items as highly discriminating. Using a different model, Van Dam, Earleywine, and Borders in 2010 identified the same group of items as highly discriminating, and concluded that they were the items with the most information. Multiple pieces of evidence from the Rasch analysis showed that these items discriminate highly because of local dependence, hence do not supply independent information. We discussed how these different conclusions, based on similar findings, result from two very different paradigms in measurement.
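
The study uses the polytomous Rasch model; as a minimal sketch of the underlying idea, the dichotomous case below gives the probability of endorsing an item as a function of the person's trait level and the item's location (the polytomous model generalizes this to multiple response categories).

```python
# Dichotomous Rasch model: endorsement probability from person and item.
import math

def rasch_p(theta, b):
    """Probability that a person at trait level theta endorses an item of
    difficulty (location) b under the dichotomous Rasch model."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_hi = rasch_p(theta=1.0, b=0.0)   # ~0.731: person above the item location
p_lo = rasch_p(theta=-1.0, b=0.0)  # ~0.269: symmetric case below it
```

Local dependence, the issue the abstract raises, violates this model's assumption that responses are independent given theta, which is why locally dependent items appear spuriously "highly discriminating" without adding information.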

  12. Next-generation genome-scale models for metabolic engineering

    DEFF Research Database (Denmark)

    King, Zachary A.; Lloyd, Colton J.; Feist, Adam M.

    2015-01-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict...... examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering....
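
The core COBRA idea, a stoichiometric matrix S, a steady-state constraint S·v = 0, flux bounds, and an objective (biomass) flux to maximize, can be shown on a toy network small enough to solve by hand. Real genome-scale models hand exactly this structure to an LP solver; the three-reaction chain below is an illustrative stand-in.

```python
# Toy flux-balance sketch: linear chain  R1 (-> A), R2 (A -> B), R3 (B ->).
S = [
    [1, -1,  0],   # metabolite A: produced by R1, consumed by R2
    [0,  1, -1],   # metabolite B: produced by R2, consumed by R3 (biomass)
]
upper = [10.0, 8.0, 100.0]   # flux upper bounds; R2 is the bottleneck

# For a linear chain, S v = 0 forces v1 = v2 = v3, so the maximal biomass
# flux is the tightest bound along the chain:
v_opt = min(upper)           # 8.0
fluxes = [v_opt, v_opt, v_opt]

# Check the steady-state constraint S·v = 0 explicitly:
residual = [sum(S[i][j] * fluxes[j] for j in range(3)) for i in range(2)]
```

Strain-optimization methods built on COBRA then ask how knockouts or added reactions reshape this feasible flux space so that the desired product flux is coupled to growth.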

  13. Vegetable parenting practices scale. Item response modeling analyses.

    Science.gov (United States)

    Chen, Tzu-An; O'Connor, Teresia M; Hughes, Sheryl O; Beltran, Alicia; Baranowski, Janice; Diep, Cassandra; Baranowski, Tom

    2015-08-01

    To evaluate the psychometric properties of a vegetable parenting practices scale using multidimensional polytomous item response modeling, which enables assessing item fit to latent variables and the distributional characteristics of the items in comparison to the respondents. We also tested for differences in the way items function (called differential item functioning) across child gender, ethnicity, age, and household income groups. Parents of 3-5 year old children completed a self-reported vegetable parenting practices scale online. Vegetable parenting practices consisted of 14 effective vegetable parenting practices and 12 ineffective vegetable parenting practices items, each with three subscales (responsiveness, structure, and control). Multidimensional polytomous item response modeling was conducted separately on effective vegetable parenting practices and ineffective vegetable parenting practices. One effective vegetable parenting practice item did not fit the model well in the full sample or across demographic groups, and another was a misfit in differential item functioning analyses across child gender. Significant differential item functioning was detected across children's age and ethnicity groups, and more among effective vegetable parenting practices than ineffective vegetable parenting practices items. Wright maps showed the items covered only parts of the latent trait distribution: the harder- and easier-to-respond ends of the construct were not covered by items for effective vegetable parenting practices and ineffective vegetable parenting practices, respectively. Several effective vegetable parenting practices and ineffective vegetable parenting practices scale items functioned differently on the basis of a child's demographic characteristics; therefore, researchers should use these vegetable parenting practices scales with caution. Item response modeling should be incorporated in analyses of parenting practice questionnaires to better assess

  14. Modeling basin- and plume-scale processes of CO2 storage for full-scale deployment

    Energy Technology Data Exchange (ETDEWEB)

    Zhou, Q.; Birkholzer, J.T.; Mehnert, E.; Lin, Y.-F.; Zhang, K.

    2009-08-15

    Integrated modeling of basin- and plume-scale processes induced by full-scale deployment of CO2 storage was applied to the Mt. Simon Aquifer in the Illinois Basin. A three-dimensional mesh was generated with local refinement around 20 injection sites, with approximately 30 km spacing. A total annual injection rate of 100 Mt CO2 over 50 years was used. The CO2-brine flow at the plume scale and the single-phase flow at the basin scale were simulated. Simulation results show the overall shape of a CO2 plume consisting of a typical gravity-override subplume in the bottom injection zone of high injectivity and a pyramid-shaped subplume in the overlying multilayered Mt. Simon, indicating the important role of a secondary seal with relatively low-permeability and high-entry capillary pressure. The secondary-seal effect is manifested by retarded upward CO2 migration as a result of multiple secondary seals, coupled with lateral preferential CO2 viscous fingering through high-permeability layers. The plume width varies from 9.0 to 13.5 km at 200 years, indicating the slow CO2 migration and no plume interference between storage sites. On the basin scale, pressure perturbations propagate quickly away from injection centers, interfere after less than 1 year, and eventually reach basin margins. The simulated pressure buildup of 35 bar in the injection area is not expected to affect caprock geomechanical integrity. Moderate pressure buildup is observed in Mt. Simon in northern Illinois. However, its impact on groundwater resources is less than the hydraulic drawdown induced by long-term extensive pumping from overlying freshwater aquifers.

  15. Ecohydrologic Modeling of Hillslope Scale Processes in Dryland Ecosystems

    Science.gov (United States)

    Franz, T. E.; King, E. G.; Lester, A.; Caylor, K. K.; Nordbotten, J.; Celia, M. A.; Rodriguez-Iturbe, I.

    2008-12-01

    Dryland ecosystem processes are governed by complex interactions between the atmosphere, soil, and vegetation that are tightly coupled through the mass balance of water. At the scale of individual hillslopes, mass balance of water is dominated by mechanisms of water redistribution which require spatially explicit representation. Fully-resolved physical models of surface and subsurface processes require numerical routines that are not trivial to solve for the spatial (hillslope) and temporal (many plant generations) scales of ecohydrologic interest. In order to reduce model complexity, we have used small-scale field data to derive empirical surface flux terms for representative patches (bare soil, grass, and tree) in a dryland ecosystem of central Kenya. The model is coupled spatially in the subsurface by an analytical solution to the Boussinesq equation for a sloping slab. The semi-analytical model is spatially explicit driven by pulses of precipitation over a simulation period that represents many plant generations. By examining long-term model dynamics, we are able to investigate the principles of self-organization and optimization (maximization of plant water use and minimization of water lost to the system) of dryland ecosystems for various initial conditions and climatic variability. Precipitation records in central Kenya reveal a shift to more intense infrequent rain events with a constant annual total. The range of stable solutions of initial conditions and climatic variability are important to land management agencies for addressing current grazing practices and future policies. The model is a quantitative tool for addressing perturbations to the system and the overall sustainability of pastoralist activities in dryland ecosystems.

  16. Disaggregation, aggregation and spatial scaling in hydrological modelling

    Science.gov (United States)

    Becker, Alfred; Braun, Peter

    1999-04-01

    A typical feature of the land surface is its heterogeneity in terms of the spatial variability of land surface characteristics and parameters controlling physical/hydrological, biological, and other related processes. Different forms and degrees of heterogeneity need to be taken into account in hydrological modelling. The first part of the article concerns the conditions under which a disaggregation of the land surface into subareas of uniform or "quasihomogeneous" behaviour (hydrotopes or hydrological response units - HRUs) is indispensable. In a case study in northern Germany, it is shown that forests in contrast to arable land, areas with shallow groundwater in contrast to those with deep, water surfaces and sealed areas should generally be distinguished (disaggregated) in modelling, whereas internal heterogeneities within these hydrotopes can be assessed statistically, e.g., by areal distribution functions (soil water holding capacity, hydraulic conductivity, etc.). Models with hydrotope-specific parameters can be applied to calculate the "vertical" processes (fluxes, storages, etc.), and this, moreover, for hydrotopes of different area, and even for groups of distributed hydrotopes in a reference area (hydrotope classes), provided that the meteorological conditions are similar. Thus, a scaling problem does not really exist in this process domain. The primary domain for the application of scaling laws is that of lateral flows in landscapes and river basins. This is illustrated in the second part of the article, where results of a case study in Bavaria/Germany are presented and discussed. It is shown that scaling laws can be applied efficiently for the determination of the Instantaneous Unit Hydrograph (IUH) of the surface runoff system in river basins: simple scaling for basins larger than 43 km², and multiple scaling for smaller basins. Surprisingly, only two parameters were identified as important in the derived relations: the drainage area and, in some
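
A scaling law of the kind described, an IUH parameter expressed as a power law of drainage area with a regime change at the reported 43 km² threshold, can be sketched as follows. Only the threshold comes from the abstract; the functional form, coefficient, and exponents below are invented for illustration.

```python
# Illustrative IUH lag-time scaling with a simple/multiple scaling split.
def iuh_lag_hours(area_km2, c=0.6, theta=0.35):
    """Toy lag-time relation: power law of drainage area (hypothetical values)."""
    if area_km2 >= 43.0:
        return c * area_km2 ** theta                  # simple scaling: one exponent
    # multiple scaling: the exponent itself drifts with scale (assumed form)
    theta_small = theta * (1.0 + 0.2 * (1.0 - area_km2 / 43.0))
    return c * area_km2 ** theta_small
```

"Simple scaling" means a single exponent holds across sizes; "multiple scaling" means the exponent is itself scale-dependent, which is what the smaller-basin branch mimics here.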

  17. Regional scale hydrology with a new land surface processes model

    Science.gov (United States)

    Laymon, Charles; Crosson, William

    1995-01-01

    Through the CaPE Hydrometeorology Project, we have developed an understanding of some of the unique data quality issues involved in assimilating data of disparate types for regional-scale hydrologic modeling within a GIS framework. Among others, the issues addressed here include the development of adequate validation of the surface water budget, implementation of the STATSGO soil data set, and implementation of a remote sensing-derived landcover data set to account for surface heterogeneity. A model of land surface processes has been developed and used in studies of the sensitivity of surface fluxes and runoff to soil and landcover characterization. Results of these experiments have raised many questions about how to treat the scale-dependence of land surface-atmosphere interactions on spatial and temporal variability. In light of these questions, additional modifications are being considered for the Marshall Land Surface Processes Model. It is anticipated that these techniques can be tested and applied in conjunction with GCIP activities over regional scales.

  18. Space Launch System Scale Model Acoustic Test Ignition Overpressure Testing

    Science.gov (United States)

    Nance, Donald K.; Liever, Peter A.

    2015-01-01

    The overpressure phenomenon is a transient fluid dynamic event occurring during rocket propulsion system ignition. This phenomenon results from fluid compression of the accelerating plume gas, subsequent rarefaction, and subsequent propagation from the exhaust trench and duct holes. The high-amplitude unsteady fluid-dynamic perturbations can adversely affect the vehicle and surrounding structure. Commonly known as ignition overpressure (IOP), this is an important design-to environment for the Space Launch System (SLS) that NASA is currently developing. Subscale testing is useful in validating and verifying the IOP environment. This was one of the objectives of the Scale Model Acoustic Test (SMAT), conducted at Marshall Space Flight Center (MSFC). The test data quantifies the effectiveness of the SLS IOP suppression system and improves the analytical models used to predict the SLS IOP environments. The reduction and analysis of the data gathered during the SMAT IOP test series requires identification and characterization of multiple dynamic events and scaling of the event waveforms to provide the most accurate comparisons to determine the effectiveness of the IOP suppression systems. The identification and characterization of the overpressure events, the waveform scaling, the computation of the IOP suppression system knockdown factors, and preliminary comparisons to the analytical models are discussed.

  19. Modeling Biology Spanning Different Scales: An Open Challenge

    Directory of Open Access Journals (Sweden)

    Filippo Castiglione

    2014-01-01

    Full Text Available It is becoming increasingly clear that, in order to obtain a unified description of the different mechanisms governing the behavior and causality relations among the various parts of a living system, the development of comprehensive computational and mathematical models at different space and time scales is required. This is one of the most formidable challenges of modern biology, characterized by the availability of huge amounts of high-throughput measurements. In this paper we draw attention to the importance of multiscale modeling in the framework of studies of biological systems in general, and of the immune system in particular.

  20. Model Predictive Control for a Small Scale Unmanned Helicopter

    Directory of Open Access Journals (Sweden)

    Jianfu Du

    2008-11-01

    Full Text Available Kinematical and dynamical equations of a small-scale unmanned helicopter are presented in the paper. Based on these equations, a model predictive control (MPC) method is proposed for controlling the helicopter. This novel method allows direct accounting for the existing time delays, which are used to model the dynamics of the actuators and the aerodynamics of the main rotor. The limits of the actuators are also taken into consideration during the controller design. The proposed control algorithm was verified in real flight experiments, where good performance was shown in position control mode.
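
The receding-horizon idea behind MPC can be sketched on a scalar toy model: at every step, optimize the control over a short prediction horizon, apply only the first move, and repeat. The helicopter dynamics, delays, and actuator models of the paper are replaced here by a scalar linear system with a simple input saturation; all values are assumed for illustration.

```python
# Minimal receding-horizon control loop on a toy scalar system.
A, B = 0.9, 0.5              # toy discrete-time dynamics: x+ = A x + B u
R, U_MAX, N = 0.1, 1.0, 8    # input weight, actuator limit, horizon length

def predict_cost(x, u_seq):
    """Quadratic cost of a candidate input sequence over the horizon."""
    cost = 0.0
    for u in u_seq:
        x = A * x + B * u
        cost += x * x + R * u * u
    return cost

def mpc_step(x, grid=21):
    # Crude optimizer: search constant-input candidates on a grid within the
    # actuator limits (a real MPC solves a constrained QP at this point).
    candidates = [-U_MAX + 2 * U_MAX * i / (grid - 1) for i in range(grid)]
    best = min(candidates, key=lambda u: predict_cost(x, [u] * N))
    return best                      # apply only the first move

x, traj = 5.0, []
for _ in range(30):
    u = mpc_step(x)
    x = A * x + B * u
    traj.append(x)
```

The actuator limit enters directly as the candidate range, which is the MPC way of "taking the limits of the actuators into consideration" rather than clipping a pre-computed control afterwards.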

  1. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models

    Science.gov (United States)

    King, Zachary A.; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A.; Ebrahim, Ali; Palsson, Bernhard O.; Lewis, Nathan E.

    2016-01-01

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. PMID:26476456

  2. BiGG Models: A platform for integrating, standardizing and sharing genome-scale models.

    Science.gov (United States)

    King, Zachary A; Lu, Justin; Dräger, Andreas; Miller, Philip; Federowicz, Stephen; Lerman, Joshua A; Ebrahim, Ali; Palsson, Bernhard O; Lewis, Nathan E

    2016-01-04

    Genome-scale metabolic models are mathematically-structured knowledge bases that can be used to predict metabolic pathway usage and growth phenotypes. Furthermore, they can generate and test hypotheses when integrated with experimental data. To maximize the value of these models, centralized repositories of high-quality models must be established, models must adhere to established standards and model components must be linked to relevant databases. Tools for model visualization further enhance their utility. To meet these needs, we present BiGG Models (http://bigg.ucsd.edu), a completely redesigned Biochemical, Genetic and Genomic knowledge base. BiGG Models contains more than 75 high-quality, manually-curated genome-scale metabolic models. On the website, users can browse, search and visualize models. BiGG Models connects genome-scale models to genome annotations and external databases. Reaction and metabolite identifiers have been standardized across models to conform to community standards and enable rapid comparison across models. Furthermore, BiGG Models provides a comprehensive application programming interface for accessing BiGG Models with modeling and analysis tools. As a resource for highly curated, standardized and accessible models of metabolism, BiGG Models will facilitate diverse systems biology studies and support knowledge-based analysis of diverse experimental data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  3. Radar altimetry assimilation in catchment-scale hydrological models

    Science.gov (United States)

    Bauer-Gottwein, P.; Michailovsky, C. I. B.

    2012-04-01

    Satellite-borne radar altimeters provide time series of river and lake levels with global coverage and moderate temporal resolution. Current missions can detect rivers down to a minimum width of about 100m, depending on local conditions around the virtual station. Water level time series from space-borne radar altimeters are an important source of information in ungauged or poorly gauged basins. However, many water resources management applications require information on river discharge. Water levels can be converted into river discharge by means of a rating curve, if sufficient and accurate information on channel geometry, slope and roughness is available. Alternatively, altimetric river levels can be assimilated into catchment-scale hydrological models. The updated models can subsequently be used to produce improved discharge estimates. In this study, a Muskingum routing model for a river network is updated using multiple radar altimetry time series. The routing model is forced with runoff produced by lumped-parameter rainfall-runoff models in each subcatchment. Runoff is uncertain because of errors in the precipitation forcing, structural errors in the rainfall-runoff model as well as uncertain rainfall-runoff model parameters. Altimetric measurements are translated into river reach storage based on river geometry. The Muskingum routing model is forced with a runoff ensemble and storages in the river reaches are updated using a Kalman filter approach. The approach is applied to the Zambezi and Brahmaputra river basins. Assimilation of radar altimetry significantly improves the capability of the models to simulate river discharge.
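
A minimal sketch of the routing-plus-update step, assuming a single Muskingum reach, an observation already mapped from altimetric storage to outflow, and invented parameter values (none of these come from the study):

```python
import numpy as np

rng = np.random.default_rng(0)

# Muskingum coefficients from travel time K, weighting x, time step dt
K, x_w, dt = 12.0, 0.2, 1.0
d = 2 * K * (1 - x_w) + dt
c0 = (dt - 2 * K * x_w) / d
c1 = (dt + 2 * K * x_w) / d
c2 = (2 * K * (1 - x_w) - dt) / d             # c0 + c1 + c2 = 1

n_ens = 50
outflow = np.full(n_ens, 10.0)                # ensemble of reach outflows
inflow_prev = np.full(n_ens, 10.0)

for t in range(24):
    # uncertain runoff forcing: nominal inflow plus ensemble noise
    inflow = 12.0 + rng.normal(0.0, 1.5, n_ens)
    outflow = c0 * inflow + c1 * inflow_prev + c2 * outflow
    inflow_prev = inflow

# assimilate one altimetry-derived observation (hypothetical value/error)
obs, r = 14.0, 0.5 ** 2
p = outflow.var(ddof=1)                       # ensemble forecast variance
gain = p / (p + r)                            # scalar Kalman gain
perturbed = obs + rng.normal(0.0, 0.5, n_ens)
analysis = outflow + gain * (perturbed - outflow)
print(analysis.mean(), analysis.var(ddof=1))  # pulled toward obs, spread reduced
```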

  4. Optimization-Based Artificial Bee Colony Algorithm for Data Collection in Large-Scale Mobile Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Yinggao Yue

    2016-01-01

    Full Text Available Data collection is a fundamental operation in various mobile wireless sensor network (MWSN) applications. The energy of nodes around the Sink can be prematurely depleted because sensor nodes must transmit vast amounts of data, readily forming a bottleneck in energy consumption; mobile wireless sensor networks have been designed to address this issue. In this study, we focused on a large-scale and intensive MWSN that allows a certain amount of data latency, balancing the mobile Sink across three objectives: data collection maximization, mobile path length minimization, and network reliability optimization. We also derived a corresponding formulation of the MWSN problem and proved that it is NP-hard. Traditional data collection methods focus only on increasing the amount of data collected or reducing the overall network energy consumption, which is why we designed the proposed heuristic algorithm to jointly consider cluster head selection, the routing path from ordinary nodes to the cluster head node, and mobile Sink path planning optimization. The proposed data collection algorithm for mobile Sinks is, in effect, based on artificial bee colony. Simulation results show that, in comparison with other algorithms, the proposed algorithm can effectively reduce data transmission, save energy, improve network data collection efficiency and reliability, and extend the network lifetime.
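
The employed/onlooker/scout cycle of a basic artificial bee colony can be sketched as follows; the objective here is a stand-in sphere function, not the paper's joint data-collection/path-length/reliability objective:

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x):
    # stand-in cost; in the paper this would combine data collected,
    # mobile-Sink path length and network reliability
    return float(np.sum(x ** 2))

n_food, dim, limit, iters = 10, 4, 20, 200
foods = rng.uniform(-5, 5, (n_food, dim))       # candidate solutions
costs = np.array([objective(f) for f in foods])
trials = np.zeros(n_food, dtype=int)            # stagnation counters

def neighbour(i):
    # perturb one dimension toward/away from a random other source
    k = rng.choice([j for j in range(n_food) if j != i])
    j = rng.integers(dim)
    cand = foods[i].copy()
    cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
    return cand

for _ in range(iters):
    # employed bees: greedy local search around every food source
    for i in range(n_food):
        cand = neighbour(i)
        c = objective(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1
    # onlooker bees: fitness-proportional selection of sources to refine
    fit = 1.0 / (1.0 + costs)
    probs = fit / fit.sum()
    for _ in range(n_food):
        i = rng.choice(n_food, p=probs)
        cand = neighbour(i)
        c = objective(cand)
        if c < costs[i]:
            foods[i], costs[i], trials[i] = cand, c, 0
    # scout bees: abandon exhausted sources and explore anew
    worn = trials > limit
    foods[worn] = rng.uniform(-5, 5, (int(worn.sum()), dim))
    costs[worn] = [objective(f) for f in foods[worn]]
    trials[worn] = 0

print(costs.min())    # best cost found
```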

  5. Using Scaling to Understand, Model and Predict Global Scale Anthropogenic and Natural Climate Change

    Science.gov (United States)

    Lovejoy, S.; del Rio Amador, L.

    2014-12-01

    The atmosphere is variable over twenty orders of magnitude in time (≈10^-3 to 10^17 s) and almost all of the variance is in the spectral "background", which we show can be divided into five scaling regimes: weather, macroweather, climate, macroclimate and megaclimate. We illustrate this with instrumental and paleo data. Based on the signs of the fluctuation exponent H, we argue that while the weather is "what you get" (H>0: fluctuations increasing with scale), it is macroweather (H<0: fluctuations decreasing with scale) that is "what you expect"; the conventional paradigm, which treats the background as white noise and focuses on quasi-periodic variability, assumes a spectrum that is in error by a factor of a quadrillion (≈ 10^15). Using this scaling framework, we can quantify the natural variability, distinguish it from anthropogenic variability, test various statistical hypotheses and make stochastic climate forecasts. For example, we estimate the probability that the warming is simply a giant century-long natural fluctuation to be less than 1%, most likely less than 0.1%, and estimate return periods for natural warming events of different strengths and durations, including the slowdown ("pause") in the warming since 1998. The return period for the pause was found to be 20-50 years, i.e. not very unusual; however, it immediately follows a 6-year "pre-pause" warming event of almost the same magnitude with a similar return period (30-40 years). To improve on these unconditional estimates, we can use scaling models to exploit the long-range memory of the climate process to make accurate stochastic forecasts of the climate, including the pause. We illustrate stochastic forecasts on monthly and annual scale series of global and northern hemisphere surface temperatures. We obtain forecast skill nearly as high as the theoretical (scaling) predictability limits allow: for example, using hindcasts we find that at 10-year forecast horizons we can still explain ≈ 15% of the anomaly variance. These scaling hindcasts have comparable - or smaller - RMS errors than existing GCM's. We discuss how these
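
A fluctuation exponent H of the kind discussed here can be estimated from Haar fluctuations (the mean absolute difference between half-window averages, as a function of window size); a minimal sketch on synthetic Brownian motion, for which H = 0.5, i.e. "weather-like":

```python
import numpy as np

rng = np.random.default_rng(2)

def haar_fluctuation(x, scales):
    """Mean absolute Haar fluctuation of series x at each scale (samples)."""
    out = []
    for s in scales:
        half = s // 2
        n = len(x) // s
        f = [abs(x[i * s + half:(i + 1) * s].mean() -
                 x[i * s:i * s + half].mean()) for i in range(n)]
        out.append(np.mean(f))
    return np.array(out)

# Brownian motion: fluctuations grow with scale, H = 0.5
x = np.cumsum(rng.normal(size=2 ** 14))
scales = np.array([4, 8, 16, 32, 64, 128, 256])
S = haar_fluctuation(x, scales)
H = np.polyfit(np.log(scales), np.log(S), 1)[0]   # log-log slope
print(round(H, 2))
```

A macroweather-like series (e.g. white noise averaged over windows) would instead give H < 0: fluctuations decreasing with scale.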

  6. Reconstructing genome-scale metabolic models with merlin.

    Science.gov (United States)

    Dias, Oscar; Rocha, Miguel; Ferreira, Eugénio C; Rocha, Isabel

    2015-04-30

    The Metabolic Models Reconstruction Using Genome-Scale Information (merlin) tool is a user-friendly Java application that aids the reconstruction of genome-scale metabolic models for any organism that has its genome sequenced. It performs the major steps of the reconstruction process, including the functional genomic annotation of the whole genome and subsequent construction of the portfolio of reactions. Moreover, merlin includes tools for the identification and annotation of genes encoding transport proteins, generating the transport reactions for those carriers. It also performs the compartmentalisation of the model, predicting the organelle localisation of the proteins encoded in the genome and thus the localisation of the metabolites involved in the reactions promoted by such enzymes. The gene-proteins-reactions (GPR) associations are automatically generated and included in the model. Finally, merlin expedites the transition from genomic data to draft metabolic model reconstructions exported in the SBML standard format, allowing the user to have a preliminary view of the biochemical network, which can be manually curated within the environment provided by merlin. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  7. Macro Scale Independently Homogenized Subcells for Modeling Braided Composites

    Science.gov (United States)

    Blinzler, Brina J.; Goldberg, Robert K.; Binienda, Wieslaw K.

    2012-01-01

    An analytical method has been developed to analyze the impact response of triaxially braided carbon fiber composites, including the penetration velocity and impact damage patterns. In the analytical model, the triaxial braid architecture is simulated by using four parallel shell elements, each of which is modeled as a laminated composite. Currently, each shell element is considered to be a smeared homogeneous material. The commercial transient dynamic finite element code LS-DYNA is used to conduct the simulations, and a continuum damage mechanics model internal to LS-DYNA is used as the material constitutive model. To determine the stiffness and strength properties required for the constitutive model, a top-down approach for determining the strength properties is merged with a bottom-up approach for determining the stiffness properties. The top-down portion uses global strengths obtained from macro-scale coupon level testing to characterize the material strengths for each subcell. The bottom-up portion uses micro-scale fiber and matrix stiffness properties to characterize the material stiffness for each subcell. Simulations of quasi-static coupon level tests for several representative composites are conducted along with impact simulations.
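
The bottom-up stiffness step can be illustrated with classical rule-of-mixtures bounds for one unidirectional subcell; the fiber and matrix values below are generic carbon/epoxy figures, not those used in the study:

```python
def rule_of_mixtures(e_fiber, e_matrix, vf):
    """Longitudinal (Voigt) and transverse (Reuss) stiffness bounds, GPa,
    for a unidirectional subcell with fiber volume fraction vf."""
    e1 = vf * e_fiber + (1 - vf) * e_matrix          # along the fibers
    e2 = 1.0 / (vf / e_fiber + (1 - vf) / e_matrix)  # across the fibers
    return e1, e2

# hypothetical carbon/epoxy: Ef = 230 GPa, Em = 3.5 GPa, Vf = 0.56
e1, e2 = rule_of_mixtures(230.0, 3.5, 0.56)
print(round(e1, 1), round(e2, 1))   # ≈ 130.3 GPa and ≈ 7.8 GPa
```

In the subcell approach, stiffnesses like these feed each shell element, while strengths come top-down from coupon tests.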

  8. Research of Model Scale Seawater Intrusion using Geoelectric Method

    Directory of Open Access Journals (Sweden)

    Supriyadi Supriyadi

    2011-08-01

    Full Text Available In-depth experience and knowledge are needed in analyzing and predicting seawater intrusion. We report here a physical model for monitoring seawater intrusion at model scale. The model used in this research is a glass basin consisting of two parts: soil and seawater. The intrusion of seawater into the soil in the glass basin is modelled. The results of 2-D inversion using the software Res2DInv32 showed that seawater intrusion, at the soil-model scale, can be detected using the Schlumberger-configuration resistivity method. The watering of freshwater into the soil caused the electrical resistivity value to decrease. This phenomenon can be seen from the change in the resistivity pseudosection before and after the watering process using different cumulative volumes of freshwater in different soils. After being intruded by the seawater, the measured soil resistivity is 2.22 Ωm – 5.69 Ωm, which indicates that the soil had been intruded.
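
For reference, the apparent resistivity in a Schlumberger sounding follows from the array's geometric factor; the reading below is hypothetical:

```python
import math

def schlumberger_rho_a(ab_half, mn_half, delta_v, current):
    """Apparent resistivity (ohm·m) for a Schlumberger array.

    ab_half: half current-electrode spacing AB/2 (m)
    mn_half: half potential-electrode spacing MN/2 (m)
    delta_v: measured potential difference (V), current: injected current (A)
    """
    k = math.pi * (ab_half ** 2 - mn_half ** 2) / (2 * mn_half)
    return k * delta_v / current

# hypothetical reading: AB/2 = 10 m, MN/2 = 1 m, 25 mV at 0.5 A
rho = schlumberger_rho_a(10.0, 1.0, 0.025, 0.5)
print(round(rho, 2))   # ≈ 7.78 ohm·m
```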

  9. Current state of genome-scale modeling in filamentous fungi.

    Science.gov (United States)

    Brandl, Julian; Andersen, Mikael R

    2015-06-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full capacity. One of the major bottlenecks in the development of new strains into viable industrial hosts is the alteration of the metabolism towards optimal production. Genome-scale models promise a reduction in the time needed for metabolic engineering by predicting the most potent targets in silico before testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi.

  10. Pore-scale modeling of wettability alteration during primary drainage

    Science.gov (United States)

    Kallel, W.; van Dijke, M. I. J.; Sorbie, K. S.; Wood, R.

    2017-03-01

    While carbonate reservoirs are recognized to be weakly-to-moderately oil-wet at the core-scale, pore-scale wettability distributions remain poorly understood. In particular, the wetting state of micropores remains unclear; their wettability alteration is attributed to the partitioning of polar non-hydrocarbon compounds from the oil-phase into the water-phase. We implement a diffusion/adsorption model for these compounds that triggers a wettability alteration from initially water-wet to intermediate-wet conditions. This mechanism is incorporated in a quasi-static pore-network model to which we add a notional time-dependency of the quasi-static invasion percolation mechanism. The model qualitatively reproduces experimental observations in which an early, rapid wettability alteration involving these small polar species occurred during primary drainage. Interestingly, we could invoke clear differences in the primary drainage patterns by varying both the extent of wettability alteration and the balance between the processes of oil invasion and wetting change. Combined, these parameters dictate the initial water saturation for waterflooding. Indeed, under conditions where oil invasion is slow compared to a fast and relatively strong wetting change, the model results in significant non-zero water saturations. However, for relatively fast oil invasion or small wetting changes, the model allows higher oil saturations at fixed maximum capillary pressures, and invasion of micropores at moderate capillary pressures.

  11. Use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio

    Directory of Open Access Journals (Sweden)

    Fidel Ernesto Castro Morales

    2016-03-01

    Full Text Available Abstract Objectives: to propose the use of a Bayesian hierarchical model to study the allometric scaling of the fetoplacental weight ratio, including possible confounders. Methods: data from 26 singleton pregnancies with gestational age at birth between 37 and 42 weeks were analyzed. The placentas were collected immediately after delivery and stored under refrigeration until the time of analysis, which occurred within 12 hours. Maternal data were collected from medical records. A Bayesian hierarchical model was proposed and Markov chain Monte Carlo simulation methods were used to obtain samples from the posterior distribution. Results: the model developed showed a reasonable fit, while allowing for the incorporation of covariates and of a priori information on the parameters. Conclusions: new variables can be added to the model from the available code, allowing many possibilities for data analysis and indicating the potential for use in research on the subject.
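
A minimal sketch of the MCMC idea on a simple (non-hierarchical) allometric regression, log W_placenta = a + b·log W_fetus, with synthetic data and illustrative priors; the study's actual model, confounders and data differ:

```python
import numpy as np

rng = np.random.default_rng(3)

# synthetic stand-in data: placental weight ~ c * fetal weight^b
# (n = 26 as in the study; all values are invented for illustration)
n = 26
log_fw = rng.normal(np.log(3300.0), 0.1, n)       # log fetal weight (g)
true_b, sigma = 0.78, 0.05
log_pw = np.log(0.2) + true_b * log_fw + rng.normal(0, sigma, n)

x = log_fw - log_fw.mean()                        # centre the covariate

def log_post(a, b):
    # Gaussian likelihood on log placental weight, weak normal priors
    resid = log_pw - (a + b * x)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2 - (a ** 2 + b ** 2) / 200.0

# random-walk Metropolis sampler
a, b = log_pw.mean(), 1.0
lp = log_post(a, b)
b_samples = []
for _ in range(20000):
    a_new = a + rng.normal(0, 0.02)
    b_new = b + rng.normal(0, 0.05)
    lp_new = log_post(a_new, b_new)
    if np.log(rng.uniform()) < lp_new - lp:       # accept/reject step
        a, b, lp = a_new, b_new, lp_new
    b_samples.append(b)

b_hat = np.mean(b_samples[10000:])                # posterior mean slope
print(round(b_hat, 2))                            # near the true exponent
```

A hierarchical version would add group-level parameters (e.g. per-confounder effects) with their own priors, sampled in the same loop.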

  12. Health Belief Model Scale for Human Papilloma Virus and its Vaccination: Adaptation and Psychometric Testing.

    Science.gov (United States)

    Guvenc, Gulten; Seven, Memnun; Akyuz, Aygul

    2016-06-01

    To adapt and psychometrically test the Health Belief Model Scale for Human Papilloma Virus (HPV) and Its Vaccination (HBMS-HPVV) for use in a Turkish population and to assess the Human Papilloma Virus Knowledge score (HPV-KS) among female college students. Instrument adaptation and psychometric testing study. The sample consisted of 302 nursing students at a nursing school in Turkey between April and May 2013. Questionnaire-based data were collected from the participants. Information regarding the HBMS-HPVV, HPV knowledge and descriptive characteristics of the participants was collected using the translated HBMS-HPVV and the HPV-KS. Test-retest reliability was evaluated, Cronbach α was used to assess internal consistency reliability, and exploratory factor analysis was used to assess construct validity of the HBMS-HPVV. The scale consists of 4 subscales that measure 4 constructs of the Health Belief Model covering the perceived susceptibility and severity of HPV and the benefits and barriers. The final 14-item scale had satisfactory validity and internal consistency. Cronbach α values for the 4 subscales ranged from 0.71 to 0.78. Total HPV-KS ranged from 0 to 8 (scale range, 0-10; 3.80 ± 2.12). The HBMS-HPVV is a valid and reliable instrument for measuring young Turkish women's beliefs and attitudes about HPV and its vaccination. Copyright © 2015 North American Society for Pediatric and Adolescent Gynecology. Published by Elsevier Inc. All rights reserved.
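
The internal-consistency statistic reported here, Cronbach's α, is computed from the ratio of summed item variances to total-score variance; a sketch on simulated Likert-style data (not the study's responses):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)       # variance of total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(4)
# hypothetical 4-item subscale: shared latent trait plus item noise,
# 302 respondents as in the study sample
trait = rng.normal(0, 1, (302, 1))
scores = trait + rng.normal(0, 1, (302, 4))
alpha = cronbach_alpha(scores)
print(round(alpha, 2))   # around 0.8 for this signal-to-noise ratio
```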

  13. A perspective on bridging scales and design of models using low-dimensional manifolds and data-driven model inference

    KAUST Repository

    Tegner, Jesper

    2016-10-04

    Systems in nature capable of collective behaviour are nonlinear, operating across several scales. Yet our ability to account for their collective dynamics differs in physics, chemistry and biology. Here, we briefly review the similarities and differences between mathematical modelling of adaptive living systems versus physico-chemical systems. We find that physics-based chemistry modelling and computational neuroscience have a shared interest in developing techniques for model reductions aiming at the identification of a reduced subsystem or slow manifold, capturing the effective dynamics. By contrast, as relations and kinetics between biological molecules are less characterized, current quantitative analysis under the umbrella of bioinformatics focuses on signal extraction, correlation, regression and machine-learning analysis. We argue that model reduction analysis and the ensuing identification of manifolds bridges physics and biology. Furthermore, modelling living systems presents deep challenges, such as how to reconcile rich molecular data with inherent modelling uncertainties (formalism, variables selection and model parameters). We anticipate a new generative data-driven modelling paradigm constrained by identified governing principles extracted from low-dimensional manifold analysis. The rise of a new generation of models will ultimately connect biology to quantitative mechanistic descriptions, thereby setting the stage for investigating the character of the model language and principles driving living systems.

  14. Modeling and Simulation of a lab-scale Fluidised Bed

    Directory of Open Access Journals (Sweden)

    Britt Halvorsen

    2002-04-01

    Full Text Available The flow behaviour of a lab-scale fluidised bed with a central jet has been simulated. The study has been performed with an in-house computational fluid dynamics (CFD) model named FLOTRACS-MP-3D. The CFD model is based on a multi-fluid Eulerian description of the phases, where the kinetic theory for granular flow forms the basis for turbulence modelling of the solid phases. A two-dimensional Cartesian co-ordinate system is used to describe the geometry. This paper discusses whether bubble formation and bed height are influenced by the coefficient of restitution, the drag model and the number of solid phases. Measurements of the same fluidised bed with a digital video camera are performed. Computational results are compared with the experimental results, and the discrepancies are discussed.

  15. Censored rainfall modelling for estimation of fine-scale extremes

    Directory of Open Access Journals (Sweden)

    D. Cross

    2018-01-01

    Full Text Available Reliable estimation of rainfall extremes is essential for drainage system design, flood mitigation, and risk quantification. However, traditional techniques lack physical realism and extrapolation can be highly uncertain. In this study, we improve the physical basis for short-duration extreme rainfall estimation by simulating the heavy portion of the rainfall record mechanistically using the Bartlett–Lewis rectangular pulse (BLRP) model. Mechanistic rainfall models have had a tendency to underestimate rainfall extremes at fine temporal scales. Despite this, the simple process representation of rectangular pulse models is appealing in the context of extreme rainfall estimation because it emulates the known phenomenology of rainfall generation. A censored approach to Bartlett–Lewis model calibration is proposed and performed for single-site rainfall from two gauges in the UK and Germany. Extreme rainfall estimation is performed for each gauge at the 5, 15, and 60 min resolutions, and considerations for censor selection are discussed.
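
The rectangular-pulse idea can be sketched with a single-level Poisson pulse process; the full Bartlett–Lewis model adds clustering of cells within storms, omitted here, and all rates below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

def poisson_rectangular_pulses(t_end, lam, mean_dur, mean_int, dt):
    """Aggregate a Poisson process of rectangular rain cells to a series.

    Cells arrive at rate lam (per hour) with exponential durations (h)
    and intensities (mm/h); returns rainfall depth per dt-hour interval.
    """
    n_steps = int(t_end / dt)
    rain = np.zeros(n_steps)                 # intensity at each step, mm/h
    n_cells = rng.poisson(lam * t_end)
    starts = rng.uniform(0, t_end, n_cells)
    durs = rng.exponential(mean_dur, n_cells)
    intens = rng.exponential(mean_int, n_cells)
    t = np.arange(n_steps) * dt
    for s, d, i in zip(starts, durs, intens):
        rain[(t >= s) & (t < s + d)] += i    # overlapping cells add up
    return rain * dt                         # depth per interval, mm

# 1000 h at 5-min resolution; cells: 0.1 / h, 30-min mean duration, 4 mm/h
series = poisson_rectangular_pulses(1000.0, 0.1, 0.5, 4.0, 5 / 60)
print(series.sum())                          # total depth, mm
```

A censored calibration would fit such a model's parameters against only the heavy portion of an observed record, ignoring values below the chosen censor threshold.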

  16. Current state of genome-scale modeling in filamentous fungi

    DEFF Research Database (Denmark)

    Brandl, Julian; Andersen, Mikael Rørdam

    2015-01-01

    The group of filamentous fungi contains important species used in industrial biotechnology for acid, antibiotics and enzyme production. Their unique lifestyle turns these organisms into a valuable genetic reservoir of new natural products and biomass degrading enzymes that has not been used to full...... testing them in vivo. The increasing availability of high quality models and molecular biological tools for manipulating filamentous fungi renders the model-guided engineering of these fungal factories possible with comprehensive metabolic networks. A typical fungal model contains on average 1138 unique...... metabolic reactions and 1050 ORFs, making them a vast knowledge-base of fungal metabolism. In the present review we focus on the current state as well as potential future applications of genome-scale models in filamentous fungi....

  17. Effects of input uncertainty on cross-scale crop modeling

    Science.gov (United States)

    Waha, Katharina; Huth, Neil; Carberry, Peter

    2014-05-01

    The quality of data on climate, soils and agricultural management in the tropics is in general low, or data are scarce, leading to uncertainty in process-based modeling of cropping systems. Process-based crop models are common tools for simulating crop yields and crop production in climate change impact studies, studies on mitigation and adaptation options or food security studies. Crop modelers are concerned about input data accuracy, as this, together with an adequate representation of plant physiology processes and the choice of model parameters, is a key factor for a reliable simulation. For example, assuming an error in measurements of air temperature, radiation and precipitation of ± 0.2°C, ± 2 % and ± 3 % respectively, Fodor & Kovacs (2005) estimate that this translates into an uncertainty of 5-7 % in yield and biomass simulations. In our study we seek to answer the following questions: (1) are there important uncertainties in the spatial variability of simulated crop yields on the grid-cell level displayed on maps, (2) are there important uncertainties in the temporal variability of simulated crop yields on the aggregated, national level displayed in time-series, and (3) how does the accuracy of different soil, climate and management information influence the simulated crop yields in two crop models designed for use at different spatial scales? The study will help to determine whether more detailed information improves the simulations and to advise model users on the uncertainty related to input data. We analyse the performance of the point-scale crop model APSIM (Keating et al., 2003) and the global scale crop model LPJmL (Bondeau et al., 2007) with different climate information (monthly and daily) and soil conditions (global soil map and African soil map) under different agricultural management (uniform and variable sowing dates) for the low-input maize-growing areas in Burkina Faso/West Africa. We test the models' response to different levels of input

  18. Scaling behavior of an airplane-boarding model.

    Science.gov (United States)

    Brics, Martins; Kaupužs, Jevgenijs; Mahnke, Reinhard

    2013-04-01

    An airplane-boarding model, introduced earlier by Frette and Hemmer [Phys. Rev. E 85, 011130 (2012)], is studied with the aim of determining precisely its asymptotic power-law scaling behavior for a large number of passengers N. Based on Monte Carlo simulation data for very large system sizes up to N=2^16=65536, we have analyzed numerically the scaling behavior of the mean boarding time and other related quantities. In analogy with critical phenomena, we have used appropriate scaling Ansätze, which include the leading term as some power of N (e.g., ∝N^α for the mean boarding time), as well as power-law corrections to scaling. Our results clearly show that α=1/2 holds with a very high numerical accuracy (α=0.5001±0.0001). This value deviates essentially from α≈0.69, obtained earlier by Frette and Hemmer from data within the range 2≤N≤16. Our results confirm the convergence of the effective exponent α_eff(N) to 1/2 at large N, as observed by Bernstein. Our analysis explains this effect. Namely, the effective exponent α_eff(N) varies from values of about 0.7 for small system sizes to the true asymptotic value 1/2 at N→∞, almost linearly in N^(-1/3) for large N. This means that the variation is caused by corrections to scaling, the leading correction-to-scaling exponent being θ≈1/3. We have also estimated other exponents: ν=1/2 for the mean number of passengers taking seats simultaneously in one time step, β=1 for the second moment of the boarding time t_b, and γ≈1/3 for its variance.
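
The drift of the effective exponent can be reproduced directly from a scaling Ansatz with a correction-to-scaling term; the amplitudes below are illustrative, not the paper's fitted values:

```python
import numpy as np

# Ansatz for the mean boarding time with a correction-to-scaling term:
# T(N) = A * N^(1/2) * (1 - b * N^(-1/3)); A and b are illustrative
A, b = 1.0, 0.6
N = 2.0 ** np.arange(2, 17)                    # N = 4 ... 65536
T = A * np.sqrt(N) * (1 - b * N ** (-1 / 3))

# effective exponent between successive sizes: slope of log T vs log N
alpha_eff = np.diff(np.log(T)) / np.diff(np.log(N))
print(round(alpha_eff[0], 2), round(alpha_eff[-1], 2))  # ≈ 0.67 -> ≈ 0.51
```

This is exactly the mechanism the abstract describes: a small-N fit sees the inflated effective exponent (~0.7), while the correction term decays like N^(-1/3) and the true α = 1/2 emerges at large N.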

  19. Turkish translation and adaptation of Champion's Health Belief Model Scales for breast cancer mammography screening.

    Science.gov (United States)

    Yilmaz, Meryem; Sayin, Yazile Yazici

    2014-07-01

    To examine the translation and adaptation process from English to Turkish and the validity and reliability of the Champion's Health Belief Model Scales for Mammography Screening. Its aims are (1) to provide data about and (2) to assess Turkish women's attitudes and behaviours towards mammography. The proportion of women who undergo mammography is low in Turkey. The Champion's Health Belief Model Scales for Mammography Screening-Turkish version can be helpful to determine Turkish women's health beliefs, particularly about mammography. A cross-sectional design was used to collect survey data from Turkish women: classical measurement method. The Champion's Health Belief Model Scales for Mammography Screening was translated from English to Turkish and then back-translated into English. Later, the meaning and clarity of the scale items were evaluated by a bilingual group representing the culture of the target population. Finally, the tool was evaluated by two bilingual professional researchers in terms of content validity, translation validity and psychometric estimates of validity and reliability. The analysis included a total of 209 Turkish women. The validity of the scale was confirmed by confirmatory factor analysis and criterion-related validity testing. The Champion's Health Belief Model Scales for Mammography Screening aligned to four factors that were coherent and relatively independent of each other. There was a statistically significant relationship among all of the subscale items, with positive, high item-total correlations and a high Cronbach's α. The scale has strong stability over time: the Champion's Health Belief Model Scales for Mammography Screening demonstrated acceptable preliminary values of reliability and validity. The Champion's Health Belief Model Scales for Mammography Screening is both a reliable and valid instrument that can be useful in measuring the health beliefs of Turkish women. It can be used to provide data

  20. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Science.gov (United States)

    Tartakovsky, G. D.; Tartakovsky, A. M.; Scheibe, T. D.; Fang, Y.; Mahadevan, R.; Lovley, D. R.

    2013-09-01

    Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacteria Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport is governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reactions rates using empirically-derived rate formulations such as the Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model
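
For comparison, a lumped dual-Monod rate law of the type the study contrasts with the genome-scale model looks like the following; all constants are illustrative, not the study's fitted values:

```python
def dual_monod_rate(v_max, s_donor, k_donor, s_acceptor, k_acceptor):
    """Dual-Monod kinetics: rate limited by both electron donor (acetate)
    and electron acceptor (Fe(III)), each via a half-saturation term."""
    return (v_max
            * s_donor / (k_donor + s_donor)
            * s_acceptor / (k_acceptor + s_acceptor))

# hypothetical concentrations and constants (mM, per-hour maximum rate)
r = dual_monod_rate(v_max=1.0, s_donor=1.0, k_donor=0.1,
                    s_acceptor=5.0, k_acceptor=1.0)
print(round(r, 3))   # ≈ 0.758, near v_max since both substrates are ample
```

Such a lumped rate has fixed stoichiometry and yield; the genome-scale model instead lets pathway fluxes, yield and Fe(III) stoichiometry emerge from the metabolic network, which is why the fitted Monod parameters had to be adjusted to match it.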

  1. Pore-scale simulation of microbial growth using a genome-scale metabolic model: Implications for Darcy-scale reactive transport

    Energy Technology Data Exchange (ETDEWEB)

    Tartakovsky, Guzel D.; Tartakovsky, Alexandre M.; Scheibe, Timothy D.; Fang, Yilin; Mahadevan, Radhakrishnan; Lovley, Derek R.

    2013-09-07

Recent advances in microbiology have enabled the quantitative simulation of microbial metabolism and growth based on genome-scale characterization of metabolic pathways and fluxes. We have incorporated a genome-scale metabolic model of the iron-reducing bacterium Geobacter sulfurreducens into a pore-scale simulation of microbial growth based on coupling of iron reduction to oxidation of a soluble electron donor (acetate). In our model, fluid flow and solute transport are governed by a combination of the Navier-Stokes and advection-diffusion-reaction equations. Microbial growth occurs only on the surface of soil grains where solid-phase mineral iron oxides are available. Mass fluxes of chemical species associated with microbial growth are described by the genome-scale microbial model, implemented using a constraint-based metabolic model, and provide the Robin-type boundary condition for the advection-diffusion equation at soil grain surfaces. Conventional models of microbially-mediated subsurface reactions use a lumped reaction model that does not consider individual microbial reaction pathways, and describe reaction rates using empirically-derived rate formulations such as Monod-type kinetics. We have used our pore-scale model to explore the relationship between genome-scale metabolic models and Monod-type formulations, and to assess the manifestation of pore-scale variability (microenvironments) in terms of apparent Darcy-scale microbial reaction rates. The genome-scale model predicted lower biomass yield, and different stoichiometry for iron consumption, in comparison to prior Monod formulations based on energetics considerations. 
We were able to fit an equivalent Monod model, by modifying the reaction stoichiometry and biomass yield coefficient, that could effectively match results of the genome-scale simulation of microbial behaviors under excess nutrient conditions, but predictions of the fitted Monod model deviated from those of the genome-scale model under
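The dual-Monod rate law that the abstract contrasts with the genome-scale model has a simple closed form; a minimal Python sketch (parameter names and values are illustrative, not the paper's calibrated coefficients):

```python
def dual_monod_rate(mu_max, acetate, k_acetate, fe3, k_fe3):
    """Dual-Monod growth rate: mu_max scaled by saturation terms for the
    electron donor (acetate) and the electron acceptor (Fe(III)).
    Concentrations and half-saturation constants share the same units."""
    return mu_max * (acetate / (k_acetate + acetate)) * (fe3 / (k_fe3 + fe3))

# Under excess nutrients both saturation terms approach 1 and the rate
# approaches mu_max -- the regime in which a fitted Monod model could
# reproduce the genome-scale simulation.
```

Fitting such a model to genome-scale output amounts to choosing mu_max, the half-saturation constants, and the biomass yield so the macroscopic rates match.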

  2. Chronic hyperglycemia affects bone metabolism in adult zebrafish scale model.

    Science.gov (United States)

    Carnovali, Marta; Luzi, Livio; Banfi, Giuseppe; Mariotti, Massimo

    2016-12-01

Type II diabetes mellitus is a metabolic disease characterized by chronic hyperglycemia that induces other pathologies, including diabetic retinopathy and bone disease. The mechanisms implicated in bone alterations induced by type II diabetes mellitus have been debated for years and are not yet clear, because other factors involved mask bone mineral density alterations. Nevertheless, it is well known that chronic hyperglycemia affects bone health, causing fragility, reduced mechanical strength, and an increased propensity to fracture because of impaired bone matrix microstructure and aberrant bone cell function. Adult Danio rerio (zebrafish) represents a powerful model to study glucose and bone metabolism. The aim of this study was therefore to evaluate the bone effects of chronic hyperglycemia in a new type II diabetes mellitus zebrafish model created by glucose administration in the water. Fish blood glucose levels were monitored in time-course experiments, and basal glycemia was found to be increased. After 1 month of treatment, the morphology of the retinal blood vessels showed abnormalities resembling human diabetic retinopathy. Adult bone metabolism was evaluated in fish using the scales as a read-out system. The scales of glucose-treated fish did not deposit new mineralized matrix and showed bone resorption lacunae associated with intense osteoclast activity. In addition, hyperglycemic fish scales showed a significant decrease in alkaline phosphatase activity and an increase in tartrate-resistant acid phosphatase activity, in association with alterations in other bone-specific markers. These data indicate an imbalance in bone metabolism, which leads to the osteoporotic-like phenotype visualized through scale mineral matrix staining. The zebrafish model of hyperglycemic damage can contribute to elucidating in vivo the molecular mechanisms of the metabolic changes that influence bone tissue regulation in human diabetic patients.

  3. Scaling predictive modeling in drug development with cloud computing.

    Science.gov (United States)

    Moghadam, Behrooz Torabi; Alvarsson, Jonathan; Holm, Marcus; Eklund, Martin; Carlsson, Lars; Spjuth, Ola

    2015-01-26

Growing data sets and the increasing time required for analysis are hampering predictive modeling in drug discovery. Model building can be carried out on high-performance computer clusters, but these can be expensive to purchase and maintain. We have evaluated ligand-based modeling on cloud computing resources where computations are parallelized and run on the Amazon Elastic Cloud. We trained models on open data sets of varying sizes for the end points logP and Ames mutagenicity and compared the results with model building parallelized on a traditional high-performance computing cluster. We show that while high-performance computing results in faster model building, the use of cloud computing resources is feasible for large data sets and scales well within cloud instances. An additional advantage of cloud computing is that the costs of predictive models can be easily quantified, and a choice can be made between speed and economy. The easy access to computational resources with no up-front investment makes cloud computing an attractive alternative for scientists, especially those without access to a supercomputer, and our study shows that it enables cost-efficient modeling of large data sets on demand within a reasonable time.

  4. Klobuchar-like Ionospheric Model for Different Scales Areas

    Directory of Open Access Journals (Sweden)

    LIU Chen

    2017-05-01

Full Text Available Klobuchar is currently the most widely used ionospheric model for positioning with single-frequency terminals, and various refined versions of it have been proposed to achieve ever higher positioning accuracy. The variation of nighttime TEC (total electron content) with local time and the variation of TEC with latitude have been analyzed using GIMs (global ionospheric maps). After summarizing the widely applied model refinement schemes, we propose in this paper a Klobuchar-like model for regions of different scales. The Klobuchar-like, 14-parameter Klobuchar, and 8-parameter Klobuchar models were established for small, large, and global regions from GIMs for different solar activity periods and seasons, respectively. The Klobuchar-like models, with correction rates of 92.96%, 91.55% and 72.67% in the small, large, and global regions respectively, have higher correction rates than the 14-parameter Klobuchar, 8-parameter Klobuchar, and GPS Klobuchar models, which verifies the effectiveness and practicability of the Klobuchar-like model.
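For context, the baseline 8-parameter Klobuchar correction the refined models build on computes a half-cosine daytime delay from broadcast coefficients. A simplified vertical-delay sketch in Python (the slant/obliquity geometry of the full broadcast algorithm is omitted here, and the coefficient values in the test are hypothetical):

```python
import math

def klobuchar_zenith_delay(alpha, beta, phi_m, t_local):
    """Vertical ionospheric delay (seconds) at the pierce point, following
    the standard 8-parameter GPS Klobuchar form without the slant factor.
    alpha, beta : four broadcast coefficients each
    phi_m       : geomagnetic latitude of the pierce point, semicircles
    t_local     : local time at the pierce point, seconds of day"""
    amp = sum(a * phi_m ** n for n, a in enumerate(alpha))   # cosine amplitude
    per = sum(b * phi_m ** n for n, b in enumerate(beta))    # cosine period
    amp = max(amp, 0.0)
    per = max(per, 72000.0)
    x = 2.0 * math.pi * (t_local - 50400.0) / per  # phase, peak at 14:00 LT
    if abs(x) < 1.57:
        # truncated cosine series for the daytime bulge
        return 5e-9 + amp * (1.0 - x * x / 2.0 + x ** 4 / 24.0)
    return 5e-9  # constant night-time bias
```

Regional refinements such as the 14-parameter variant adjust the amplitude and period terms; the night-time constant and peak time are among the quantities the abstract's TEC analysis targets.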

  5. Geostatistical Modeling of Malaria Endemicity using Serological Indicators of Exposure Collected through School Surveys

    Science.gov (United States)

    Ashton, Ruth A.; Kefyalew, Takele; Rand, Alison; Sime, Heven; Assefa, Ashenafi; Mekasha, Addis; Edosa, Wasihun; Tesfaye, Gezahegn; Cano, Jorge; Teka, Hiwot; Reithinger, Richard; Pullan, Rachel L.; Drakeley, Chris J.; Brooker, Simon J.

    2015-01-01

Ethiopia has a diverse ecology and geography, resulting in spatial and temporal variation in malaria transmission. Evidence-based strategies are thus needed to monitor transmission intensity and target interventions. A purposive selection of dried blood spots collected during cross-sectional school-based surveys in Oromia Regional State, Ethiopia, was tested for the presence of antibodies against Plasmodium falciparum and P. vivax antigens. Spatially explicit binomial models of seroprevalence were created for each species using a Bayesian framework, and used to predict seroprevalence at 5 km resolution across Oromia. School seroprevalence showed a wider range than microscopy prevalence for both P. falciparum (0–50% versus 0–12.7%) and P. vivax (0–53.7% versus 0–4.5%). The P. falciparum model incorporated environmental predictors and spatial random effects, while P. vivax seroprevalence first-order trends were not adequately explained by environmental variables, and a spatial smoothing model was developed. This is the first demonstration of serological indicators being used to detect large-scale heterogeneity in malaria transmission using samples from cross-sectional school-based surveys. The findings support the incorporation of serological indicators into periodic large-scale surveillance such as Malaria Indicator Surveys, with particular utility for low-transmission and elimination settings. PMID:25962770

  6. Creating a collection development model for a marine science library

    OpenAIRE

    Robinson, Carla

    2011-01-01

The purpose of this article is to update and build on the approximately 10,000-item collection of the Harbor Branch Oceanographic Institute Library. This article will present a history of Harbor Branch and its library, and a literature review outlining the collection development methods of other marine science libraries and academic libraries. The article will relate brief histories of three marine science libraries. A comparative table is constructed to compare Harbor Branch Library with thr...

  7. Quantification of structural uncertainties in multi-scale models; case study of the Lublin Basin, Poland

    Science.gov (United States)

    Małolepszy, Zbigniew; Szynkaruk, Ewa

    2015-04-01

The multiscale static modeling of the regional structure of the Lublin Basin is carried out at the Polish Geological Institute, in accordance with the principles of integrated 3D geological modelling. The model is based on all available geospatial data from Polish digital databases and analogue archives. The mapped regional structure covers an area of 260×80 km located between Warsaw and the Polish-Ukrainian border, along the NW-SE-trending margin of the East European Craton. Within the basin, Paleozoic beds, with coal-bearing Carboniferous and older formations containing hydrocarbons and unconventional prospects, are covered unconformably by Permo-Mesozoic and younger rocks. The vertical extent of the regional model is set from the topographic surface to 6000 m ssl and at the bottom includes some Proterozoic crystalline formations of the craton. The project focuses on the internal consistency of the models built at different scales - from basin (small) scale to field (large) scale. The models, nested in a common structural framework, are being constructed with regional geological knowledge, ensuring a smooth transition in 3D model resolution and amount of geological detail. A major challenge of the multiscale approach to subsurface modelling is the assessment and consistent quantification of the various types of geological uncertainty tied to the various sub-model scales. The decreasing amount of information with depth and, particularly, the very limited data collected below exploration targets, as well as the accuracy and quality of data, have the most critical impact on the modelled structure. In the deeper levels of the Lublin Basin model, seismic interpretation of 2D surveys is sparsely tied to well data. Therefore, time-to-depth conversion carries one of the major uncertainties in the modeling of structures, especially below 3000 m ssl. Furthermore, as all models at different scales are based on the same dataset, we must deal with different levels of generalization of geological structures. 
The

  8. Electron-scale reduced fluid models with gyroviscous effects

    Science.gov (United States)

    Passot, T.; Sulem, P. L.; Tassi, E.

    2017-08-01

Reduced fluid models for collisionless plasmas including electron inertia and finite Larmor radius corrections are derived for scales ranging from the ion to the electron gyroradii. Based either on pressure balance or on the incompressibility of the electron fluid, they respectively capture kinetic Alfvén waves (KAWs) or whistler waves (WWs), and can provide suitable tools for reconnection and turbulence studies. Both isothermal regimes and Landau fluid closures permitting anisotropic pressure fluctuations are considered. For small values of the electron beta parameter $\beta_e$, a perturbative computation of the gyroviscous force valid at scales comparable to the electron inertial length is performed at order $O(\beta_e)$, which requires second-order contributions in a scale expansion. Comparisons with kinetic theory are performed in the linear regime. The spectrum of transverse magnetic fluctuations for strong and weak turbulence energy cascades is also phenomenologically predicted for both types of waves. In the case of moderate ion to electron temperature ratio, a new regime of KAW turbulence at scales smaller than the electron inertial length is obtained, where the magnetic energy spectrum decays like $k_\perp^{-13/3}$, thus faster than the $k_\perp^{-11/3}$ spectrum of WW turbulence.

  9. The multi-scale aerosol-climate model PNNL-MMF: model description and evaluation

    Directory of Open Access Journals (Sweden)

    M. Wang

    2011-03-01

    Full Text Available Anthropogenic aerosol effects on climate produce one of the largest uncertainties in estimates of radiative forcing of past and future climate change. Much of this uncertainty arises from the multi-scale nature of the interactions between aerosols, clouds and large-scale dynamics, which are difficult to represent in conventional general circulation models (GCMs. In this study, we develop a multi-scale aerosol-climate model that treats aerosols and clouds across different scales, and evaluate the model performance, with a focus on aerosol treatment. This new model is an extension of a multi-scale modeling framework (MMF model that embeds a cloud-resolving model (CRM within each grid column of a GCM. In this extension, the effects of clouds on aerosols are treated by using an explicit-cloud parameterized-pollutant (ECPP approach that links aerosol and chemical processes on the large-scale grid with statistics of cloud properties and processes resolved by the CRM. A two-moment cloud microphysics scheme replaces the simple bulk microphysics scheme in the CRM, and a modal aerosol treatment is included in the GCM. With these extensions, this multi-scale aerosol-climate model allows the explicit simulation of aerosol and chemical processes in both stratiform and convective clouds on a global scale.

Simulated aerosol budgets in this new model are in the ranges of other model studies. Simulated gas and aerosol concentrations are in reasonable agreement with observations (within a factor of 2 in most cases), although the model underestimates black carbon concentrations at the surface by a factor of 2–4. Simulated aerosol size distributions are in reasonable agreement with observations in the marine boundary layer and in the free troposphere, while the model underestimates the accumulation mode number concentrations near the surface, and overestimates the accumulation mode number concentrations in the middle and upper free troposphere by a factor

  10. Crises and Collective Socio-Economic Phenomena: Simple Models and Challenges

    Science.gov (United States)

    Bouchaud, Jean-Philippe

    2013-05-01

Financial and economic history is strewn with bubbles and crashes, booms and busts, crises and upheavals of all sorts. Understanding the origin of these events is arguably one of the most important problems in economic theory. In this paper, we review recent efforts to include heterogeneities and interactions in models of decision. We argue that the so-called Random Field Ising model (RFIM) provides a unifying framework to account for many collective socio-economic phenomena that lead to sudden ruptures and crises. We discuss different models that can capture potentially destabilizing self-referential feedback loops, induced either by herding, i.e. reference to peers, or trending, i.e. reference to the past, and that account for some of the phenomenology missing in the standard models. We discuss some empirically testable predictions of these models, for example robust signatures of RFIM-like herding effects, or the logarithmic decay of spatial correlations of voting patterns. One of the most striking results, inspired by statistical physics methods, is that Adam Smith's invisible hand can fail badly at solving simple coordination problems. We also insist on the issue of time-scales, which can be extremely long in some cases and prevent socially optimal equilibria from being reached. As a theoretical challenge, the study of so-called detailed-balance-violating decision rules is needed to decide whether conclusions based on current models (which all assume detailed balance) are indeed robust and generic.
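The RFIM mechanism the abstract invokes can be sketched in a few lines: each agent's binary choice follows the sign of an idiosyncratic field plus an imitation term proportional to the average choice. A toy mean-field iteration (field values, coupling, and incentive are illustrative, not taken from the paper):

```python
def rfim_equilibrium(fields, J, F, iters=200):
    """Mean-field Random Field Ising dynamics for binary decisions.
    fields : idiosyncratic preferences f_i (the 'random field' heterogeneity)
    J      : imitation strength (coupling to the average choice m)
    F      : common external incentive
    Each agent picks s_i = +1 if f_i + J*m + F > 0 else -1; iterate until
    the average choice m settles, then return it."""
    s = [1 if f + F > 0 else -1 for f in fields]
    for _ in range(iters):
        m = sum(s) / len(s)
        s = [1 if f + J * m + F > 0 else -1 for f in fields]
    return sum(s) / len(s)

# With strong imitation J, sweeping the incentive F produces an abrupt
# jump of the equilibrium -- the crisis-like rupture discussed above.
```

Sweeping F up and then down at large J also exhibits hysteresis, one of the "robust signatures" of RFIM-like herding mentioned in the abstract.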

  11. Nonlinear Synapses for Large-Scale Models: An Efficient Representation Enables Complex Synapse Dynamics Modeling in Large-Scale Simulations

    Directory of Open Access Journals (Sweden)

Eric Hu

    2015-09-01

Full Text Available Chemical synapses comprise a wide collection of intricate signaling pathways involving complex dynamics. These mechanisms are often reduced to simple spikes or exponential representations in order to enable computer simulations at higher spatial levels of complexity. However, these representations cannot capture important nonlinear dynamics found in synaptic transmission. Here, we propose an input-output (IO) synapse model capable of generating complex nonlinear dynamics while maintaining low computational complexity. This IO synapse model is an extension of a detailed mechanistic glutamatergic synapse model, capturing the input-output relationships of the mechanistic model using the Volterra functional power series. We demonstrate that the IO synapse model is able to successfully track the nonlinear dynamics of the synapse up to the third order with high accuracy. We also evaluate the accuracy of the IO synapse model at different input frequencies and compare its performance with that of kinetic models in compartmental neuron models. Our results demonstrate that the IO synapse model is capable of efficiently replicating the complex nonlinear dynamics represented in the original mechanistic model, and they provide a method for replicating complex and diverse synaptic transmission within neuron network simulations.
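The Volterra functional power series underlying the IO model can be illustrated with a discrete truncation: the output is a linear convolution with a first-order kernel plus a quadratic term over pairs of past inputs. A generic second-order sketch (the kernels here are arbitrary, not the authors' identified synaptic kernels):

```python
def volterra_response(x, k1, k2):
    """Discrete second-order Volterra series:
    y[n] = sum_i k1[i]*x[n-i] + sum_{i,j} k2[i][j]*x[n-i]*x[n-j],
    with terms dropped where the input index would be negative."""
    M = len(k1)
    y = []
    for n in range(len(x)):
        lin = sum(k1[i] * x[n - i] for i in range(M) if n - i >= 0)
        quad = sum(k2[i][j] * x[n - i] * x[n - j]
                   for i in range(M) for j in range(M)
                   if n - i >= 0 and n - j >= 0)
        y.append(lin + quad)
    return y
```

The quadratic (and, in the paper, cubic) kernels are what let the expansion capture nonlinear interactions between input spikes that a plain exponential synapse cannot.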

  12. Building-Scale Atmospheric Modeling for Understanding and Anticipating Environmental Risks to Urban Populations

    Science.gov (United States)

    Warner, T. T.; Swerdlin, S. P.; Chen, F.; Hayden, M.

    2009-05-01

The innovative use of Computational Fluid-Dynamics (CFD) models to define the building- and street-scale atmospheric environment in urban areas can benefit society in a number of ways. Design criteria used by architectural climatologists, who help plan the livable cities of the future, require information about air movement within street canyons for different seasons and weather regimes. Understanding indoor urban air-quality problems and their mitigation, especially for older buildings, requires data on air movement and associated dynamic pressures near buildings. Learning how heat waves and anthropogenic forcing in cities collectively affect the health of vulnerable residents is a problem in building thermodynamics, human behavior, and neighborhood-scale and street-canyon-scale atmospheric sciences. And, predicting the movement of plumes of hazardous material released in urban industrial or transportation accidents requires detailed information about vertical and horizontal air motions in the street canyons. These challenges are closer to being addressed because of advances in CFD modeling, the coupling of CFD models with models of indoor air motion and air quality, and the coupling of CFD models with mesoscale weather-prediction models. This paper will review some of the new knowledge and technologies that are being developed to meet these atmospheric-environment needs of our growing urban populations.

  13. Analysis and modeling of scale-invariance in plankton abundance

    CERN Document Server

    Pelletier, J D

    1996-01-01

The power spectrum, $S$, of horizontal transects of plankton abundance is often observed to have a power-law dependence on wavenumber, $k$, with exponent close to $-2$: $S(k)\propto k^{-2}$ over a wide range of scales. I present power spectral analyses of aircraft lidar measurements of phytoplankton abundance from scales of 1 to 100 km. A power spectrum $S(k)\propto k^{-2}$ is obtained. As a model for this observation, I consider a stochastic growth equation where the rate of change of plankton abundance is determined by turbulent mixing, modeled as a diffusion process in two dimensions, and exponential growth with a stochastically variable net growth rate representing a fluctuating environment. The model predicts a lognormal distribution of abundance and a power spectrum of horizontal transects $S(k)\propto k^{-1.8}$, close to the observed spectrum. The model equation predicts that the power spectrum of variations in abundance in time at a point in space is $S(f)\propto f^{-1.5}$ (where $f$ is the frequency...
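The $k^{-2}$ diagnosis can be reproduced on synthetic data: build a transect whose Fourier amplitudes fall off as $k^{-1}$ (so power falls off as $k^{-2}$) and fit the log-log slope of its periodogram. A stdlib-only Python sketch (direct $O(N^2)$ DFT, adequate for small $N$; not the paper's analysis code):

```python
import cmath
import math

def periodogram(x):
    """Power spectrum via direct DFT, positive wavenumbers 1..N/2-1."""
    N = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N)
                    for n in range(N))) ** 2 / N
            for k in range(1, N // 2)]

def spectral_slope(x):
    """Least-squares slope of log S(k) versus log k."""
    S = periodogram(x)
    pts = [(math.log(k), math.log(s)) for k, s in enumerate(S, start=1) if s > 0]
    n = len(pts)
    mx = sum(u for u, _ in pts) / n
    my = sum(v for _, v in pts) / n
    num = sum((u - mx) * (v - my) for u, v in pts)
    den = sum((u - mx) ** 2 for u, _ in pts)
    return num / den

# Synthetic transect: Fourier amplitudes ~ k^-1, hence S(k) ~ k^-2.
N = 256
x = [sum(math.cos(2 * math.pi * k * n / N + 0.7 * k) / k
         for k in range(1, N // 2))
     for n in range(N)]
```

Applied to `x`, `spectral_slope` recovers an exponent close to $-2$; on real lidar transects the fit is restricted to the scaling range rather than all wavenumbers.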

  14. Multi-scale modeling of carbon capture systems

    Energy Technology Data Exchange (ETDEWEB)

    Kress, Joel David [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-03-03

The development and scale-up of cost-effective carbon capture processes is of paramount importance to enable the widespread deployment of these technologies to significantly reduce greenhouse gas emissions. The U.S. Department of Energy initiated the Carbon Capture Simulation Initiative (CCSI) in 2011 with the goal of developing a computational toolset that would enable industry to more effectively identify, design, scale up, operate, and optimize promising concepts. The first half of the presentation will introduce the CCSI Toolset, consisting of basic data submodels, steady-state and dynamic process models, process optimization and uncertainty quantification tools, an advanced dynamic process control framework, and high-resolution filtered computational fluid dynamics (CFD) submodels. The second half of the presentation will describe a high-fidelity model of a mesoporous silica supported, polyethylenimine (PEI)-impregnated solid sorbent for CO2 capture. The sorbent model includes a detailed treatment of transport and amine-CO2-H2O interactions based on quantum chemistry calculations. Using a Bayesian approach for uncertainty quantification, we calibrate the sorbent model to thermogravimetric analysis (TGA) data.

  15. European Continental Scale Hydrological Model, Limitations and Challenges

    Science.gov (United States)

    Rouholahnejad, E.; Abbaspour, K.

    2014-12-01

The pressures on water resources due to increasing levels of societal demand, increasing conflicts of interest, and uncertainties with regard to freshwater availability create challenges for water managers and policymakers in many parts of Europe. At the same time, climate change adds a new level of pressure and uncertainty with regard to freshwater supplies. On the other hand, the small-scale sectoral structure of water management is now reaching its limits. The integrated management of water in basins requires a new level of consideration, where water bodies are viewed in the context of the whole river system and managed as a unit within their basins. In this research we present the limitations and challenges of modelling the hydrology of the European continent. The challenges include: data availability at the continental scale and the use of globally available data, stream-gauge data quality and its misleading impact on model calibration, calibration of a large-scale distributed model, uncertainty quantification, and computation time. We describe how to avoid over-parameterization in the calibration process and introduce a parallel processing scheme to overcome long computation times. We used the Soil and Water Assessment Tool (SWAT) program as an integrated hydrology and crop growth simulator to model the water resources of the European continent. Different components of water resources are simulated, and crop yield and water quality are considered at the Hydrological Response Unit (HRU) level. The water resources are quantified at the subbasin level with monthly time intervals for the period 1970-2006. The use of a large-scale, high-resolution water resources model enables consistent and comprehensive examination of integrated system behavior through physically-based, data-driven simulation and provides an overall picture of the temporal and spatial distribution of water resources across the continent. 
The calibrated model and results provide information support to the European Water

  16. Length scale effects and multiscale modeling of thermally induced phase transformation kinetics in NiTi SMA

    Science.gov (United States)

    Frantziskonis, George N.; Gur, Sourav

    2017-06-01

Thermally induced phase transformation in NiTi shape memory alloys (SMAs) shows strong size and shape dependence, collectively termed length scale effects, at the nano- to micrometer scales, which has important implications for the design and use of devices and structures at such scales. This paper, based on a recently developed multiscale model that utilizes molecular dynamics (MD) simulations at small scales and MD-verified phase field (PhF) simulations at larger scales, reports results on specific length scale effects, i.e. length scale effects in martensite phase fraction (MPF) evolution, in transformation temperatures (martensite and austenite start and finish), and in the thermally cyclic transformation between the austenitic and martensitic phases. The multiscale study identifies saturation points for length scale effects and studies, for the first time, the length scale effect on the kinetics (i.e. developed internal strains) in the B19′ phase during phase transformation. The major part of the work addresses small-scale single crystals in specific orientations. However, the multiscale method is used in a unique and novel way to indirectly study length scale and grain size effects on evolution kinetics in polycrystalline NiTi, and to compare the simulation results to experiments. The interplay of the grain size and the length scale effect on the thermally induced MPF evolution is also shown in the present study. Finally, the multiscale coupling results are employed to improve phenomenological material models for NiTi SMAs.

  17. Customized Mobile Apps: Improving data collection methods in large-scale field works in Finnish Lapland

    Science.gov (United States)

    Kupila, Juho

    2017-04-01

Since the 1990s, a huge amount of data related to groundwater and soil has been collected in several regional projects in Finland. The EU-funded project "The coordination of groundwater protection and aggregates industry in Finnish Lapland, phase II" started in July 2016 and covers the last unstudied areas in these projects in Finland. The project is carried out by the Geological Survey of Finland (GTK), the University of Oulu, and the Finnish Environment Institute, and its main aim is to reconcile groundwater protection with the extractive use of soil resources in the Lapland area. As before, several kinds of studies are carried out throughout this three-year research and development project. These include e.g. drilling with installation of groundwater observation wells, GPR surveys, and many kinds of point-type observations, such as sampling and general mapping in the field. Due to the size of the study area (over 80,000 km2, about one quarter of the total area of Finland), improvement of the field work methods has become essential. For general field observations, GTK has developed specific mobile applications for Android devices. With these apps, data can be easily collected, for example from a certain groundwater area, and then uploaded directly to GTK's database. Collected information may include sampling data, photos, layer observations, groundwater data etc., all linked to the current GPS location. New data is also easily available for post-processing. In this project the benefits of these applications will be field-tested; e.g. ergonomics, economy, and usability in general will be taken into account and compared with other data collection methods, such as working with heavy fieldwork laptops. Although these apps are designed for use in GTK's projects, they are free to download from Google Play for anyone interested. The Geological Survey of Finland has the main role in this project, with support from national and local authorities and stakeholders. 
Project is funded

  18. From micro-scale 3D simulations to macro-scale model of periodic porous media

    Science.gov (United States)

    Crevacore, Eleonora; Tosco, Tiziana; Marchisio, Daniele; Sethi, Rajandrea; Messina, Francesca

    2015-04-01

In environmental engineering, the transport of colloidal suspensions in porous media is studied to understand the fate of potentially harmful nano-particles and to design new remediation technologies. In this perspective, averaging techniques applied to micro-scale numerical simulations are a powerful tool to extrapolate accurate macro-scale models. Choosing two simplified packing configurations of soil grains and starting from a single elementary cell (module), it is possible to take advantage of the periodicity of the structures to reduce the computational cost of full 3D simulations. Steady-state flow simulations for an incompressible fluid in the laminar regime are implemented. Transport simulations are based on the pore-scale advection-diffusion equation, which can be enriched by introducing the Stokes velocity (to account for gravity) and the interception mechanism. Simulations are carried out on a domain composed of several elementary modules, which serve as control volumes in a finite-volume scheme for the macro-scale model. The periodicity of the medium implies the periodicity of the flow field, which is of great importance during the up-scaling procedure, allowing relevant simplifications. Micro-scale numerical data are treated in order to compute the mean concentration (volume and area averages) and fluxes on each module. The simulation results are used to compare the micro-scale averaged equation to the integral form of the macroscopic one, making a distinction between those terms that can be computed exactly and those for which a closure is needed. Of particular interest is the investigation of the origin of macro-scale terms such as dispersion and tortuosity, and their description in terms of known micro-scale quantities. Traditionally, many simplifications are introduced to study colloidal transport, such as ultra-simplified geometries that account for a single collector. 
Gradual removal of such hypothesis leads to a

  19. Research on large-scale wind farm modeling

    Science.gov (United States)

    Ma, Longfei; Zhang, Baoqun; Gong, Cheng; Jiao, Ran; Shi, Rui; Chi, Zhongjun; Ding, Yifeng

    2017-01-01

Due to the intermittent and fluctuating nature of wind energy, a large-scale wind farm connected to the grid has a much greater impact on the power system than a traditional power plant. It is therefore necessary to establish an effective wind farm model to simulate and analyze the influence wind farms have on the grid, as well as the transient characteristics of the wind turbines when the grid is at fault; this in turn requires an effective wind turbine generator (WTG) model. As the doubly-fed VSCF wind turbine is currently the mainstream wind turbine type, this article first reviews the research progress on doubly-fed VSCF wind turbines and then describes the detailed process of building the model. It then surveys common wind farm modeling methods and points out the problems encountered. As WAMS is widely used in power systems, online parameter identification of wind farm models based on measured output characteristics of wind farms becomes possible; the article concludes with an interpretation of this new idea of identification-based modeling of large wind farms, which can be realized by two concrete methods.

  20. A Goddard Multi-Scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2010-01-01

    A multi-scale modeling system with unified physics has been developed at NASA Goddard Space Flight Center (GSFC). The system consists of an MMF, the coupled NASA Goddard finite-volume GCM (fvGCM) and Goddard Cumulus Ensemble model (GCE, a CRM); the state-of-the-art Weather Research and Forecasting model (WRF); and the stand-alone GCE. These models can share the same microphysical schemes, radiation (including explicitly calculated cloud optical properties), and surface models that have been developed, improved and tested for different environments. In this talk, I will present: (1) a brief review of the GCE model and its applications to the impact of aerosols on deep precipitation processes, (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs), and (3) a discussion of the Goddard WRF version (its developments and applications). We are also performing inline tracer calculations to understand the physical processes (i.e., the boundary layer and each quadrant in the boundary layer) related to the development and structure of hurricanes and mesoscale convective systems. In addition, high-resolution (2-km spatial and 1-minute temporal) visualization of the model results will be presented.

  1. Relating the CMSSM and SUGRA models with GUT scale and Super-GUT scale Supersymmetry Breaking

    CERN Document Server

    Dudas, Emilian; Mustafayev, Azar; Olive, Keith A.

    2012-01-01

    While the constrained minimal supersymmetric standard model (CMSSM) with universal gaugino masses, $m_{1/2}$, scalar masses, $m_0$, and A-terms, $A_0$, defined at some high energy scale (usually taken to be the GUT scale) is motivated by general features of supergravity models, it does not carry all of the constraints imposed by minimal supergravity (mSUGRA). In particular, the CMSSM does not impose a relation between the trilinear and bilinear soft supersymmetry breaking terms, $B_0 = A_0 - m_0$, nor does it impose the relation between the soft scalar masses and the gravitino mass, $m_0 = m_{3/2}$. As a consequence, $\\tan \\beta$ is computed given values of the other CMSSM input parameters. By considering a Giudice-Masiero (GM) extension to mSUGRA, one can introduce new parameters to the K\\"ahler potential which are associated with the Higgs sector and recover many of the standard CMSSM predictions. However, depending on the value of $A_0$, one may have a gravitino or a neutralino dark matter candidate. We al...

  2. Light moduli in almost no-scale models

    Energy Technology Data Exchange (ETDEWEB)

    Buchmueller, Wilfried; Moeller, Jan; Schmidt, Jonas

    2009-09-15

    We discuss the stabilization of the compact dimension for a class of five-dimensional orbifold supergravity models. Supersymmetry is broken by the superpotential on a boundary. Classically, the size L of the fifth dimension is undetermined, with or without supersymmetry breaking, and the effective potential is of no-scale type. The size L is fixed by quantum corrections to the Kähler potential, the Casimir energy and Fayet-Iliopoulos (FI) terms localized at the boundaries. For an FI scale of order M_GUT, as in heterotic string compactifications with anomalous U(1) symmetries, one obtains L ∝ 1/M_GUT. A small mass is predicted for the scalar fluctuation associated with the fifth dimension, m_ρ

  3. Density Functional Theory and Materials Modeling at Atomistic Length Scales

    Directory of Open Access Journals (Sweden)

    Swapan K. Ghosh

    2002-04-01

    Full Text Available Abstract: We discuss the basic concepts of density functional theory (DFT as applied to materials modeling in the microscopic, mesoscopic and macroscopic length scales. The picture that emerges is that of a single unified framework for the study of both quantum and classical systems. While for quantum DFT, the central equation is a one-particle Schrödinger-like Kohn-Sham equation, the classical DFT consists of Boltzmann-type distributions, both corresponding to a system of noninteracting particles in the field of a density-dependent effective potential, the exact functional form of which is unknown. One therefore approximates the exchange-correlation potential for quantum systems and the excess free energy density functional or the direct correlation functions for classical systems. Illustrative applications of quantum DFT to microscopic modeling of molecular interaction and that of classical DFT to a mesoscopic modeling of soft condensed matter systems are highlighted.

  4. Next-generation genome-scale models for metabolic engineering.

    Science.gov (United States)

    King, Zachary A; Lloyd, Colton J; Feist, Adam M; Palsson, Bernhard O

    2015-12-01

    Constraint-based reconstruction and analysis (COBRA) methods have become widely used tools for metabolic engineering in both academic and industrial laboratories. By employing a genome-scale in silico representation of the metabolic network of a host organism, COBRA methods can be used to predict optimal genetic modifications that improve the rate and yield of chemical production. A new generation of COBRA models and methods is now being developed, encompassing many biological processes and simulation strategies, and these next-generation models enable new types of predictions. Here, three key examples of applying COBRA methods to strain optimization are presented and discussed. Then, an outlook is provided on the next generation of COBRA models and the new types of predictions they will enable for systems metabolic engineering. Copyright © 2014 Elsevier Ltd. All rights reserved.

  5. Development and experimental verification of a genome-scale metabolic model for Corynebacterium glutamicum

    Directory of Open Access Journals (Sweden)

    Hirasawa Takashi

    2009-08-01

    Full Text Available Abstract Background In silico genome-scale metabolic models enable the analysis of the characteristics of metabolic systems of organisms. In this study, we reconstructed a genome-scale metabolic model of Corynebacterium glutamicum on the basis of genome sequence annotation and physiological data. The metabolic characteristics were analyzed using flux balance analysis (FBA), and the results of FBA were validated using data from culture experiments performed at different oxygen uptake rates. Results The reconstructed genome-scale metabolic model of C. glutamicum contains 502 reactions and 423 metabolites. We collected the reactions and biomass components from databases and the literature, and made the model suitable for flux balance analysis by filling gaps in the reaction networks and removing inadequate loop reactions. Using the framework of FBA and our genome-scale metabolic model, we first simulated the changes in the metabolic flux profiles that occur on changing the oxygen uptake rate. The predicted production yields of carbon dioxide and organic acids agreed well with the experimental data. The metabolic profiles of amino acid production phases were also investigated. A comprehensive gene deletion study was performed in which the effects of gene deletions on metabolic fluxes were simulated; this helped in the identification of several genes whose deletion resulted in an improvement in organic acid production. Conclusion The genome-scale metabolic model provides useful information for the evaluation of the metabolic capabilities and prediction of the metabolic characteristics of C. glutamicum. This can form a basis for the in silico design of C. glutamicum metabolic networks for improved bioproduction of desirable metabolites.
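    The FBA and gene-deletion procedure described above can be sketched on a toy network (not the 502-reaction C. glutamicum model): maximize a biomass flux subject to steady state S·v = 0 and flux bounds, then repeat with one reaction knocked out. The 2-metabolite, 4-reaction network and all bounds below are assumptions for illustration, solved with `scipy.optimize.linprog`.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network (hypothetical): v1 uptake -> A, v2 A -> B, v3 B -> biomass,
# v4 A -> secreted byproduct. Steady state: S @ v = 0 for metabolites A, B.
S = np.array([
    [1, -1, 0, -1],            # A: uptake - conversion - secretion
    [0,  1, -1, 0],            # B: conversion - drain to biomass
])
bounds = [(0, 10), (0, None), (0, None), (0, None)]   # uptake capped at 10
c = [0, 0, -1, 0]              # linprog minimizes, so maximize v3 via -v3

wild = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds)
print("wild-type growth:", -wild.fun)

# in silico gene deletion: force the flux of reaction v2 to zero
ko_bounds = list(bounds)
ko_bounds[1] = (0, 0)
ko = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=ko_bounds)
print("knockout growth:", -ko.fun)
```

    A genome-scale study is the same linear program with thousands of columns, re-solved once per candidate deletion; the sign of the change in the objective (or in a secondary flux such as an organic acid exchange) is what identifies promising knockouts.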

  6. Transient Recharge Estimability Through Field-Scale Groundwater Model Calibration.

    Science.gov (United States)

    Knowling, Matthew J; Werner, Adrian D

    2017-11-01

    The estimation of recharge through groundwater model calibration is hampered by the nonuniqueness of recharge and aquifer parameter values. It has been shown recently that the estimability of spatially distributed recharge through calibration of steady-state models for practical situations (i.e., real-world, field-scale aquifer settings) is limited by the need for excessive amounts of hydraulic-parameter and groundwater-level data. However, the extent to which temporal recharge variability can be informed through transient model calibration, which involves larger water-level datasets, but requires the additional consideration of storage parameters, is presently unknown for practical situations. In this study, time-varying recharge estimates, inferred through calibration of a field-scale highly parameterized groundwater model, are systematically investigated subject to changes in (1) the degree to which hydraulic parameters including hydraulic conductivity (K) and specific yield (Sy) are constrained, (2) the number of water-level calibration targets, and (3) the temporal resolution (up to monthly time steps) at which recharge is estimated. The analysis involves the use of a synthetic reality (a reference model) based on a groundwater model of Uley South Basin, South Australia. Identifiability statistics are used to evaluate the ability of recharge and hydraulic parameters to be estimated uniquely. Results show that reasonable estimates of monthly recharge (recharge root-mean-squared error) require a considerable amount of transient water-level data, and that the spatial distribution of K is known. Joint estimation of recharge, Sy and K, however, precludes reasonable inference of recharge and hydraulic parameter values. We conclude that the estimation of temporal recharge variability through calibration may be impractical for real-world settings. © 2017, National Ground Water Association.

  7. Predictive Modelling to Identify Near-Shore, Fine-Scale Seabird Distributions during the Breeding Season.

    Directory of Open Access Journals (Sweden)

    Victoria C Warwick-Evans

    Full Text Available During the breeding season seabirds are constrained to coastal areas and are restricted in their movements, spending much of their time in near-shore waters either loafing or foraging. However, in using these areas they may be threatened by anthropogenic activities such as fishing, watersports and coastal developments including marine renewable energy installations. Although many studies describe large-scale interactions between seabirds and the environment, the drivers behind near-shore, fine-scale distributions are not well understood. For example, Alderney is an important breeding ground for many species of seabird and has a diversity of human uses of the marine environment, thus providing an ideal location to investigate the near-shore, fine-scale interactions between seabirds and the environment. We used vantage point observations of seabird distribution, collected during the 2013 breeding season, in order to identify and quantify some of the environmental variables affecting the near-shore, fine-scale distribution of seabirds in Alderney's coastal waters. We validate the models with observation data collected in 2014 and show that water depth, distance to the intertidal zone, and distance to the nearest seabird nest are key predictors of the distribution of Alderney's seabirds. AUC values for each species suggest that these models perform well, although the model for shags performed better than those for auks and gulls. While further unexplained underlying localised variation in the environmental conditions will undoubtedly affect the fine-scale distribution of seabirds in near-shore waters, we demonstrate the potential of this approach in marine planning and decision making.
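    The AUC metric used above to score the distribution models has a simple rank-sum form: the probability that a randomly chosen presence receives a higher predicted suitability than a randomly chosen absence, with ties counted as one half. A minimal sketch, with made-up presence/absence labels and model scores:

```python
# Mann-Whitney form of the area under the ROC curve (AUC).
def auc(labels, scores):
    """AUC = P(score of a random positive > score of a random negative),
    counting ties as 1/2."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# hypothetical presence/absence observations vs. predicted habitat suitability
labels = [1, 1, 1, 0, 0, 0, 1, 0]
scores = [0.9, 0.8, 0.6, 0.4, 0.3, 0.2, 0.5, 0.55]
print(auc(labels, scores))   # -> 0.9375
```

    An AUC of 0.5 corresponds to a model no better than chance, and 1.0 to perfect discrimination, which is why per-species AUC values allow the shag, auk and gull models to be compared directly.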

  8. Dynamic occupancy models for analyzing species' range dynamics across large geographic scales.

    Science.gov (United States)

    Bled, Florent; Nichols, James D; Altwegg, Res

    2013-12-01

    Large-scale biodiversity data are needed to predict species' responses to global change and to address basic questions in macroecology. While such data are increasingly becoming available, their analysis is challenging because of the typically large heterogeneity in spatial sampling intensity and the need to account for observation processes. Two further challenges are accounting for spatial effects that are not explained by covariates, and drawing inference on dynamics at these large spatial scales. We developed dynamic occupancy models to analyze large-scale atlas data. In addition to occupancy, these models estimate local colonization and persistence probabilities. We accounted for spatial autocorrelation using conditional autoregressive models and autologistic models. We fitted the models to detection/nondetection data collected on a quarter-degree grid across southern Africa during two atlas projects, using the hadeda ibis (Bostrychia hagedash) as an example. The model accurately reproduced the range expansion between the first (SABAP1: 1987-1992) and second (SABAP2: 2007-2012) Southern African Bird Atlas Project into the drier parts of interior South Africa. Grid cells occupied during SABAP1 generally remained occupied, but colonization of unoccupied grid cells was strongly dependent on the number of occupied grid cells in the neighborhood. The detection probability strongly varied across space due to variation in effort, observer identity, seasonality, and unexplained spatial effects. We present a flexible hierarchical approach for analyzing grid-based atlas data using dynamical occupancy models. Our model is similar to a species' distribution model obtained using generalized additive models but has a number of advantages. Our model accounts for the heterogeneous sampling process, spatial correlation, and perhaps most importantly, allows us to examine dynamic aspects of species ranges.
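    The colonization/persistence process described above can be sketched as a stochastic simulation on a small grid (a toy forward model, not the authors' hierarchical estimation): occupied cells persist with probability phi, and empty cells are colonised with a probability that increases with the number of occupied neighbours, the autologistic effect. All parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
phi = 0.95                        # persistence probability (assumed)
n, steps = 20, 40
z = np.zeros((n, n), dtype=int)   # occupancy state per grid cell
z[9:11, 9:11] = 1                 # small initial range in the centre

def occupied_neighbours(z):
    # 4-neighbour counts; np.roll wraps the edges (periodic, for simplicity)
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1))

for _ in range(steps):
    k = occupied_neighbours(z)
    gamma = 1 - (1 - 0.2) ** k    # colonisation prob., 0.2 per occupied neighbour
    u = rng.random((n, n))
    z = np.where(z == 1, (u < phi).astype(int), (u < gamma).astype(int))

print("occupied fraction:", z.mean())
```

    Fitting the real model runs this logic in reverse: detection/nondetection data and an observation submodel are used to estimate phi and the gamma coefficients, rather than simulating from known values.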

  9. Device Scale Modeling of Solvent Absorption using MFIX-TFM

    Energy Technology Data Exchange (ETDEWEB)

    Carney, Janine E. [National Energy Technology Lab. (NETL), Albany, OR (United States); Finn, Justin R. [National Energy Technology Lab. (NETL), Albany, OR (United States); Oak Ridge Inst. for Science and Education (ORISE), Oak Ridge, TN (United States)

    2016-10-01

    Recent climate change is largely attributed to greenhouse gases (e.g., carbon dioxide, methane), and fossil fuels account for a large majority of global CO2 emissions. That said, fossil fuels will continue to play a significant role in the generation of power for the foreseeable future. The extent to which CO2 is emitted needs to be reduced; carbon capture and sequestration are also necessary actions to tackle climate change. Different approaches exist for CO2 capture, including post-combustion and pre-combustion technologies, oxy-fuel combustion and/or chemical looping combustion. The focus of this effort is on post-combustion solvent-absorption technology. To apply CO2 technologies at commercial scale, the availability, maturity and potential for scalability of a technology need to be considered. Solvent absorption is a proven technology, but not at the scale needed by a typical power plant. The scale-up, scale-down and design of laboratory and commercial packed bed reactors depend heavily on specific knowledge of the two-phase pressure drop, liquid holdup, wetting efficiency and mass transfer efficiency as a function of operating conditions. Simple scaling rules often fail to provide a proper design. Conventional reactor design modeling approaches generally characterize complex non-ideal flow and mixing patterns using simplified and/or mechanistic flow assumptions. While there are varying levels of complexity used within these approaches, none of these models resolve the local velocity fields. Consequently, they are unable to account for important design factors such as flow maldistribution and channeling from a fundamental perspective. Ideally, design would be aided by the development of predictive models based on a truer representation of the physical and chemical processes that occur at different scales. Computational fluid dynamic (CFD) models are based on multidimensional flow equations with first

  10. Workshop on Human Activity at Scale in Earth System Models

    Energy Technology Data Exchange (ETDEWEB)

    Allen, Melissa R. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Aziz, H. M. Abdul [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Coletti, Mark A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Kennedy, Joseph H. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Nair, Sujithkumar S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Omitaomu, Olufemi A. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2017-01-26

    Changing human activity within a geographical location may have significant influence on the global climate, but that activity must be parameterized in such a way as to allow these high-resolution sub-grid processes to affect global climate within that modeling framework. Additionally, we must have tools that provide decision support and inform local and regional policies regarding mitigation of and adaptation to climate change. The development of next-generation earth system models, that can produce actionable results with minimum uncertainties, depends on understanding global climate change and human activity interactions at policy implementation scales. Unfortunately, at best we currently have only limited schemes for relating high-resolution sectoral emissions to real-time weather, ultimately to become part of larger regions and well-mixed atmosphere. Moreover, even our understanding of meteorological processes at these scales is imperfect. This workshop addresses these shortcomings by providing a forum for discussion of what we know about these processes, what we can model, where we have gaps in these areas and how we can rise to the challenge to fill these gaps.

  11. Modelling biological invasions: Individual to population scales at interfaces

    KAUST Repository

    Belmonte-Beitia, J.

    2013-10-01

    Extracting the population level behaviour of biological systems from that of the individual is critical in understanding dynamics across multiple scales and thus has been the subject of numerous investigations. Here, the influence of spatial heterogeneity in such contexts is explored for interfaces with a separation of the length scales characterising the individual and the interface, a situation that can arise in applications involving cellular modelling. As an illustrative example, we consider cell movement between white and grey matter in the brain which may be relevant in considering the invasive dynamics of glioma. We show that while one can safely neglect intrinsic noise, at least when considering glioma cell invasion, profound differences in population behaviours emerge in the presence of interfaces with only subtle alterations in the dynamics at the individual level. Transport driven by local cell sensing generates predictions of cell accumulations along interfaces where cell motility changes. This behaviour is not predicted with the commonly used Fickian diffusion transport model, but can be extracted from preliminary observations of specific cell lines in recent, novel, cryo-imaging. Consequently, these findings suggest a need to consider the impact of individual behaviour, spatial heterogeneity and especially interfaces in experimental and modelling frameworks of cellular dynamics, for instance in the characterisation of glioma cell motility. © 2013 Elsevier Ltd.

  12. A Dynamic Pore-Scale Model of Imbibition

    DEFF Research Database (Denmark)

    Mogensen, Kristian; Stenby, Erling Halfdan

    1998-01-01

    We present a dynamic pore-scale network model of imbibition, capable of calculating residual oil saturation for any given capillary number, viscosity ratio, contact angle and aspect ratio. Our goal is not to predict the outcome of core floods, but rather to perform a sensitivity analysis of the above-mentioned parameters, except the viscosity ratio. We find that contact angle, aspect ratio and capillary number all have a significant influence on the competition between piston-like advance, leading to high recovery, and snap-off, causing oil entrapment. Due to enormous CPU-time requirements we ... been entirely inhibited, in agreement with results obtained by Blunt using a quasi-static model. For higher aspect ratios, the effect of rate and contact angle is more pronounced. Many core floods are conducted at capillary numbers in the range 10^-7 to 10^-6. We believe that the excellent recoveries

  13. Utility of collecting metadata to manage a large scale conditions database in ATLAS

    CERN Document Server

    Gallas, EJ; The ATLAS collaboration; Borodin, M; Formica, A

    2014-01-01

    The ATLAS Conditions Database, based on the LCG Conditions Database infrastructure, contains a wide variety of information needed in online data taking and offline analysis. The total volume of ATLAS conditions data is in the multi-Terabyte range. Internally, the active data is divided into 65 separate schemas (each with hundreds of underlying tables) according to overall data taking type, detector subsystem, and whether the data is used offline or strictly online. While each schema has a common infrastructure, each schema's data is entirely independent of other schemas, except at the highest level, where sets of conditions from each subsystem are tagged globally for ATLAS event data reconstruction and reprocessing. The partitioned nature of the conditions infrastructure works well for most purposes, but metadata about each schema is problematic to collect in global tools from such a system because it is only accessible via LCG tools schema by schema. This makes it difficult to get an overview of all schemas,...

  14. Upscaling hydraulic conductivity from measurement-scale to model-scale

    Science.gov (United States)

    Gunnink, Jan; Stafleu, Jan; Maljers, Densie; Schokker, Jeroen

    2013-04-01

    The Geological Survey of the Netherlands systematically produces both shallow ... models, enabling the uncertainty of the model results to be calculated. One of the parameters that is subsequently assigned to the voxels in the GeoTOP model is hydraulic conductivity (both horizontal and vertical). Hydraulic conductivities are measured on samples taken from high-quality drillings, which are subjected to falling-head hydraulic conductivity tests. Samples are taken for all combinations of lithostratigraphy, facies and lithology that are present in the GeoTOP model. The volume of the samples is orders of magnitude smaller than the volume of a voxel in the GeoTOP model. Apart from that, the heterogeneity that occurs within a voxel is not accounted for in the GeoTOP model, since every voxel gets a single lithology that is deemed representative for the entire voxel. To account for both the difference in volume and the within-voxel heterogeneity, an upscaling procedure was developed to produce upscaled hydraulic conductivities for each GeoTOP voxel. A very fine 3D grid of 0.5 x 0.5 x 0.05 m is created that covers the GeoTOP voxel size (100 x 100 x 0.5 m) plus half of the dimensions of the GeoTOP voxel to counteract undesired edge effects. It is assumed that the scale of the samples is comparable to the voxel size of this fine grid. For each lithostratigraphy and facies combination, the spatial correlation structure (variogram) of the lithological classes is used to create 50 equiprobable distributions of lithology for the fine grid with sequential indicator simulation. Then, for each of the lithology realizations, a hydraulic conductivity is assigned to the simulated lithology class using sequential Gaussian simulation, again with the appropriate variogram. This results in 50 3D models of hydraulic conductivities on the fine grid. 
For each of these hydraulic conductivity models, a hydraulic head difference of 1 m between the top and bottom of the model is used to calculate the flux at the bottom of the
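    The final step above, backing out an effective conductivity from the flux under a unit head difference, can be sketched in 1D: for flow perpendicular to layering the cells act as resistances in series, so the upscaled vertical conductivity reduces to the harmonic mean of the cell values. The conductivities below are illustrative, not GeoTOP data.

```python
import numpy as np

dz = 0.05                                   # fine-grid cell height (m)
k = np.array([1.0, 0.5, 0.01, 0.8, 1.2])    # cell conductivities (m/d), assumed
L = dz * k.size                             # total column height (m)
dh = 1.0                                    # imposed head difference (m)

# series resistances: dh = q * sum(dz / k_i), so the Darcy flux is
q = dh / np.sum(dz / k)
k_eff = q * L / dh                          # Darcy's law rearranged for effective K
print(k_eff, k.size / np.sum(1.0 / k))      # identical: the harmonic mean
```

    Note how the single low-conductivity cell (0.01 m/d) dominates the result, which is exactly why a flow-based upscaling over lithology realizations is preferred to simply averaging sample measurements.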

  15. Hybrid Modelling of Individual Movement and Collective Behaviour

    KAUST Repository

    Franz, Benjamin

    2013-01-01

    Mathematical models of dispersal in biological systems are often written in terms of partial differential equations (PDEs) which describe the time evolution of population-level variables (concentrations, densities). A more detailed modelling approach is given by individual-based (agent-based) models which describe the behaviour of each organism. In recent years, an intermediate modelling methodology, hybrid modelling, has been applied to a number of biological systems. These hybrid models couple an individual-based description of cells/animals with a PDE model of their environment. In this chapter, we review hybrid models in the literature with a focus on the mathematical challenges of this modelling approach. The detailed analysis is presented using the example of chemotaxis, where cells move according to extracellular chemicals that can be altered by the cells themselves. In this case, individual-based models of cells are coupled with PDEs for extracellular chemical signals. Travelling waves in these hybrid models are investigated. In particular, we show that, contrary to the PDEs, hybrid chemotaxis models only develop a transient travelling wave. © 2013 Springer-Verlag Berlin Heidelberg.
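    The hybrid coupling described above can be sketched in 1D (a toy, not the chapter's models): cells perform a biased random walk up the gradient of a chemical c(x, t), while c obeys a discretized PDE (diffusion plus production by the cells). All rates, grid sizes and the bias rule are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
nx, dx, dt = 100, 1.0, 0.1
D, prod, chi = 1.0, 0.05, 0.5            # chemical diffusivity, secretion, bias
c = np.zeros(nx)                         # PDE field: chemical concentration
cells = rng.integers(40, 60, size=200)   # individual-based part: cell positions

for _ in range(500):
    # PDE step: explicit diffusion with no-flux boundaries, then secretion
    lap = np.zeros(nx)
    lap[1:-1] = c[2:] - 2 * c[1:-1] + c[:-2]
    lap[0], lap[-1] = c[1] - c[0], c[-2] - c[-1]
    c += dt * D * lap / dx**2
    np.add.at(c, cells, prod * dt)       # each cell secretes at its site

    # individual-based step: unit jumps, biased by the local chemical gradient
    grad = np.gradient(c, dx)
    p_right = np.clip(0.5 + chi * grad[cells], 0.0, 1.0)
    move = np.where(rng.random(cells.size) < p_right, 1, -1)
    cells = np.clip(cells + move, 0, nx - 1)

print("mean cell position:", cells.mean())
```

    Because the cells both sense and produce the chemical, the two descriptions feed back on each other each step; this two-way coupling is the source of the analytical difficulties (and the transient travelling waves) discussed in the chapter.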

  16. A structural equation modelling of the academic self-concept scale

    Directory of Open Access Journals (Sweden)

    Musa Matovu

    2014-03-01

    Full Text Available The study aimed at validating the academic self-concept scale by Liu and Wang (2005) in measuring academic self-concept among university students. Structural equation modelling was used to validate the scale, which was composed of two subscales: academic confidence and academic effort. The study was conducted on university students, males and females, from different levels of study and faculties. In this study the influence of academic self-concept on academic achievement was assessed; it was tested whether the hypothesised model fitted the data; the invariance of the path coefficients among the moderating variables was analysed; and it was also highlighted whether academic confidence and academic effort measured academic self-concept. The results from the model revealed that academic self-concept influenced academic achievement and that the hypothesised model fitted the data. The results also supported the model, as the causal structure was not sensitive to gender, levels of study, or faculties of students; hence, it is applicable to all the groups taken as moderating variables. It was also noted that academic confidence and academic effort are a measure of academic self-concept. According to the results, the academic self-concept scale by Liu and Wang (2005) was deemed adequate for collecting information about academic self-concept among university students.

  17. Multiscale Modeling of Cell Interaction in Angiogenesis: From the Micro- to Macro-scale

    Science.gov (United States)

    Pillay, Samara; Maini, Philip; Byrne, Helen

    Solid tumors require a supply of nutrients to grow in size. To this end, tumors induce the growth of new blood vessels from existing vasculature through the process of angiogenesis. In this work, we use a discrete agent-based approach to model the behavior of individual endothelial cells during angiogenesis. We incorporate crowding effects through volume exclusion, motility of cells through biased random walks, and include birth and death processes. We use the transition probabilities associated with the discrete models to determine collective cell behavior, in terms of partial differential equations, using a Markov chain and master equation framework. We find that the cell-level dynamics gives rise to a migrating cell front in the form of a traveling wave on the macro-scale. The behavior of this front depends on the cell interactions that are included and the extent to which volume exclusion is taken into account in the discrete micro-scale model. We also find that well-established continuum models of angiogenesis cannot distinguish between certain types of cell behavior on the micro-scale. This may impact drug development strategies based on these models.
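    The micro-scale ingredients named above, volume exclusion, biased/unbiased random walks, and birth events, can be sketched with a 1D lattice toy model (not the authors' angiogenesis code): a move is aborted if the target site is occupied, and cells occasionally proliferate into an empty neighbour, so the occupied region spreads as a discrete front, the counterpart of the macro-scale travelling wave. All rates are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps, p_prolif = 200, 400, 0.05
occ = np.zeros(n, dtype=int)       # site occupancy, at most one cell per site
occ[:10] = 1                       # cells initially at the left edge

for _ in range(steps):
    for i in np.flatnonzero(occ):              # each cell acts once per step
        j = i + rng.choice((-1, 1))            # attempted move target
        if 0 <= j < n and occ[j] == 0:         # volume exclusion: target empty?
            if rng.random() < p_prolif:
                occ[j] = 1                     # birth into the empty site
            else:
                occ[i], occ[j] = 0, 1          # ordinary move

front = np.flatnonzero(occ).max()
print("front position:", front)
```

    Averaging many such realisations and taking the continuum limit of the transition probabilities is what yields the macro-scale PDE; the exclusion rule is precisely the micro-scale detail that different continuum models may fail to distinguish.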

  18. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate Scale Water Quality Model

    Energy Technology Data Exchange (ETDEWEB)

    Khangaonkar, Tarang; Sackmann, Brandon S.; Long, Wen; Mohamedali, Teizeen; Roberts, Mindy

    2012-10-01

    The Salish Sea, including Puget Sound, is a large estuarine system bounded by over seven thousand miles of complex shorelines; it consists of several subbasins and many large inlets with distinct properties of their own. Pacific Ocean water enters Puget Sound through the Strait of Juan de Fuca at depth over the Admiralty Inlet sill. Ocean water mixed with freshwater discharges from runoff, rivers, and wastewater outfalls exits Puget Sound through the brackish surface outflow layer. Nutrient pollution is considered one of the largest threats to Puget Sound. There is considerable interest in understanding the effect of nutrient loads on the water quality and ecological health of Puget Sound in particular and the Salish Sea as a whole. The Washington State Department of Ecology (Ecology) contracted with Pacific Northwest National Laboratory (PNNL) to develop a coupled hydrodynamic and water quality model. The water quality model simulates algae growth, dissolved oxygen (DO), and nutrient dynamics in Puget Sound to inform potential Puget Sound-wide nutrient management strategies. Specifically, the project is expected to help determine 1) whether current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reduction is necessary to reduce or control human impacts to DO levels in the sensitive areas. The project did not include any additional data collection but instead relied on currently available information. This report describes the model development effort conducted during the period 2009 to 2012 under a U.S. Environmental Protection Agency (EPA) cooperative agreement with PNNL, Ecology, and the University of Washington awarded under the National Estuary Program.

  19. Leptogenesis in GeV-scale seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Hernández, P.; Kekic, M. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain); López-Pavón, J. [SISSA and INFN Sezione di Trieste,via Bonomea 265, Trieste, 34136 (Italy); Racker, J.; Rius, N. [Instituto de Física Corpuscular, Universidad de Valencia and CSIC,Edificio Institutos Investigación, Apt. 22085, Valencia, E-46071 (Spain)

    2015-10-09

    We revisit the production of leptonic asymmetries in minimal extensions of the Standard Model that can explain neutrino masses, involving extra singlets with Majorana masses in the GeV scale. We study the quantum kinetic equations both analytically, via a perturbative expansion up to third order in the mixing angles, and numerically. The analytical solution allows us to identify the relevant CP invariants, and simplifies the exploration of the parameter space. We find that sizeable lepton asymmetries are compatible with non-degenerate neutrino masses and measurable active-sterile mixings.

  20. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    DEFF Research Database (Denmark)

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads...

  1. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    OpenAIRE

    Rogowski, K.; Hansen, Martin Otto Laver; Maroński, R.; Lichota, P.

    2016-01-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimen...

  2. Collection/aggregation algorithms in Lagrangian cloud microphysical models: rigorous evaluation in box model simulations

    Science.gov (United States)

    Unterstrasser, Simon; Hoffmann, Fabian; Lerch, Marion

    2017-04-01

    Recently, several Lagrangian microphysical models have been developed which use a large number of (computational) particles to represent a cloud. In particular, the collision process leading to coalescence of cloud droplets or aggregation of ice crystals is implemented differently in various models. Three existing implementations are reviewed and extended, and their performance is evaluated by a comparison with well-established analytical and bin model solutions. In this first step of rigorous evaluation, box model simulations, with collection/aggregation being the only process considered, have been performed for the three well-known kernels of Golovin, Long and Hall. Besides numerical parameters, like the time step and the number of simulation particles (SIPs) used, the details of how the initial SIP ensemble is created from a prescribed, analytically defined size distribution are crucial for the performance of the algorithms. Using a constant weight technique, as done in previous studies, greatly underestimates the quality of the algorithms. Using better initialisation techniques considerably reduces the number of required SIPs to obtain realistic results. From the box model results, recommendations for the collection/aggregation implementation in higher dimensional model setups are derived. Suitable algorithms are equally relevant to treating the warm rain process and aggregation in cirrus.
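As a rough, hypothetical illustration of the all-or-nothing style of Monte Carlo collection used by some Lagrangian schemes (the kernel coefficient, the pair-probability form, and all values below are assumptions for the sketch, not taken from this record), a minimal box-model collection step over SIP pairs might look like:

```python
import random

B = 1500.0  # Golovin kernel coefficient (hypothetical value); K(x, y) = B * (x + y)

def golovin_kernel(x, y):
    """Golovin (sum-of-masses) collection kernel."""
    return B * (x + y)

def collection_step(sips, volume, dt, rng=random):
    """One Monte Carlo collection step over all SIP pairs.

    Each SIP is a [multiplicity, droplet_mass] list. All-or-nothing
    update: with probability p, every droplet of the lower-multiplicity
    SIP collects one droplet from the higher-multiplicity SIP.
    dt must be small enough that p stays below 1.
    """
    n = len(sips)
    for i in range(n):
        for j in range(i + 1, n):
            (ni, xi), (nj, xj) = sips[i], sips[j]
            p = golovin_kernel(xi, xj) * max(ni, nj) * dt / volume
            if rng.random() < p:
                if ni >= nj:
                    sips[i][0] = ni - nj      # SIP i loses nj droplets
                    sips[j][1] = xi + xj      # SIP j's droplets each grow
                else:
                    sips[j][0] = nj - ni
                    sips[i][1] = xi + xj
    sips[:] = [s for s in sips if s[0] > 0]   # drop exhausted SIPs
```

By construction the total mass, the sum of multiplicity times droplet mass over all SIPs, is conserved by each update.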

  3. Sample size for collecting germplasms–a polyploid model with ...

    Indian Academy of Sciences (India)

    Numerous expressions/results developed for germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to ...

  4. Sample size for collecting germplasms – a polyploid model with ...

    Indian Academy of Sciences (India)

    Unknown

    germplasm collection/regeneration for diploid populations by earlier workers can be directly deduced from our general expression by assigning appropriate values of the corresponding parameters. A seed factor which influences the plant sample size has also been isolated to aid the collectors in selecting the appropriate.

  5. Recent European Challenges and the Danish Collective Agreement Model

    DEFF Research Database (Denmark)

    Larsen, Trine Pernille; Navrbjerg, Steen Erik

    are related to the new forms of cross-border collaboration and negotiations taking place within multi-national corporations (MNCs). This research paper examines a series of challenges facing the collective bargaining systems in Denmark, Estonia, Northern Ireland and Sweden. These countries represent four...

  6. Evaluation of two pollutant dispersion models over continental scales

    Science.gov (United States)

    Rodriguez, D.; Walker, H.; Klepikova, N.; Kostrikov, A.; Zhuk, Y.

    Two long-range emergency response models, one based on the particle-in-cell method of pollutant representation (ADPIC/U.S.) and the other based on the superposition of Gaussian puffs released periodically in time (EXPRESS/Russia), are evaluated using perfluorocarbon tracer data from the Across North America Tracer Experiment (ANATEX). The purpose of the study is to assess our current capabilities for simulating continental-scale dispersion processes and to use these assessments as a means to improve our modeling tools. The criteria for judging model performance are based on protocols devised by the Environmental Protection Agency and on other complementary tests. Most of these measures require the formation and analysis of surface concentration footprints (the surface manifestations of tracer clouds, which are sampled over 24-h intervals), whose dimensions, center-of-mass coordinates, and integral characteristics provide a basis for comparing observed and calculated concentration distributions. Generally speaking, the plumes associated with the 20 releases of perfluorocarbon (10 each from sources at Glasgow, MT and St. Cloud, MN) in January 1987 are poorly resolved by the sampling network when the source-to-receptor distances are less than about 1000 km. Within this undersampled region, both models chronically overpredict the sampler concentrations. Given this tendency, the computed areas of the surface footprints and their integral concentrations are likewise excessive. When the actual plumes spread out sufficiently for reasonable resolution, the observed (O) and calculated (C) footprint areas are usually within a factor of two of one another, thereby suggesting that the models possess some skill in the prediction of long-range diffusion. Deviations in the O and C plume trajectories, as measured by the distances of separation between the plume centroids, are on the order of 125 km per day for both models.
It appears that the inability of the models to simulate large-scale
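The "within a factor of two" comparison of observed and calculated footprint areas corresponds to the standard FAC2 statistic used in dispersion-model evaluation. A minimal sketch (an illustration, not code from the study):

```python
def fac2(observed, calculated):
    """Fraction of (O, C) pairs whose ratio C/O lies within a factor of
    two, a common air-quality model evaluation statistic.

    Pairs with non-positive observations are excluded from the ratio.
    """
    pairs = [(o, c) for o, c in zip(observed, calculated) if o > 0]
    within = sum(1 for o, c in pairs if 0.5 <= c / o <= 2.0)
    return within / len(pairs)
```

A FAC2 of 1.0 means every calculated footprint area agrees with the observation to within a factor of two.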

  7. The Site-Scale Saturated Zone Flow Model for Yucca Mountain

    Science.gov (United States)

    Al-Aziz, E.; James, S. C.; Arnold, B. W.; Zyvoloski, G. A.

    2006-12-01

    This presentation provides a reinterpreted conceptual model of the Yucca Mountain site-scale flow system subject to all quality assurance procedures. The results are based on a numerical model of the site-scale saturated zone beneath Yucca Mountain, which is used for performance assessment predictions of radionuclide transport and to guide future data collection and modeling activities. This effort started from the ground up with a revised and updated hydrogeologic framework model, which incorporates the latest lithology data, and increased grid resolution that better resolves the hydrogeologic framework, which was updated throughout the model domain. In addition, faults are much better represented using the 250 × 250 m spacing (compared to the previous model's 500 × 500 m spacing). Data collected since the previous model calibration effort have been included; they comprise all Nye County water-level data through Phase IV of their Early Warning Drilling Program. Target boundary fluxes are derived from the newest (2004) Death Valley Regional Flow System model from the U.S. Geological Survey. A consistent weighting scheme assigns importance to each measured water-level datum and boundary flux extracted from the regional model. The numerical model is calibrated by matching these weighted water level measurements and boundary fluxes using parameter estimation techniques, along with more informal comparisons of the model to hydrologic and geochemical information. The model software (hydrologic simulation code FEHM v2.24 and parameter estimation software PEST v5.5) and model setup facilitate efficient calibration of multiple conceptual models. Analyses evaluate the impact of these updates and additional data on the modeled potentiometric surface and the flowpaths emanating from below the repository. After examining the heads and permeabilities obtained from the calibrated models, we present particle pathways from the proposed repository and compare them to those from the

  8. Analysis, scale modeling, and full-scale tests of low-level nuclear-waste-drum response to accident environments

    Energy Technology Data Exchange (ETDEWEB)

    Huerta, M.; Lamoreaux, G.H.; Romesberg, L.E.; Yoshimura, H.R.; Joseph, B.J.; May, R.A.

    1983-01-01

    This report describes extensive full-scale and scale-model testing of 55-gallon drums used for shipping low-level radioactive waste materials. The tests conducted include static crush, single-can impact tests, and side impact tests of eight stacked drums. Static crush forces were measured and crush energies calculated. The tests were performed in full-, quarter-, and eighth-scale with different types of waste materials. The full-scale drums were modeled with standard food product cans. The response of the containers is reported in terms of drum deformations and lid behavior. The results of the scale model tests are correlated to the results of the full-scale drums. Two computer techniques for calculating the response of drum stacks are presented. 83 figures, 9 tables.
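Correlating scale-model results to full scale, as described above, typically relies on replica-scaling laws. The helper below sketches the standard exponents for a geometrically scaled model of the same material; these relations are general engineering practice, not values taken from the report:

```python
def replica_scaled(full_scale_value, scale, quantity):
    """Convert a full-scale quantity to its replica-model value.

    For a geometric scale factor s = model/full with identical materials
    (so stresses are equal at model and full scale), lengths scale as s,
    areas and forces as s**2, energies as s**3, times as s, and
    accelerations as 1/s.
    """
    exponents = {"length": 1, "area": 2, "force": 2,
                 "energy": 3, "time": 1, "acceleration": -1}
    return full_scale_value * scale ** exponents[quantity]
```

For example, a quarter-scale drum test should absorb (1/4)**3 = 1/64 of the full-scale crush energy.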

  9. Georeferenced and secure mobile health system for large scale data collection in primary care.

    Science.gov (United States)

    Sa, Joao H G; Rebelo, Marina S; Brentani, Alexandra; Grisi, Sandra J F E; Iwaya, Leonardo H; Simplicio, Marcos A; Carvalho, Tereza C M B; Gutierrez, Marco A

    2016-10-01

    Mobile health consists of applying mobile devices and communication capabilities to expand the coverage and improve the effectiveness of health care programs. The technology is particularly promising for developing countries, in which health authorities can take advantage of the flourishing mobile market to provide adequate health care to underprivileged communities, especially primary care. In Brazil, the Primary Care Information System (SIAB) receives primary health care data from all regions of the country, creating a rich database for health-related action planning. Family Health Teams (FHTs) collect this data in periodic visits to families enrolled in governmental programs, following an acquisition procedure that involves filling in paper forms. This procedure compromises the quality of the data provided to health care authorities and slows down the decision-making process. The objective was to develop a mobile system (GeoHealth) to address and overcome the aforementioned problems, and to deploy the proposed solution in a wide underprivileged metropolitan area of a major city in Brazil. The proposed solution comprises three main components: (a) an Application Server, with a database containing family health conditions; and two clients, (b) a Web Browser running visualization tools for management tasks, and (c) a data-gathering device (smartphone) to register and to georeference the family health data. A data security framework was designed to ensure the security of data, which was stored locally and transmitted over public networks. The system was successfully deployed at six primary care units in the city of Sao Paulo, where a total of 28,324 families/96,061 inhabitants are regularly followed up by government health policies. The health conditions observed in the population covered were: diabetes in 3.40%, hypertension (age >40) in 23.87%, and tuberculosis in 0.06%.
This estimated prevalence has enabled FHTs to set clinical appointments proactively, with the aim of

  10. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Directory of Open Access Journals (Sweden)

    Kevin J Tsai

    Full Text Available Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.

  11. Diagnostics for stochastic genome-scale modeling via model slicing and debugging.

    Science.gov (United States)

    Tsai, Kevin J; Chang, Chuan-Hsiung

    2014-01-01

    Modeling of biological behavior has evolved from simple gene expression plots represented by mathematical equations to genome-scale systems biology networks. However, due to obstacles in complexity and scalability of creating genome-scale models, several biological modelers have turned to programming or scripting languages and away from modeling fundamentals. In doing so, they have traded the ability to have exchangeable, standardized model representation formats, while those that remain true to standardized model representation are faced with challenges in model complexity and analysis. We have developed a model diagnostic methodology inspired by program slicing and debugging and demonstrate the effectiveness of the methodology on a genome-scale metabolic network model published in the BioModels database. The computer-aided identification revealed specific points of interest such as reversibility of reactions, initialization of species amounts, and parameter estimation that improved a candidate cell's adenosine triphosphate production. We then compared the advantages of our methodology over other modeling techniques such as model checking and model reduction. A software application that implements the methodology is available at http://gel.ym.edu.tw/gcs/.
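Program slicing applied to a reaction network, as in the methodology above, can be pictured as a backward reachability walk: starting from a species of interest, keep only the reactions (and their reactants) that can influence it. The following is a minimal, hypothetical illustration of the idea, not the authors' implementation:

```python
def backward_slice(reactions, target):
    """Backward slice of a reaction network with respect to one species.

    `reactions` maps a reaction id to a (reactants, products) pair of
    species lists. Returns the set of reactions and the set of species
    that can influence `target`, found by iterating to a fixed point.
    """
    needed_species = {target}
    needed_reactions = set()
    changed = True
    while changed:
        changed = False
        for rid, (reactants, products) in reactions.items():
            # a reaction matters if it produces any species we care about
            if rid not in needed_reactions and needed_species & set(products):
                needed_reactions.add(rid)
                needed_species |= set(reactants)
                changed = True
    return needed_reactions, needed_species
```

Everything outside the returned slice can be ignored while debugging the target species, which is what makes slicing useful on genome-scale models.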

  12. A simple landslide model at a laboratory scale

    Science.gov (United States)

    Atmajati, Elisabeth Dian; Yuliza, Elfi; Habil, Husni; Sadisun, Imam Ahmad; Munir, Muhammad Miftahul; Khairurrijal

    2017-07-01

    Landslide, one of the natural disasters that occur most frequently, often causes very adverse effects. Landslide early warning systems installed in prone areas measure physical parameters closely related to landslides and give warning signals indicating that a landslide is about to occur. To determine the critical values of the measured physical parameters and to test the early warning system itself, a laboratory-scale model of a rotational landslide was developed. This rotational landslide model had a size of 250×45×40 cm3 and was equipped with soil moisture sensors, accelerometers, and an automated measurement system. The soil moisture sensors were used to determine the water content in the soil sample. The accelerometers were employed to detect movements in the x-, y-, and z-directions. Flow and rotational landslides were therefore expected to be modeled and characterized. The developed landslide model can be used to evaluate the effects of slope, soil type, and water seepage on the incidence of landslides. The present experiment showed that the model can reproduce the occurrence of landslides. The presence of water seepage made the slope crack, and over time the crack grew bigger. The landslide that occurred was of the flow type, and it occurred when the soil sample was saturated with water. Soil movements in the x-, y-, and z-directions were also observed. Further experiments should be performed to realize the rotational landslide.

  13. Modelling of vegetative filter strips in catchment scale erosion control

    Directory of Open Access Journals (Sweden)

    K. RANKINEN

    2008-12-01

    Full Text Available The efficiency of vegetative filter strips in reducing erosion was assessed by simulation modelling in two catchments located in different parts of Finland. The areas of high erosion risk were identified with a Geographical Information System (GIS) combining digital spatial data on soil type, land use and field slopes. The efficiency of vegetative filter strips (VFS) was assessed with the ICECREAM model, a derivative of the CREAMS model which has been modified and adapted for Finnish conditions. The simulation runs were performed without filter strips and with strips of 1 m, 3 m and 15 m width. Four soil types and two crops (spring barley, winter wheat) were studied. The model assessments for fields without VFS showed that the amount of erosion is clearly dominated by slope gradient. The soil texture had a greater impact on erosion than the crop. The impact of the VFS on erosion reduction was highly variable. These model results were scaled up by combining them with the digital spatial data. The simulated efficiency of the VFS in erosion control over the whole catchment varied from 50 to 89%. A GIS-based erosion risk map of the other study catchment and an identification carried out by manual study using topographical paper maps were evaluated and validated by ground truthing. Both methods were able to identify major erosion risk areas, i.e. areas where VFS are particularly necessary. A combination of the GIS and the field method gives the best outcome.
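The GIS step of combining soil type, land use and slope into an erosion-risk map amounts to a per-cell raster overlay. A toy sketch with made-up factors and thresholds (the actual study used the ICECREAM model and Finnish spatial data):

```python
def erosion_risk(slope_pct, soil_k, cover_factor):
    """Classify one cell's erosion risk from slope (%), a soil
    erodibility factor and a land-cover factor.

    The multiplicative score and the class thresholds are illustrative
    assumptions only.
    """
    score = slope_pct * soil_k * cover_factor
    if score >= 4.0:
        return "high"
    if score >= 1.0:
        return "moderate"
    return "low"

def risk_map(slope, soil, cover):
    """Apply the classification cell by cell to equally shaped grids
    (lists of rows), mimicking a raster overlay."""
    return [[erosion_risk(s, k, c)
             for s, k, c in zip(srow, krow, crow)]
            for srow, krow, crow in zip(slope, soil, cover)]
```

High-risk cells in the output grid are the candidate locations where filter strips would be most valuable.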

  14. A multi-scale strength model with phase transformation

    Science.gov (United States)

    Barton, N.; Arsenlis, A.; Rhee, M.; Marian, J.; Bernier, J.; Tang, M.; Yang, L.

    2011-06-01

    We present a multi-scale strength model that includes phase transformation. In each phase, strength depends on pressure, strain rate, temperature, and evolving dislocation density descriptors. A donor cell type of approach is used for the transfer of dislocation density between phases. While the shear modulus can be modeled as smooth through the BCC to rhombohedral transformation in vanadium, the multi-phase strength model predicts abrupt changes in the material strength due to changes in dislocation kinetics. In the rhombohedral phase, the dislocation density is decomposed into populations associated with short and long Burgers vectors. Strength model construction employs an information passing paradigm to span from the atomistic level to the continuum level. Simulation methods in the overall hierarchy include density functional theory, molecular statics, molecular dynamics, dislocation dynamics, and continuum based approaches. We demonstrate the behavior of the model through simulations of Rayleigh Taylor instability growth experiments of the type used to assess material strength at high pressure and strain rate. This work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344 (LLNL-ABS-464695).

  15. Simplified scaling model for the THETA-pinch

    Energy Technology Data Exchange (ETDEWEB)

    Ewing, K. J.; Thomson, D. B.

    1982-02-01

    A simple 1D scaling model for the fast theta-pinch was developed and written as a code that would be flexible, inexpensive in computer time, and readily available for use with the Los Alamos explosive-driven high-magnetic-field program. The simplified model uses three successive separate stages: (1) a snowplow-like radial implosion, (2) an idealized resistive annihilation of the reverse bias field, and (3) an adiabatic compression stage of a beta = 1 plasma for which ideal pressure balance is assumed to hold. The code uses one adjustable fitting constant whose value was first determined by comparison with results from the Los Alamos Scylla III, Scyllacita, and Scylla IA theta-pinches.
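The adiabatic compression stage can be sketched from pressure balance alone: for a beta = 1 column, 2*n*k*T = B**2/(2*mu0), and adiabatic compression gives p proportional to n**gamma. The interface and numbers below are illustrative assumptions, not the Los Alamos code:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability (T*m/A)
E_EV = 1.602e-19        # joules per eV

def adiabatic_compression(n0, T0_eV, B_final, gamma=5.0 / 3.0):
    """Final state of the adiabatic stage of a beta = 1 theta-pinch.

    Pressure balance for a beta = 1 column: 2*n*k*T = B**2 / (2*mu0)
    (electron plus ion pressure). The adiabatic law p ~ n**gamma fixes
    the final density; pressure balance then fixes the temperature.
    Returns (n_final in m**-3, T_final in eV).
    """
    p_final = B_final ** 2 / (2.0 * MU0)
    p0 = 2.0 * n0 * T0_eV * E_EV
    n_final = n0 * (p_final / p0) ** (1.0 / gamma)
    T_final = p_final / (2.0 * n_final * E_EV)
    return n_final, T_final
```

Raising the confining field above the initial balance value compresses the column, so both density and temperature rise.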

  16. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.

  17. Reconstruction of groundwater depletion using a global scale groundwater model

    Science.gov (United States)

    de Graaf, Inge; van Beek, Rens; Sutanudjaja, Edwin; Wada, Yoshi; Bierkens, Marc

    2015-04-01

    Groundwater forms an integral part of the global hydrological cycle and is the world's largest accessible source of fresh water to satisfy human water needs. It buffers variable recharge rates over time, thereby effectively sustaining river flows in times of drought as well as evaporation in areas with shallow water tables. Moreover, although lateral groundwater flows are often slow, they cross topographic and administrative boundaries at appreciable rates. Despite the importance of groundwater, most global scale hydrological models do not consider surface water-groundwater interactions or include a lateral groundwater flow component. The main reason for this omission is the lack of consistent global-scale hydrogeological information needed to arrive at a more realistic representation of the groundwater system, i.e. including information on aquifer depths and the presence of confining layers. The latter holds vital information on the accessibility and quality of the global groundwater resource. In this study we developed a high resolution (5 arc-minutes) global scale transient groundwater model comprising confined and unconfined aquifers. This model is based on MODFLOW (McDonald and Harbaugh, 1988) and coupled with the land-surface model PCR GLOBWB (van Beek et al., 2011) via recharge and surface water levels. Aquifer properties were based on newly derived estimates of aquifer depths (de Graaf et al., 2014b) and thickness of confining layers from an integration of lithological and topographical information. They were further parameterized using available global datasets on lithology (Hartmann and Moosdorf, 2011) and permeability (Gleeson et al., 2014). In a sensitivity analysis the model was run with various hydrogeological parameter settings, under natural recharge only. Scenarios of past groundwater abstractions and corresponding recharge (Wada et al., 2012, de Graaf et al. 2014a) were evaluated. The resulting estimates of groundwater depletion are lower than

  18. Lichen elemental content bioindicators for air quality in upper Midwest, USA: A model for large-scale monitoring

    Science.gov (United States)

    Susan Will-Wolf; Sarah Jovan; Michael C. Amacher

    2017-01-01

    Our development of lichen elemental bioindicators for a United States of America (USA) national monitoring program is a useful model for other large-scale programs. Concentrations of 20 elements were measured, validated, and analyzed for 203 samples of five common lichen species. Collections were made by trained non-specialists near 75 permanent plots and an expert...

  19. Multi-scale modelling for HEDP experiments on Orion

    Science.gov (United States)

    Sircombe, N. J.; Ramsay, M. G.; Hughes, S. J.; Hoarty, D. J.

    2016-05-01

    The Orion laser at AWE couples high energy long-pulse lasers with high intensity short-pulses, allowing material to be compressed beyond solid density and heated isochorically. This experimental capability has been demonstrated as a platform for conducting High Energy Density Physics material properties experiments. A clear understanding of the physics in experiments at this scale, combined with a robust, flexible and predictive modelling capability, is an important step towards more complex experimental platforms and ICF schemes which rely on high power lasers to achieve ignition. These experiments present a significant modelling challenge: the system is characterised by hydrodynamic effects over nanoseconds, driven by long-pulse lasers or the pre-pulse of the petawatt beams, and fast electron generation, transport, and heating effects over picoseconds, driven by short-pulse high intensity lasers. We describe the approach taken at AWE: to integrate a number of codes which capture the detailed physics for each spatial and temporal scale. Simulations of the heating of buried aluminium microdot targets are discussed and we consider the role such tools can play in understanding the impact of changes to the laser parameters, such as frequency and pre-pulse, as well as understanding effects which are difficult to observe experimentally.

  20. A small-scale anatomical dosimetry model of the liver

    Science.gov (United States)

    Stenvall, Anna; Larsson, Erik; Strand, Sven-Erik; Jönsson, Bo-Anders

    2014-07-01

    Radionuclide therapy is a growing and promising approach for treating and prolonging the lives of patients with cancer. For therapies where high activities are administered, the liver can become a dose-limiting organ, often with a complex, non-uniform activity distribution and a resulting non-uniform absorbed-dose distribution. This paper therefore presents a small-scale dosimetry model for various source-target combinations within the human liver microarchitecture. Using Monte Carlo simulations, Medical Internal Radiation Dose formalism-compatible specific absorbed fractions were calculated for monoenergetic electrons, photons, and alpha particles, and for 125I, 90Y, 211At, 99mTc, 111In, 177Lu, 131I and 18F. S values and the ratio of local absorbed dose to the whole-organ average absorbed dose were calculated, enabling a transformation of dosimetry calculations from the macro- to the microstructure level. For heterogeneous activity distributions, for example uptake in Kupffer cells of radionuclides emitting low-energy electrons (125I) or high-LET alpha particles (211At), the target absorbed dose for the part of the space of Disse closest to the source was more than eight- and five-fold the average absorbed dose to the liver, respectively. With the increasing interest in radionuclide therapy of the liver, the presented model is an applicable tool for small-scale liver dosimetry in order to study detailed dose-effect relationships in the liver.
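In the MIRD formalism referenced here, an S value is the mean absorbed dose to a target region per decay in a source region: S = sum_i y_i * E_i * phi_i / m, summed over emission lines. A minimal, hypothetical helper (the emission data, absorbed fractions, and units are the caller's responsibility; this is not the paper's code):

```python
def s_value(emissions, absorbed_fractions, target_mass_kg):
    """MIRD-style S value in Gy per decay.

    `emissions` is a list of (yield_per_decay, energy_MeV) pairs and
    `absorbed_fractions` holds the matching source-to-target absorbed
    fractions (dimensionless, 0..1).
    """
    MEV_TO_J = 1.602e-13  # joules per MeV
    return sum(y * e * MEV_TO_J * af
               for (y, e), af in zip(emissions, absorbed_fractions)) / target_mass_kg
```

Multiplying an S value by the cumulated activity (total decays) in the source gives the absorbed dose to the target, which is how macro-level organ dosimetry is rescaled to microstructure targets.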

  1. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2003-01-31

    This paper discusses using the wavelets modeling technique as a mechanism for querying large-scale spatio-temporal scientific simulation data. Wavelets have been used successfully in time series analysis and in answering surprise and trend queries. Our approach, however, is driven by the need for compression, which is necessary for viable throughput given the size of the targeted data, along with the end user requirements from the discovery process. Our users would like to run fast queries to check the validity of the simulation algorithms used. In some cases users are willing to accept approximate results if the answer comes back within a reasonable time. In other cases they might want to identify a certain phenomenon and track it over time. We face a unique problem because of the data set sizes. It may take months to generate one set of the targeted data; because of its sheer size, the data cannot be stored on disk for long and thus needs to be analyzed immediately before it is sent to tape. We integrated wavelets within AQSIM, a system that we are developing to support exploration and analyses of tera-scale size data sets. We will discuss the way we utilized wavelets decomposition in our domain to facilitate compression and in answering a specific class of queries that is harder to answer with any other modeling technique. We will also discuss some of the shortcomings of our implementation and how to address them.
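As a toy illustration of the decompose/threshold/reconstruct idea behind wavelet compression (a plain Haar transform sketch; AQSIM's actual machinery is of course far more involved):

```python
def haar_decompose(signal):
    """Full Haar decomposition of a length-2**k signal: a list of detail
    levels (finest first) plus a final one-element approximation."""
    coeffs, approx = [], list(signal)
    while len(approx) > 1:
        avg = [(approx[i] + approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        det = [(approx[i] - approx[i + 1]) / 2 for i in range(0, len(approx), 2)]
        coeffs.append(det)
        approx = avg
    coeffs.append(approx)
    return coeffs

def haar_reconstruct(coeffs):
    """Invert haar_decompose (exactly, if no coefficients were dropped)."""
    approx = list(coeffs[-1])
    for det in reversed(coeffs[:-1]):
        approx = [v for a, d in zip(approx, det) for v in (a + d, a - d)]
    return approx

def compress(coeffs, threshold):
    """Lossy compression: zero out coefficients smaller than threshold."""
    return [[c if abs(c) >= threshold else 0.0 for c in level]
            for level in coeffs]
```

Approximate queries can then be answered from the thresholded coefficients alone, trading accuracy for storage and speed.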

  2. Calibration of Airframe and Occupant Models for Two Full-Scale Rotorcraft Crash Tests

    Science.gov (United States)

    Annett, Martin S.; Horta, Lucas G.; Polanco, Michael A.

    2012-01-01

    Two full-scale crash tests of an MD-500 helicopter were conducted in 2009 and 2010 at NASA Langley's Landing and Impact Research Facility in support of NASA's Subsonic Rotary Wing Crashworthiness Project. The first crash test was conducted to evaluate the performance of an externally mounted composite deployable energy absorber under combined impact conditions. In the second crash test, the energy absorber was removed to establish baseline loads that are regarded as severe but survivable. Accelerations and kinematic data collected from the crash tests were compared to a system integrated finite element model of the test article. Results from 19 accelerometers placed throughout the airframe were compared to finite element model responses. The model developed for the purposes of predicting acceleration responses from the first crash test was inadequate when evaluating more severe conditions seen in the second crash test. A newly developed model calibration approach that includes uncertainty estimation, parameter sensitivity, impact shape orthogonality, and numerical optimization was used to calibrate model results for the second full-scale crash test. This combination of heuristic and quantitative methods was used to identify modeling deficiencies, evaluate parameter importance, and propose required model changes. It is shown that the multi-dimensional calibration techniques presented here are particularly effective in identifying model adequacy. Acceleration results for the calibrated model were compared to test results and the original model results. There was a noticeable improvement in the pilot and co-pilot region, a slight improvement in the occupant model response, and an over-stiffening effect in the passenger region. This approach should be adopted early on, in combination with the building-block approaches that are customarily used, for model development and test planning guidance.
Complete crash simulations with validated finite element models can be used

  3. Fine Scale Projections of Indian Monsoonal Rainfall Using Statistical Models

    Science.gov (United States)

    Kulkarni, S.; Ghosh, S.; Rajendran, K.

    2012-12-01

    years of Indian precipitation pattern. The reason behind the failure of the bias-corrected model in projecting spatially non-uniform precipitation is the inability of the GCMs to model finer-scale geophysical processes under changed conditions. The results highlight the need to revisit bias correction methods for future projections, to incorporate finer-scale processes.

  4. Modeling complex systems: From the individual to the collective

    Science.gov (United States)

    Malmgren, R. Dean

    Over the past decade, researchers have identified several unifying properties of complex networks across technological, biological, and sociological disciplines. Although it is believed that these ubiquitous structural properties of complex networks may be explained by a unifying model, there is scarcely any evidence for an all-encompassing model which describes the evolution of all complex systems. In this thesis, we take some first steps toward understanding the evolution of complex systems by developing models for how individuals behave in response to environmental cues and interactions with other individuals. A distinguishing feature of this research is the systematic use of Monte Carlo hypothesis testing, which enables us to statistically test our models against empirical data and quantify the significance of the agreement. Using this methodology, we develop a model of human communication patterns and identify intriguing correlations in mentorship networks.
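
The Monte Carlo hypothesis-testing workflow described above can be sketched generically: simulate the null model many times, build the distribution of a test statistic, and compare the observed value against it. This is an illustrative sketch, not the thesis's exact procedure; the synthetic "observed" data, the exponential null model, and the coefficient-of-variation statistic are all assumptions chosen for the example.

```python
import numpy as np

# Generic illustration (not the author's exact procedure): Monte Carlo
# hypothesis testing of a model against empirical data. Here the "data"
# are synthetic inter-event times and the null model is a homogeneous
# Poisson process (exponential waiting times).
rng = np.random.default_rng(0)
observed = rng.weibull(1.3, size=200) * 10.0   # stand-in for empirical data

def statistic(waits):
    # coefficient of variation; equals 1 for exponential waiting times
    return waits.std() / waits.mean()

obs_stat = statistic(observed)

# Build the statistic's null distribution by simulating the model
n_sims = 2000
sim_stats = np.array([
    statistic(rng.exponential(scale=observed.mean(), size=observed.size))
    for _ in range(n_sims)
])

# Two-sided Monte Carlo p-value with the standard +1 correction
p_value = (np.sum(np.abs(sim_stats - sim_stats.mean())
                  >= np.abs(obs_stat - sim_stats.mean())) + 1) / (n_sims + 1)
print(f"observed CV = {obs_stat:.3f}, Monte Carlo p = {p_value:.3f}")
```

A small p-value indicates the model fails to reproduce the observed statistic; the +1 correction keeps the p-value strictly positive for a finite number of simulations.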

  5. Application of computer-aided multi-scale modelling framework - Aerosol case study

    DEFF Research Database (Denmark)

    Heitzig, Martina; Gregson, Christopher; Sin, Gürkan

    2011-01-01

    A computer-aided modelling tool for efficient multi-scale modelling has been developed and is applied to solve a multi-scale modelling problem related to design and evaluation of fragrance aerosol products. The developed modelling scenario spans three length scales and describes how droplets...

  6. A rate-dependent multi-scale crack model for concrete

    NARCIS (Netherlands)

    Karamnejad, A.; Nguyen, V.P.; Sluys, L.J.

    2013-01-01

    A multi-scale numerical approach for modeling cracking in heterogeneous quasi-brittle materials under dynamic loading is presented. In the model, a discontinuous crack model is used at macro-scale to simulate fracture and a gradient-enhanced damage model has been used at meso-scale to simulate

  7. Extension of landscape-based population viability models to ecoregional scales for conservation planning

    Science.gov (United States)

    Thomas W. Bonnot; Frank R. III Thompson; Joshua Millspaugh

    2011-01-01

    Landscape-based population models are potentially valuable tools in facilitating conservation planning and actions at large scales. However, such models have rarely been applied at ecoregional scales. We extended landscape-based population models to ecoregional scales for three species of concern in the Central Hardwoods Bird Conservation Region and compared model...

  8. A hybrid pore-scale and continuum-scale model for solute diffusion, reaction, and biofilm development in porous media

    Science.gov (United States)

    Tang, Youneng; Valocchi, Albert J.; Werth, Charles J.

    2015-03-01

    It is a challenge to upscale solute transport in porous media for multispecies bio-kinetic reactions because of incomplete mixing within the elementary volume and because biofilm growth can change porosity and affect pore-scale flow and diffusion. To address this challenge, we present a hybrid model that couples pore-scale subdomains to continuum-scale subdomains. While the pore-scale subdomains involving significant biofilm growth and reaction are simulated using pore-scale equations, the other subdomains are simulated using continuum-scale equations to save computational time. The pore-scale and continuum-scale subdomains are coupled using a mortar method to ensure continuity of solute concentration and flux at the interfaces. We present results for a simplified two-dimensional system, neglect advection, and use dual Monod kinetics for solute utilization and biofilm growth. The results based on the hybrid model are consistent with the results based on a pore-scale model for three test cases that cover a wide range of Damköhler (Da = reaction rate/diffusion rate) numbers for both homogeneous (spatially periodic) and heterogeneous pore structures. We compare results from the hybrid method with an upscaled continuum model and show that the latter is valid only for cases of small Damköhler numbers, consistent with other results reported in the literature.
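
The dual Monod kinetics used above for solute utilization and biofilm growth can be written as a product of two saturation terms, one per substrate. A minimal sketch with illustrative parameter values (the numbers below are assumptions, not taken from the paper):

```python
import numpy as np

def dual_monod_rate(S_donor, S_acceptor, k_max, K_d, K_a):
    """Dual Monod utilization rate: both substrates limit the reaction."""
    return k_max * (S_donor / (K_d + S_donor)) * (S_acceptor / (K_a + S_acceptor))

# Illustrative parameter values (not from the paper)
k_max, K_d, K_a = 1.0, 0.5, 0.2    # 1/day, mg/L, mg/L
S_d = np.array([0.05, 0.5, 5.0])   # electron-donor concentrations, mg/L
S_a = 1.0                          # electron-acceptor concentration, mg/L

rates = dual_monod_rate(S_d, S_a, k_max, K_d, K_a)
print(rates)  # rate rises with S_d and saturates below k_max
```

Each factor approaches 1 when its substrate is abundant, so the rate saturates; when either substrate is scarce, that term throttles the whole reaction, which is what makes incomplete pore-scale mixing matter for upscaling.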

  9. Stainless steel corrosion scale formed in reclaimed water: Characteristics, model for scale growth and metal element release.

    Science.gov (United States)

    Cui, Yong; Liu, Shuming; Smith, Kate; Hu, Hongying; Tang, Fusheng; Li, Yuhong; Yu, Kanghua

    2016-10-01

    Stainless steels generally have extremely good corrosion resistance, but are still susceptible to pitting corrosion. As a result, corrosion scales can form on the surface of stainless steel after extended exposure to aggressive aqueous environments. Corrosion scales play an important role in affecting water quality. These research results showed that interior regions of stainless steel corrosion scales have a high percentage of chromium phases. We reveal the morphology, micro-structure and physicochemical characteristics of stainless steel corrosion scales. Stainless steel corrosion scale is identified as a podiform chromite deposit according to these characteristics, which is unlike deposit formed during iron corrosion. A conceptual model to explain the formation and growth of stainless steel corrosion scale is proposed based on its composition and structure. The scale growth process involves pitting corrosion on the stainless steel surface and the consecutive generation and homogeneous deposition of corrosion products, which is governed by a series of chemical and electrochemical reactions. This model shows the role of corrosion scales in the mechanism of iron and chromium release from pitting corroded stainless steel materials. The formation of corrosion scale is strongly related to water quality parameters. The presence of HClO results in higher ferric content inside the scales. Cl- and SO42- ions in reclaimed water play an important role in corrosion pitting of stainless steel and promote the formation of scales.

  10. Toward Multi-scale Modeling and simulation of conduction in heterogeneous materials

    Energy Technology Data Exchange (ETDEWEB)

    Lechman, Jeremy B. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Battaile, Corbett Chandler. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Bolintineanu, Dan [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Cooper, Marcia A. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Erikson, William W. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Foiles, Stephen M. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Kay, Jeffrey J [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Phinney, Leslie M. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Piekos, Edward S. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Specht, Paul Elliott [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Wixom, Ryan R. [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States); Yarrington, Cole [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    This report summarizes a project in which the authors sought to develop and deploy: (i) experimental techniques to elucidate the complex, multiscale nature of thermal transport in particle-based materials; and (ii) modeling approaches to address current challenges in predicting performance variability of materials (e.g., identifying and characterizing physical-chemical processes and their couplings across multiple length and time scales, modeling information transfer between scales, and statically and dynamically resolving material structure and its evolution during manufacturing and device performance). Experimentally, several capabilities were successfully advanced. As discussed in Chapter 2, a flash diffusivity capability for measuring homogeneous thermal conductivity of pyrotechnic powders (and beyond) was advanced, leading to enhanced characterization of pyrotechnic materials and properties impacting component development. Chapter 4 describes first, albeit preliminary, success in resolving thermal fields at speeds and spatial scales relevant to energetic components. Chapter 7 summarizes the first ever (as far as the authors know) application of TDTR to actual pyrotechnic materials. This is the first attempt to actually characterize these materials at the interfacial scale. On the modeling side, new capabilities in image processing of experimental microstructures and direct numerical simulation on complicated structures were advanced (see Chapters 3 and 5). In addition, modeling work described in Chapter 8 led to improved prediction of interface thermal conductance from first principles calculations. Toward the second point, for a model system of packed particles, significant headway was made in implementing numerical algorithms and collecting data to justify the approach in terms of highlighting the phenomena at play and pointing the way forward in developing and informing the kind of modeling approach originally envisioned (see Chapter 6).
In

  11. Site-scale groundwater flow modelling of Ceberg

    Energy Technology Data Exchange (ETDEWEB)

    Walker, D. [Duke Engineering and Services (United States); Gylling, B. [Kemakta Konsult AB, Stockholm (Sweden)

    1999-06-01

    The Swedish Nuclear Fuel and Waste Management Company (SKB) SR 97 study is a comprehensive performance assessment illustrating the results for three hypothetical repositories in Sweden. In support of SR 97, this study examines the hydrogeologic modelling of the hypothetical site called Ceberg, which adopts input parameters from the SKB study site near Gideaa, in northern Sweden. This study uses a nested modelling approach, with a deterministic regional model providing boundary conditions to a site-scale stochastic continuum model. The model is run in Monte Carlo fashion to propagate the variability of the hydraulic conductivity to the advective travel paths from representative canister locations. A series of variant cases addresses uncertainties in the inference of parameters and the model of conductive fracture zones. The study uses HYDRASTAR, the SKB stochastic continuum (SC) groundwater modelling program, to compute the heads, Darcy velocities at each representative canister position, and the advective travel times and paths through the geosphere. The volumetric flow balance between the regional and site-scale models suggests that the nested modelling and associated upscaling of hydraulic conductivities preserve mass balance only in a general sense. In contrast, a comparison of the base and deterministic (Variant 4) cases indicates that the upscaling is self-consistent with respect to median travel time and median canister flux. These suggest that the upscaling of hydraulic conductivity is approximately self-consistent but the nested modelling could be improved. The Base Case yields the following results for a flow porosity of ε_f = 10^-4 and a flow-wetted surface area of a_r = 0.1 m^2/(m^3 rock): The median travel time is 1720 years. The median canister flux is 3.27x10^-5 m/year. The median F-ratio is 1.72x10^6 years/m. The base case and the deterministic variant suggest that the variability of the travel times within
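
The reported Base Case medians can be cross-checked under the standard transport relations t = ε_f L / q (advective travel time) and F = a_r t / ε_f (F-ratio). These relations are our assumption, not stated in the record, and the path length L is inferred, not reported:

```python
# Consistency check of the reported Base Case medians, assuming the
# standard relations t = eps_f * L / q and F = a_r * t / eps_f.
eps_f = 1e-4          # flow porosity (-)
a_r = 0.1             # flow-wetted surface area, m^2 per m^3 rock
t_med = 1720.0        # median advective travel time, years
q_med = 3.27e-5       # median canister (Darcy) flux, m/year

F = a_r * t_med / eps_f        # F-ratio, years/m
L = q_med * t_med / eps_f      # implied median path length, m
print(f"F-ratio = {F:.3g} years/m, implied path length ~ {L:.0f} m")
```

The computed F-ratio of about 1.72x10^6 years/m matches the reported median, suggesting the three quoted quantities are mutually consistent under these definitions.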

  12. Impact of Scattering Model on Disdrometer Derived Attenuation Scaling

    Science.gov (United States)

    Zemba, Michael; Luini, Lorenzo; Nessel, James; Riva, Carlo (Compiler)

    2016-01-01

    NASA Glenn Research Center (GRC), the Air Force Research Laboratory (AFRL), and the Politecnico di Milano (POLIMI) are currently entering the third year of a joint propagation study in Milan, Italy utilizing the 20 and 40 GHz beacons of the Alphasat TDP5 Aldo Paraboni scientific payload. The Ka- and Q-band beacon receivers were installed at the POLIMI campus in June of 2014 and provide direct measurements of signal attenuation at each frequency. Collocated weather instrumentation provides concurrent measurement of atmospheric conditions at the receiver; included among these weather instruments is a Thies Clima Laser Precipitation Monitor (optical disdrometer) which records droplet size distributions (DSD) and droplet velocity distributions (DVD) during precipitation events. This information can be used to derive the specific attenuation at frequencies of interest and thereby scale measured attenuation data from one frequency to another. Given the ability to both predict the 40 GHz attenuation from the disdrometer and the 20 GHz timeseries as well as to directly measure the 40 GHz attenuation with the beacon receiver, the Milan terminal is uniquely able to assess these scaling techniques and refine the methods used to infer attenuation from disdrometer data. In order to derive specific attenuation from the DSD, the forward scattering coefficient must be computed. In previous work, this has been done using the Mie scattering model, however, this assumes a spherical droplet shape. The primary goal of this analysis is to assess the impact of the scattering model and droplet shape on disdrometer derived attenuation predictions by comparing the use of the Mie scattering model to the use of the T-matrix method, which does not assume a spherical droplet. In particular, this paper will investigate the impact of these two scattering approaches on the error of the resulting predictions as well as on the relationship between prediction error and rain rate.
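
The DSD-to-specific-attenuation step can be sketched as a numerical integral of extinction cross-section over the drop size distribution. A real implementation would obtain the cross-section from a Mie or T-matrix scattering code; the power-law cross-section and Marshall-Palmer-style DSD below are made-up placeholders, so only the structure of the calculation is meaningful:

```python
import numpy as np

# Sketch of the DSD-to-specific-attenuation step:
#   k [dB/km] = 4.343e3 * sum( sigma_ext(D) * N(D) * dD )
# with sigma_ext in m^2, N(D) in m^-3 mm^-1 and D in mm.
D = np.linspace(0.1, 6.0, 200)        # drop diameter, mm
dD = D[1] - D[0]
N0, Lam = 8000.0, 2.0                 # Marshall-Palmer-style DSD parameters
N = N0 * np.exp(-Lam * D)             # m^-3 mm^-1

def sigma_ext(D_mm, freq_ghz):
    # placeholder extinction cross-section in m^2 (NOT a scattering model)
    return 1e-10 * freq_ghz**1.5 * D_mm**3.5

k_dB = {f: 4.343e3 * np.sum(sigma_ext(D, f) * N * dD) for f in (20.0, 40.0)}
for f, k in sorted(k_dB.items()):
    print(f"{f:.0f} GHz: specific attenuation ~ {k:.2f} dB/km")
```

Frequency scaling then amounts to the ratio of the two specific attenuations computed from the same measured DSD; replacing `sigma_ext` with a Mie versus T-matrix computation is exactly the comparison the paper describes.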

  13. Endogenous Crisis Waves: Stochastic Model with Synchronized Collective Behavior

    Science.gov (United States)

    Gualdi, Stanislao; Bouchaud, Jean-Philippe; Cencetti, Giulia; Tarzia, Marco; Zamponi, Francesco

    2015-02-01

    We propose a simple framework to understand commonly observed crisis waves in macroeconomic agent-based models, which is also relevant to a variety of other physical or biological situations where synchronization occurs. We compute exactly the phase diagram of the model and the location of the synchronization transition in parameter space. Many modifications and extensions can be studied, confirming that the synchronization transition is extremely robust against various sources of noise or imperfections.
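
A synchronization transition of the kind described can be illustrated with a generic coupled-oscillator (Kuramoto) model. This is a stand-in for intuition only, not the authors' macroeconomic agent model: below a critical coupling the order parameter stays near zero, above it a finite fraction of oscillators phase-lock.

```python
import numpy as np

# Generic Kuramoto illustration of a synchronization transition
# (NOT the authors' model). r is the phase-coherence order parameter.
rng = np.random.default_rng(1)
N, dt, steps = 500, 0.05, 2000
omega = rng.normal(0.0, 1.0, N)              # natural frequencies

def final_order_parameter(K):
    theta = rng.uniform(0.0, 2.0 * np.pi, N)
    for _ in range(steps):
        z = np.mean(np.exp(1j * theta))      # complex order parameter
        theta += dt * (omega + K * np.abs(z) * np.sin(np.angle(z) - theta))
    return float(np.abs(np.mean(np.exp(1j * theta))))

r = {K: final_order_parameter(K) for K in (0.5, 4.0)}
for K, val in sorted(r.items()):
    print(f"K = {K}: r = {val:.2f}")
```

For the unit-variance Gaussian frequency distribution used here, the mean-field critical coupling is K_c = 2/(pi*g(0)) ≈ 1.6, so K = 0.5 sits in the incoherent phase and K = 4.0 in the synchronized one.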

  14. Derivation of a GIS-based watershed-scale conceptual model for the St. Jones River Delaware from habitat-scale conceptual models.

    Science.gov (United States)

    Reiter, Michael A; Saintil, Max; Yang, Ziming; Pokrajac, Dragoljub

    2009-08-01

    Conceptual modeling is a useful tool for identifying pathways between drivers, stressors, Valued Ecosystem Components (VECs), and services that are central to understanding how an ecosystem operates. The St. Jones River watershed, DE is a complex ecosystem, and because management decisions must include ecological, social, political, and economic considerations, a conceptual model is a good tool for accommodating the full range of inputs. In 2002, a Four-Component, Level 1 conceptual model was formed for the key habitats of the St. Jones River watershed, but since the habitat level of resolution is too fine for some important watershed-scale issues, we developed a functional watershed-scale model using the existing narrowed habitat-scale models. The narrowed habitat-scale conceptual models and associated matrices developed by Reiter et al. (2006) were combined with data from the 2002 land use/land cover (LULC) GIS-based maps of Kent County in Delaware to assemble a diagrammatic and numerical watershed-scale conceptual model incorporating the calculated weight of each habitat within the watershed. The numerical component of the assembled watershed model was subsequently subjected to the same Monte Carlo narrowing methodology used for the habitat versions to refine the diagrammatic component of the watershed-scale model. The narrowed numerical representation of the model was used to generate forecasts for changes in the parameters "Agriculture" and "Forest", showing that land use changes in these habitats propagated through the model results via the weighting factor. Also, the narrowed watershed-scale conceptual model identified some key parameters upon which to focus research attention and management decisions at the watershed scale. The forecast and simulation results seemed to indicate that the watershed-scale conceptual model does lead to different conclusions than the habitat-scale conceptual models for some issues at the larger watershed scale.
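
The GIS-weighting step described above, combining habitat-scale matrices into a watershed-scale model by land-cover area weight, can be sketched as a weighted matrix sum. The habitat names, weights, and matrix values below are hypothetical, chosen only to show the mechanics:

```python
import numpy as np

# Hypothetical land-cover area fractions for three habitats (sum to 1)
habitat_weights = {"agriculture": 0.45, "forest": 0.30, "wetland": 0.25}

# Hypothetical habitat-scale interaction matrices (e.g., driver x VEC)
habitat_matrices = {
    "agriculture": np.array([[0.8, 0.2], [0.1, 0.6]]),
    "forest":      np.array([[0.3, 0.5], [0.4, 0.2]]),
    "wetland":     np.array([[0.1, 0.7], [0.9, 0.3]]),
}

# Watershed-scale matrix = area-weighted sum of habitat-scale matrices
watershed = sum(w * habitat_matrices[h] for h, w in habitat_weights.items())
print(watershed)
```

A land-use change then enters the watershed model simply by shifting the weights (e.g., moving area from "forest" to "agriculture"), which is how forecasts for those parameters propagate through the weighting factor.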

  15. Aflatoxin levels in sunflower seeds and cakes collected from micro- and small-scale sunflower oil processors in Tanzania.

    Science.gov (United States)

    Mmongoyo, Juma A; Wu, Felicia; Linz, John E; Nair, Muraleedharan G; Mugula, Jovin K; Tempelman, Robert J; Strasburg, Gale M

    2017-01-01

    Aflatoxin, a mycotoxin found commonly in maize and peanuts worldwide, is associated with liver cancer, acute toxicosis, and growth impairment in humans and animals. In Tanzania, sunflower seeds are a source of snacks, cooking oil, and animal feed. These seeds are a potential source of aflatoxin contamination. However, reports on aflatoxin contamination in sunflower seeds and cakes are scarce. The objective of the current study was to determine total aflatoxin concentrations in sunflower seeds and cakes from small-scale oil processors across Tanzania. Samples of sunflower seeds (n = 90) and cakes (n = 92) were collected across two years, and analyzed for total aflatoxin concentrations using a direct competitive enzyme-linked immunosorbent assay (ELISA). For seed samples collected June-August 2014, the highest aflatoxin concentrations were from Dodoma (1.7-280.6 ng/g), Singida (1.4-261.8 ng/g), and Babati-Manyara (1.8-162.0 ng/g). The highest concentrations for cakes were from Mbeya (2.8-97.7 ng/g), Dodoma (1.9-88.2 ng/g), and Singida (2.0-34.3 ng/g). For seed samples collected August-October 2015, the highest concentrations were from Morogoro (2.8-662.7 ng/g), Singida (1.6-217.6 ng/g) and Mbeya (1.4-174.2 ng/g). The highest concentrations for cakes were from Morogoro (2.7-536.0 ng/g), Dodoma (1.4-598.4 ng/g) and Singida (3.2-52.8 ng/g). In summary, humans and animals are potentially at high risk of exposure to aflatoxins through sunflower seeds and cakes from micro-scale millers in Tanzania; and location influences risk.

  16. Small-Scale Modeling of Waves and Floes in the Marginal Ice Zone

    Science.gov (United States)

    Orzech, M.; Shi, F.; Calantoni, J.; Bateman, S. P.; Veeramony, J.

    2014-12-01

    We are conducting a model-based investigation into the small-scale (O(m)) physics of wave-ice floe interaction in the marginal ice zone (MIZ), in order to test and improve parameterizations utilized by large-scale climate models. The presentation will describe the development and validation of a coupled system to track the wave-forced motion of floating objects (collections of bonded particles) and the concurrent effects of the moving objects on the surrounding fluid. NHWAVE, a fully dispersive wave model with a vertical sigma-coordinate, is extended to model moving objects by including vertical boundary-fitted meshing and a horizontal immersed boundary method. LIGGGHTS, a discrete element granular particle-tracking simulator, is configured to include realistic bonding forces between elements and incorporate velocity and pressure gradient effects from the fluid model. Following an overview of the coupled system, validation results will be presented for the standalone wave and ice models. For NHWAVE, model estimates of surface wave patterns generated by oscillating surface objects are compared to LIDAR measurements from corresponding laboratory experiments. For LIGGGHTS, the stress-strain response is measured for collections of bonded particles under tension and/or compression, then compared to available lab and field data. Results will also be presented from simplified MIZ simulations with the coupled system, in which waves pass through groups of rigid ice blocks and their refraction, diffraction, and reflection are measured. Finally, we will provide a preview of an upcoming series of targeted virtual experiments in which momentum/energy exchange between waves and ice floes is measured under varied conditions.

  17. Linking Fine-Scale Observations and Model Output with Imagery at Multiple Scales

    Science.gov (United States)

    Sadler, J.; Walthall, C. L.

    2014-12-01

    The development and implementation of a system for seasonal worldwide agricultural yield estimates is underway with the international Group on Earth Observations GeoGLAM project. GeoGLAM includes a research component to continually improve and validate its algorithms. There is a history of field measurement campaigns going back decades to draw upon for ways of linking surface measurements and model results with satellite observations. Ground-based, in-situ measurements collected by interdisciplinary teams include yields, model inputs and factors affecting scene radiation. Data that is comparable across space and time, with careful attention to calibration, is essential for the development and validation of agricultural applications of remote sensing. Data management to ensure stewardship, availability and accessibility of the data is best accomplished when considered an integral part of the research. The expense and logistical challenges of field measurement campaigns can be cost-prohibitive, and because of short funding cycles for research, access to consistent, stable study sites can be lost. The use of dedicated staff for baseline data needed by multiple investigators, and conducting measurement campaigns using existing measurement networks such as the USDA Long Term Agroecosystem Research network, can fulfill these needs and ensure long-term access to study sites.

  18. Modelling catchment non-stationarity - multi-scale modelling and data assimilation

    Science.gov (United States)

    Wheater, H. S.; Bulygina, N.; Ballard, C. E.; Jackson, B. M.; McIntyre, N.

    2012-12-01

    Modelling environmental change is in many senses a 'Grand Challenge' for hydrology, but poses major methodological challenges for hydrological models. Conceptual models represent complex processes in a simplified and spatially aggregated manner; typically parameters have no direct relationship to measurable physical properties. Calibration using observed data results in parameter equifinality, unless highly parsimonious model structures are employed. Use of such models to simulate effects of catchment non-stationarity is essentially speculative, unless attention is given to the analysis of parameter temporal variability in a non-stationary observation record. Black-box models are similarly constrained by the information content of the observational data. In contrast, distributed physics-based models provide a stronger theoretical basis for the prediction of change. However, while such models have parameters that are in principle measurable, in practice, for catchment-scale application, the measurement scale is inconsistent with the scale of model representation, the costs associated with such an exercise are high, and key properties are spatially variable, often strongly non-linear, and highly uncertain. In this paper we present a framework for modelling catchment non-stationarity that integrates information (with uncertainty) from multiple models and data sources. The context is the need to model the effects of agricultural land use change at multiple scales. A detailed UK multi-scale and multi-site experimental programme has provided data to support high resolution physics-based models of runoff processes that can, for example, represent the effects of soil structural change (due to grazing densities or trafficking), localised tree planting and drainage. Such models necessarily have high spatial resolution (1m in the horizontal plane, 1 cm in the vertical in this case), and hence can be applied at the scale of a field or hillslope element, but would be

  19. Multi-scale salient feature extraction on mesh models

    KAUST Repository

    Yang, Yongliang

    2012-01-01

    We present a new method of extracting multi-scale salient features on meshes. It is based on robust estimation of curvature on multiple scales. The coincidence between salient feature and the scale of interest can be established straightforwardly, where detailed feature appears on small scale and feature with more global shape information shows up on large scale. We demonstrate this multi-scale description of features accords with human perception and can be further used for several applications such as feature classification and viewpoint selection. Experiments show that our method as a multi-scale analysis tool is very helpful for studying 3D shapes.
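
The small-scale/large-scale behavior described above can be illustrated on a simpler object than a mesh: a closed 2D curve. This toy analogue (our construction, not the paper's method) smooths the curve with Gaussians of increasing width and measures curvature at each scale; fine ripples dominate the small scale, while only the global circular shape survives at the large scale.

```python
import numpy as np

# Toy analogue of multi-scale salient-feature detection on a 2D curve.
t = np.linspace(0.0, 2.0 * np.pi, 400, endpoint=False)
x = np.cos(t) + 0.05 * np.cos(9 * t)     # circle with fine ripples
y = np.sin(t) + 0.05 * np.sin(9 * t)

def smooth(sig, scale):
    # periodic Gaussian smoothing via FFT
    n = len(sig)
    k = np.exp(-0.5 * ((np.arange(n) - n // 2) / scale) ** 2)
    k /= k.sum()
    return np.real(np.fft.ifft(np.fft.fft(sig) * np.fft.fft(np.fft.ifftshift(k))))

def curvature(xs, ys):
    # signed curvature of a parametric curve (parametrization-invariant)
    dx, dy = np.gradient(xs), np.gradient(ys)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return (dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5

ranges = {}
for scale in (1.0, 20.0):
    kappa = curvature(smooth(x, scale), smooth(y, scale))
    ranges[scale] = kappa.max() - kappa.min()
    print(f"scale {scale:>4}: curvature range {ranges[scale]:.2f}")
```

The curvature range collapses at the large scale because the smoothed curve approaches a circle of constant curvature; on a mesh the same idea applies with discrete curvature estimators in place of the closed-form derivative.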

  20. URBAN MORPHOLOGY FOR HOUSTON TO DRIVE MODELS-3/CMAQ AT NEIGHBORHOOD SCALES

    Science.gov (United States)

    Air quality simulation models applied at various horizontal scales require different degrees of treatment in the specifications of the underlying surfaces. As we model neighborhood scales (1 km horizontal grid spacing), the representation of urban morphological structures (e....

  1. Acting in solidarity : Testing an extended dual pathway model of collective action by bystander group members

    NARCIS (Netherlands)

    Saab, Rim; Tausch, Nicole; Spears, Russell; Cheung, Wing-Yee

    We examined predictors of collective action among bystander group members in solidarity with a disadvantaged group by extending the dual pathway model of collective action, which proposes one efficacy-based and one emotion-based path to collective action (Van Zomeren, Spears, Fischer, & Leach,

  2. Protesters as "passionate economists" : A dynamic dual pathway model of approach coping with collective disadvantage

    NARCIS (Netherlands)

    van Zomeren, Martijn; Leach, Colin Wayne; Spears, Russell

    To explain the psychology behind individuals' motivation to participate in collective action against collective disadvantage (e.g., protest marches), the authors introduce a dynamic dual pathway model of approach coping that integrates many common explanations of collective action (i.e., group

  3. Modelling aggregation on the large scale and regularity on the small scale in spatial point pattern datasets

    DEFF Research Database (Denmark)

    Lavancier, Frédéric; Møller, Jesper

    We consider a dependent thinning of a regular point process with the aim of obtaining aggregation on the large scale and regularity on the small scale in the resulting target point process of retained points. Various parametric models for the underlying processes are suggested and the properties ...

  4. Macro and micro-scale modeling of polyurethane foaming processes

    Science.gov (United States)

    Geier, S.; Piesche, M.

    2014-05-01

    Mold filling processes of refrigerators, car dashboards or steering wheels are some of the many application areas of polyurethane foams. The design of these processes still mainly relies on empirical approaches. Therefore, we first developed a modeling approach describing mold filling processes in complex geometries. Hence, it is possible to study macroscopic foam flow and to identify voids. The final properties of polyurethane foams may vary significantly depending on the location within a product. Additionally, the local foam structure influences foam properties like thermal conductivity or impact strength significantly. It is neither possible nor would it be efficient to model complex geometries completely on bubble scale. For this reason, we developed a modeling approach describing the bubble growth and the evolution of the foam structure for a limited number of bubbles in a representative volume. Finally, we coupled our two simulation approaches by introducing tracer particles into our mold filling simulations. Through this coupling, a basis for studying the evolution of the local foam structure in complex geometries is provided.

  5. Performance Analysis, Modeling and Scaling of HPC Applications and Tools

    Energy Technology Data Exchange (ETDEWEB)

    Bhatele, Abhinav [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2016-01-13

    Efficient use of supercomputers at DOE centers is vital for maximizing system throughput, minimizing energy costs and enabling science breakthroughs faster. This requires complementary efforts along several directions to optimize the performance of scientific simulation codes and the underlying runtimes and software stacks. This in turn requires providing scalable performance analysis tools and modeling techniques that can provide feedback to physicists and computer scientists developing the simulation codes and runtimes respectively. The PAMS project is using time allocations on supercomputers at ALCF, NERSC and OLCF to further the goals described above by performing research along the following fronts: 1. Scaling Study of HPC applications; 2. Evaluation of Programming Models; 3. Hardening of Performance Tools; 4. Performance Modeling of Irregular Codes; and 5. Statistical Analysis of Historical Performance Data. We are a team of computer and computational scientists funded by both DOE/NNSA and DOE/ASCR programs such as ECRP, XStack (Traleika Glacier, PIPER), ExaOSR (ARGO), SDMAV II (MONA) and PSAAP II (XPACC). This allocation will enable us to study big data issues when analyzing performance on leadership computing class systems and to assist the HPC community in making the most effective use of these resources.

  6. Scaling exponents in space plasmas: a fractional Levy model

    Science.gov (United States)

    Watkins, N. W.; Credgington, D.; Hnat, B.; Chapman, S. C.; Freeman, M. P.; Greenhough, J.

    Mandelbrot introduced the concept of fractals to describe the non-Euclidean shape of many aspects of the natural world. In the time series context, he proposed the use of fractional Brownian motion (fBm) to model non-negligible temporal persistence (the "Joseph Effect") and Lévy flights to quantify large discontinuities (the "Noah Effect"). In space physics, these effects are manifested as intermittency and long-range correlation, well-established features of geomagnetic indices and their solar wind drivers. In order to capture and quantify the Noah and Joseph effects in one compact model, we propose the application of a bridge - fractional Lévy motion (fLm) - to space physics. We perform an initial evaluation of some previous scaling results in this paradigm and show how fLm can model the previously observed exponents (physics/0509058, in press, Space Science Reviews). We discuss the similarities and differences between fLm and ambivalent processes based on fractional kinetic equations (e.g. Brockmann et al., Nature, 2006) and suggest some new directions for the future.
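
A symmetric fLm sample path can be sketched by filtering heavy-tailed alpha-stable increments (the Noah effect) through a power-law memory kernel (the Joseph effect). This is an illustrative construction under our own parameter choices, not the authors' code; the truncated moving-average approximation is a standard discretization of the Mandelbrot-Van Ness-type integral.

```python
import numpy as np

rng = np.random.default_rng(2)

def stable_increments(alpha, n):
    """Symmetric alpha-stable noise via the Chambers-Mallows-Stuck method."""
    V = rng.uniform(-np.pi / 2, np.pi / 2, n)
    W = rng.exponential(1.0, n)
    return (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
            * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))

alpha, H, n = 1.5, 0.8, 2000
d = H - 1.0 / alpha                     # memory exponent; d = 0 gives plain Levy motion
noise = stable_increments(alpha, n)     # heavy tails (Noah effect)
kernel = np.arange(1, n + 1) ** d       # truncated (t - s)^d moving-average kernel
flm = np.convolve(noise, kernel)[:n]    # approximate fLm sample path
print(f"d = {d:.3f}, max |increment| = {np.abs(noise).max():.1f}")
```

Setting alpha = 2 recovers fBm (d = H - 1/2), while d = 0 removes the memory and leaves an ordinary Lévy flight, which is how the two Mandelbrot effects are decoupled in this one model.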

  7. The Collective Impact Model and Its Potential for Health Promotion: Overview and Case Study of a Healthy Retail Initiative in San Francisco

    Science.gov (United States)

    Flood, Johnna; Minkler, Meredith; Lavery, Susana Hennessey; Estrada, Jessica; Falbe, Jennifer

    2015-01-01

    As resources for health promotion become more constricted, it is increasingly important to collaborate across sectors, including the private sector. Although many excellent models for cross-sector collaboration have shown promise in the health field, collective impact (CI), an emerging model for creating larger scale change, has yet to receive…

  8. Integrative modeling reveals the principles of multi-scale chromatin boundary formation in human nuclear organization.

    Science.gov (United States)

    Moore, Benjamin L; Aitken, Stuart; Semple, Colin A

    2015-05-27

    Interphase chromosomes adopt a hierarchical structure, and recent data have characterized their chromatin organization at very different scales, from sub-genic regions associated with DNA-binding proteins at the order of tens or hundreds of bases, through larger regions with active or repressed chromatin states, up to multi-megabase-scale domains associated with nuclear positioning, replication timing and other qualities. However, we have lacked detailed, quantitative models to understand the interactions between these different strata. Here we collate large collections of matched locus-level chromatin features and Hi-C interaction data, representing higher-order organization, across three human cell types. We use quantitative modeling approaches to assess whether locus-level features are sufficient to explain higher-order structure, and identify the most influential underlying features. We identify structurally variable domains between cell types and examine the underlying features to discover a general association with cell-type-specific enhancer activity. We also identify the most prominent features marking the boundaries of two types of higher-order domains at different scales: topologically associating domains and nuclear compartments. We find parallel enrichments of particular chromatin features for both types, including features associated with active promoters and the architectural proteins CTCF and YY1. We show that integrative modeling of large chromatin dataset collections using random forests can generate useful insights into chromosome structure. The models produced recapitulate known biological features of the cell types involved, allow exploration of the antecedents of higher-order structures and generate testable hypotheses for further experimental studies.

  9. A large-scale forest landscape model incorporating multi-scale processes and utilizing forest inventory data

    Science.gov (United States)

    Wen J. Wang; Hong S. He; Martin A. Spetich; Stephen R. Shifley; Frank R. Thompson III; David R. Larsen; Jacob S. Fraser; Jian. Yang

    2013-01-01

    Two challenges confronting forest landscape models (FLMs) are how to simulate fine, stand-scale processes while making large-scale (i.e., >10^7 ha) simulation possible, and how to take advantage of extensive forest inventory data such as U.S. Forest Inventory and Analysis (FIA) data to initialize and constrain model parameters. We present the LANDIS PRO model that...

  10. A Plume Scale Model of Chlorinated Ethene Degradation

    DEFF Research Database (Denmark)

    Murray, Alexandra Marie; Broholm, Mette Martina; Badin, Alice

    Although much is known about the biotic degradation pathways of chlorinated solvents, application of the degradation mechanism at the field scale is still challenging [1]. There are many microbial kinetic models to describe the reductive dechlorination in soil and groundwater, however none of them...... leaked from a dry cleaning facility, and a 2 km plume extends from the source in an unconfined aquifer of homogenous fluvio-glacial sand. The area has significant iron deposits, most notably pyrite, which can abiotically degrade chlorinated ethenes. The source zone underwent thermal (steam) remediation...... in 2006; the plume has received no treatment. The evolution of the site has been intensely documented since before the source treatment. This includes microbial analysis – Dehalococcoides sp. and vcrA genes have been identified and quantified by qPCR – and dual carbon-chlorine isotope analysis [1...

  11. Modeling of large-scale oxy-fuel combustion processes

    DEFF Research Database (Denmark)

    Yin, Chungen

    2012-01-01

    A number of studies have been conducted in order to implement oxy-fuel combustion with flue gas recycle in conventional utility boilers as part of the effort towards carbon capture and storage. However, combustion under oxy-fuel conditions is significantly different from conventional air-fuel firing......, among which radiative heat transfer under oxy-fuel conditions is one of the fundamental issues. This paper demonstrates the nongray-gas effects in modeling of large-scale oxy-fuel combustion processes. Oxy-fuel combustion of natural gas in a 609MW utility boiler is numerically studied, in which...... calculation of the oxy-fuel WSGGM remarkably over-predicts the radiative heat transfer to the furnace walls and under-predicts the gas temperature at the furnace exit plane, which also results in a higher degree of incomplete combustion in the gray calculation. Moreover, the gray and non-gray calculations of the same...

  12. Scale Adaptive Simulation Model for the Darrieus Wind Turbine

    Science.gov (United States)

    Rogowski, K.; Hansen, M. O. L.; Maroński, R.; Lichota, P.

    2016-09-01

    Accurate prediction of aerodynamic loads for the Darrieus wind turbine using more or less complex aerodynamic models is still a challenge. One of the problems is the small amount of experimental data available to validate the numerical codes. The major objective of the present study is to examine the scale adaptive simulation (SAS) approach for performance analysis of a one-bladed Darrieus wind turbine working at a tip speed ratio of 5 and at a blade Reynolds number of 40 000. The three-dimensional incompressible unsteady Navier-Stokes equations are used. Numerical results of aerodynamic loads and wake velocity profiles behind the rotor are compared with experimental data taken from literature. The level of agreement between CFD and experimental results is reasonable.

  13. How best to collect demographic data for PVA models

    Czech Academy of Sciences Publication Activity Database

    Münzbergová, Zuzana; Ehrlén, J.

    2005-01-01

    Roč. 42, - (2005), s. 1115-1120 ISSN 0021-8901 R&D Projects: GA ČR GA206/02/0590; GA AV ČR KSK6005114 Institutional research plan: CEZ:AV0Z60050516 Keywords : PVA * demography * model Subject RIV: EF - Botanics Impact factor: 4.594, year: 2005

  14. A methodology for ecosystem-scale modeling of selenium

    Science.gov (United States)

    Presser, T.S.; Luoma, S.N.

    2010-01-01

    The main route of exposure for selenium (Se) is dietary, yet regulations lack biologically based protocols for evaluations of risk. We propose here an ecosystem-scale model that conceptualizes and quantifies the variables that determine how Se is processed from water through diet to predators. This approach uses biogeochemical and physiological factors from laboratory and field studies and considers loading, speciation, transformation to particulate material, bioavailability, bioaccumulation in invertebrates, and trophic transfer to predators. Validation of the model is through data sets from 29 historic and recent field case studies of Se-exposed sites. The model links Se concentrations across media (water, particulate, tissue of different food web species). It can be used to forecast toxicity under different management or regulatory proposals or as a methodology for translating a fish-tissue (or other predator tissue) Se concentration guideline to a dissolved Se concentration. The model illustrates some critical aspects of implementing a tissue criterion: 1) the choice of fish species determines the food web through which Se should be modeled, 2) the choice of food web is critical because the particulate material to prey kinetics of bioaccumulation differs widely among invertebrates, 3) the characterization of the type and phase of particulate material is important to quantifying Se exposure to prey through the base of the food web, and 4) the metric describing partitioning between particulate material and dissolved Se concentrations allows determination of a site-specific dissolved Se concentration that would be responsible for that fish body burden in the specific environment. The linked approach illustrates that environmentally safe dissolved Se concentrations will differ among ecosystems depending on the ecological pathways and biogeochemical conditions in that system. Uncertainties and model sensitivities can be directly illustrated by varying exposure
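The linked, media-to-media structure described above (dissolved water column to particulate material to invertebrate prey to fish tissue) can be sketched numerically. This is a minimal illustration of the chaining of a partitioning coefficient (Kd) with trophic transfer factors (TTFs); the parameter names follow the abstract's description, but the values and function names below are hypothetical, not taken from the 29 case studies.

```python
def fish_se(c_water_ug_L, kd_L_kg, ttf_invert, ttf_fish):
    """Forward chain of an ecosystem-scale Se model sketch:
    dissolved -> particulate -> invertebrate -> fish tissue.
    Kd (L/kg) links dissolved Se (ug/L) to particulate Se (ug/g dry wt);
    the TTFs are dimensionless trophic transfer factors."""
    c_particulate = c_water_ug_L * kd_L_kg / 1000.0  # ug/g dry weight
    c_invert = c_particulate * ttf_invert            # invertebrate tissue
    return c_invert * ttf_fish                       # predator tissue

def allowed_dissolved(fish_criterion_ug_g, kd_L_kg, ttf_invert, ttf_fish):
    """Inverse translation described in the abstract: a fish-tissue Se
    guideline back to a site-specific dissolved Se concentration."""
    return fish_criterion_ug_g * 1000.0 / (kd_L_kg * ttf_invert * ttf_fish)
```

Because Kd and the invertebrate TTF differ widely among ecosystems and food webs, the same fish-tissue criterion maps to very different dissolved concentrations, which is the central point of the linked approach.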

  15. Puget Sound Dissolved Oxygen Modeling Study: Development of an Intermediate-Scale Hydrodynamic Model

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Zhaoqing; Khangaonkar, Tarang; Labiosa, Rochelle G.; Kim, Taeyun

    2010-11-30

    The Washington State Department of Ecology contracted with Pacific Northwest National Laboratory to develop an intermediate-scale hydrodynamic and water quality model to study dissolved oxygen and nutrient dynamics in Puget Sound and to help define potential Puget Sound-wide nutrient management strategies and decisions. Specifically, the project is expected to help determine 1) if current and potential future nitrogen loadings from point and non-point sources are significantly impairing water quality at a large scale and 2) what level of nutrient reductions is necessary to reduce or eliminate human impacts on dissolved oxygen levels in the sensitive areas. In this study, an intermediate-scale hydrodynamic model of Puget Sound was developed to simulate the hydrodynamics of Puget Sound and the Northwest Straits for the year 2006. The model was constructed using the unstructured Finite Volume Coastal Ocean Model. The overall model grid resolution within Puget Sound in its present configuration is about 880 m. The model was driven by tides, river inflows, and meteorological forcing (wind and net heat flux) and simulated tidal circulation, temperature, and salinity distributions in Puget Sound. The model was validated against observed data of water surface elevation, velocity, temperature, and salinity at various stations within the study domain. Model validation indicated that the model simulates tidal elevations and currents in Puget Sound well and reproduces the general patterns of the temperature and salinity distributions.

  16. Large scale solar district heating. Evaluation, modelling and designing

    Energy Technology Data Exchange (ETDEWEB)

    Heller, A.

    2000-07-01

    The main objective of the research was to evaluate large-scale solar heating connected to district heating (CSDHP), to build up a simulation tool and to demonstrate the application of the tool for design studies and on a local energy planning case. The evaluation of the central solar heating technology is based on measurements on the case plant in Marstal, Denmark, and on published and unpublished data for other, mainly Danish, CSDHP plants. Evaluations of the thermal, economic and environmental performance are reported, based on the experience from the last decade. The measurements from the Marstal case are analysed, experiences extracted and minor improvements to the plant design proposed. For the detailed design and energy planning of CSDHPs, a computer simulation model is developed and validated against the measurements from the Marstal case. The final model is then generalised to a 'generic' model for CSDHPs. The meteorological reference data, the Danish Reference Year, is applied to find the mean performance for the plant designs. To find the expected variability of the thermal performance of such plants, a method is proposed in which data from a year with poor solar irradiation and a year with strong solar irradiation are applied. Equipped with the simulation tool, design studies are carried out, ranging from parameter analysis and energy planning for a new settlement to a proposal for combining flat-plate solar collectors with high-performance collectors, exemplified by a trough collector. The methodology of utilising computer simulation proved to be a cheap and relevant tool in the design of future solar heating plants. The thesis also revealed the need for developing computer models for more advanced solar collector designs and especially for the control operation of CSDHPs. In the final chapter the CSDHP technology is put into perspective with respect to other possible technologies to assess the relevance of the application

  17. Land surface evapotranspiration modelling at the regional scale

    Science.gov (United States)

    Raffelli, Giulia; Ferraris, Stefano; Canone, Davide; Previati, Maurizio; Gisolo, Davide; Provenzale, Antonello

    2017-04-01

    Climate change has relevant implications for the environment, water resources and human life in general. The observed increase in mean air temperature, in addition to a more frequent occurrence of extreme events such as droughts, may have a severe effect on the hydrological cycle. Besides climate change, land use changes are assumed to be another relevant component of global change in terms of impacts on terrestrial ecosystems: socio-economic changes have led to conversions between meadows and pastures and in most cases to a complete abandonment of grasslands. Water is subject to different physical processes, among which evapotranspiration (ET) is one of the most significant. In fact, ET plays a key role in estimating crop growth, water demand and irrigation water management, so estimating values of ET can be crucial for water resource planning, irrigation requirements and agricultural production. Potential evapotranspiration (PET) is the amount of evaporation that occurs when a sufficient water source is available. It can be estimated knowing only temperatures (mean, maximum and minimum) and solar radiation. Actual evapotranspiration (AET) is instead the real quantity of water consumed by soil and vegetation; it is obtained as a fraction of PET. The aim of this work was to apply a simplified hydrological model to calculate AET for the province of Turin (Italy) in order to assess the water content and estimate the groundwater recharge at a regional scale. The soil is seen as a bucket (FAO56 model, Allen et al., 1998) made of different layers, which interact with water and vegetation. The water balance is given by precipitation (both rain and snow) and dew as positive inputs, while AET, runoff and drainage represent the water leaving the soil. The difference between inputs and outputs is the change in water stock. Model data inputs are: soil characteristics (percentages of clay, silt, sand, rocks and organic matter); soil depth; the wilting point (i.e. the…
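The bucket logic described in the abstract (inputs from precipitation and dew; outputs as AET, runoff and drainage; the residual being the stored water) can be sketched as a single daily step. This is a one-layer simplification for illustration only; the study uses a multi-layer FAO56-style scheme, and the AET-scaling rule below is an assumption, not the paper's exact formulation.

```python
def daily_bucket(storage, rain_snowmelt, dew, pet, capacity, wilting_point):
    """One daily step of a single-layer bucket water balance (sketch).
    All quantities in mm. AET is taken as PET scaled by the relative
    plant-available water between wilting point and capacity."""
    storage += rain_snowmelt + dew                       # positive inputs
    avail = max(storage - wilting_point, 0.0)            # plant-available water
    aet = min(pet * min(avail / (capacity - wilting_point), 1.0), avail)
    storage -= aet
    runoff_drainage = max(storage - capacity, 0.0)       # excess leaves the bucket
    storage -= runoff_drainage
    return storage, aet, runoff_drainage
```

Iterating this step over a daily climate series and summing the runoff/drainage term gives the regional recharge estimate the abstract refers to.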

  18. Analysis of effectiveness of possible queuing models at gas stations using the large-scale queuing theory

    Directory of Open Access Journals (Sweden)

    Slaviša M. Ilić

    2011-10-01

    This paper analyzes the effectiveness of possible models for queuing at gas stations, using a mathematical model from large-scale queuing theory. Based on actual data collected and a statistical analysis of the expected intensity of vehicle arrivals and queuing at gas stations, the real queuing process was modeled mathematically and certain parameters quantified, in order to identify the weaknesses of the existing models and the possible benefits of an automated queuing model.
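The standard quantities such an analysis rests on (utilization, waiting probability, mean queue length and mean wait) come from the M/M/c queuing model. A minimal sketch follows; the arrival rate, service rate and pump count used in the test are illustrative, not the paper's measured data.

```python
from math import factorial

def mmc_metrics(lam, mu, c):
    """Steady-state metrics of an M/M/c queue (Erlang C formula).
    lam: arrival rate, mu: service rate per server, c: servers.
    Requires lam < c*mu for stability."""
    rho = lam / (c * mu)              # per-server utilization
    a = lam / mu                      # offered load in Erlangs
    s = sum(a**k / factorial(k) for k in range(c))
    # Erlang C: probability an arriving vehicle must wait
    p_wait = (a**c / factorial(c)) / ((1 - rho) * s + a**c / factorial(c))
    lq = p_wait * rho / (1 - rho)     # mean number waiting in queue
    wq = lq / lam                     # mean wait, by Little's law
    return p_wait, lq, wq
```

Comparing these metrics across candidate configurations (more pumps, faster service, automated payment) is how the relative effectiveness of queuing models can be quantified.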

  19. Consistency between hydrological models and field observations: Linking processes at the hillslope scale to hydrological responses at the watershed scale

    Science.gov (United States)

    Clark, M.P.; Rupp, D.E.; Woods, R.A.; Tromp-van Meerveld, H.J.; Peters, N.E.; Freer, J.E.

    2009-01-01

    The purpose of this paper is to identify simple connections between observations of hydrological processes at the hillslope scale and observations of the response of watersheds following rainfall, with a view to building a parsimonious model of catchment processes. The focus is on the well-studied Panola Mountain Research Watershed (PMRW), Georgia, USA. Recession analysis of discharge Q shows that while the relationship between dQ/dt and Q is approximately consistent with a linear reservoir for the hillslope, there is a deviation from linearity that becomes progressively larger with increasing spatial scale. To account for these scale differences conceptual models of streamflow recession are defined at both the hillslope scale and the watershed scale, and an assessment made as to whether models at the hillslope scale can be aggregated to be consistent with models at the watershed scale. Results from this study show that a model with parallel linear reservoirs provides the most plausible explanation (of those tested) for both the linear hillslope response to rainfall and non-linear recession behaviour observed at the watershed outlet. In this model each linear reservoir is associated with a landscape type. The parallel reservoir model is consistent with both geochemical analyses of hydrological flow paths and water balance estimates of bedrock recharge. Overall, this study demonstrates that standard approaches of using recession analysis to identify the functional form of storage-discharge relationships identify model structures that are inconsistent with field evidence, and that recession analysis at multiple spatial scales can provide useful insights into catchment behaviour. Copyright © 2008 John Wiley & Sons, Ltd.
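The key aggregation effect the abstract describes, that individually linear reservoirs sum to a non-linear watershed recession, is easy to demonstrate. The sketch below drains parallel linear reservoirs (Q_i = k_i S_i) with an explicit Euler step; the storages and rate constants are illustrative choices, not PMRW calibration values.

```python
def parallel_reservoir_recession(storages, ks, dt=1.0, steps=100):
    """Combined outflow series from parallel linear reservoirs,
    each obeying Q_i = k_i * S_i (so dQ_i/dt = -k_i * Q_i alone)."""
    S = [float(s) for s in storages]
    flows = []
    for _ in range(steps):
        Qi = [k * s for k, s in zip(ks, S)]
        flows.append(sum(Qi))                       # watershed-outlet flow
        S = [s - q * dt for s, q in zip(S, Qi)]     # drain each reservoir
    return flows
```

A single reservoir gives a constant step-to-step recession ratio (linear dQ/dt vs Q); mixing a fast and a slow reservoir makes that ratio drift over the recession, reproducing the progressively non-linear behaviour observed at larger spatial scales.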

  20. Impact of Spatial Scale on Calibration and Model Output for a Grid-based SWAT Model

    Science.gov (United States)

    Pignotti, G.; Vema, V. K.; Rathjens, H.; Raj, C.; Her, Y.; Chaubey, I.; Crawford, M. M.

    2014-12-01

    The traditional implementation of the Soil and Water Assessment Tool (SWAT) model utilizes common landscape characteristics known as hydrologic response units (HRUs). Discretization into HRUs provides a simple, computationally efficient framework for simulation, but also represents a significant limitation of the model as spatial connectivity between HRUs is ignored. SWATgrid, a newly developed, distributed version of SWAT, provides modified landscape routing via a grid, overcoming these limitations. However, the current implementation of SWATgrid has significant computational overhead, which effectively precludes traditional calibration and limits the total number of grid cells in a given modeling scenario. Moreover, as SWATgrid is a relatively new modeling approach, it remains largely untested with little understanding of the impact of spatial resolution on model output. The objective of this study was to determine the effects of user-defined input resolution on SWATgrid predictions in the Upper Cedar Creek Watershed (near Auburn, IN, USA). Original input data, nominally at 30 m resolution, was rescaled for a range of resolutions between 30 and 4,000 m. A 30 m traditional SWAT model was developed as the baseline for model comparison. Monthly calibration was performed, and the calibrated parameter set was then transferred to all other SWAT and SWATgrid models to isolate the effects of resolution on prediction uncertainty relative to the baseline. Model output was evaluated with respect to stream flow at the outlet and water quality parameters. Additionally, output of SWATgrid models was compared to output of traditional SWAT models at each resolution, utilizing the same scaled input data. A secondary objective considered the effect of scale on calibrated parameter values, where each standard SWAT model was calibrated independently, and parameters were transferred to SWATgrid models at equivalent scales. For each model, computational requirements were evaluated

  1. Development and testing of watershed-scale models for poorly drained soils

    Science.gov (United States)

    Glenn P. Fernandez; George M. Chescheir; R. Wayne Skaggs; Devendra M. Amatya

    2005-01-01

    Watershed-scale hydrology and water quality models were used to evaluate the cumulative impacts of land use and management practices on downstream hydrology and nitrogen loading of poorly drained watersheds. Field-scale hydrology and nutrient dynamics are predicted by DRAINMOD in both models. In the first model (DRAINMOD-DUFLOW), field-scale predictions are coupled...

  2. Evaluation of drought propagation in an ensemble mean of large-scale hydrological models

    NARCIS (Netherlands)

    Loon, van A.F.; Huijgevoort, van M.H.J.; Lanen, van H.A.J.

    2012-01-01

    Hydrological drought is increasingly studied using large-scale models. It is, however, not certain whether large-scale models reproduce the development of hydrological drought correctly. The pressing question is how well large-scale models simulate the propagation from meteorological to hydrological drought

  3. Photogrammetric Recording and Reconstruction of Town Scale Models - the Case of the Plan-Relief of Strasbourg

    Science.gov (United States)

    Macher, H.; Grussenmeyer, P.; Landes, T.; Halin, G.; Chevrier, C.; Huyghe, O.

    2017-08-01

    The French collection of Plans-Reliefs, scale models of fortified towns, constitutes a precious testimony of the history of France. The aim of the URBANIA project is the valorisation and diffusion of this heritage through the creation of virtual models. The town scale model of Strasbourg at 1/600, currently exhibited in the Historical Museum of Strasbourg, was selected as a case study. In this paper, the photogrammetric recording of this scale model is first presented. The acquisition protocol as well as the data post-processing are detailed. Then, the modelling of the city, and more specifically of building blocks, is investigated. Based on point clouds of the scale model, the extraction of roof elements is considered. It deals first with the segmentation of the point cloud into building blocks. Then, for each block, points belonging to roofs are identified and the extraction of chimney point clouds as well as roof ridges and roof planes is performed. Finally, the 3D parametric modelling of the building blocks is studied by considering roof polygons and polylines describing chimneys as input. As future work, the semantic enrichment and the potential usage scenarios of the scale model are envisaged.

  4. Modeling of a lot scale rainwater tank system in XP-SWMM: a case study in Western Sydney, Australia.

    Science.gov (United States)

    van der Sterren, Marlène; Rahman, Ataur; Ryan, Garry

    2014-08-01

    Lot scale rainwater tank system modeling is often used in sustainable urban storm water management, particularly to estimate the reduction in storm water run-off and pollutant wash-off at the lot scale. These rainwater tank models often cannot be adequately calibrated and validated due to the limited availability of observed rainwater tank quantity and quality data. This paper presents calibration and validation of a lot scale rainwater tank system model using XP-SWMM, utilizing data collected from two rainwater tank systems located in Western Sydney, Australia. The modeling considers run-off peak and volume in and out of the rainwater tank system and also a number of water quality parameters (Total Phosphorus (TP), Total Nitrogen (TN) and Total Solids (TS)). It has been found that XP-SWMM can be used successfully to develop a lot scale rainwater tank system model within an acceptable error margin. It has been shown that TP and TS can be predicted more accurately than TN using the developed model. In addition, it was found that a significant reduction in storm water run-off discharge can be achieved as a result of the rainwater tank for rainfall events up to about the one-year average recurrence interval. The model parameter set assembled in this study can be used for developing lot scale rainwater tank system models at other locations in the Western Sydney region and in other parts of Australia with necessary adjustments for the local site characteristics. Copyright © 2014 Elsevier Ltd. All rights reserved.
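The run-off reduction mechanism the abstract reports can be illustrated with a conceptual mass balance for one time step of a lot-scale tank: roof run-off enters, the tank spills once full, and household demand draws the level down. This is a sketch of the general idea only, not the calibrated XP-SWMM model; all parameter values are hypothetical.

```python
def tank_step(storage, rain_mm, roof_area_m2, runoff_coeff, demand_L, capacity_L):
    """One time step of a lot-scale rainwater tank balance (litres).
    1 mm of rain on 1 m2 of roof yields 1 L of run-off before losses.
    Returns (new storage, overflow to stormwater, unmet demand)."""
    inflow = rain_mm * roof_area_m2 * runoff_coeff   # roof run-off captured
    storage += inflow
    overflow = max(storage - capacity_L, 0.0)        # spill once the tank is full
    storage -= overflow
    supplied = min(storage, demand_L)                # household use draws down
    storage -= supplied
    return storage, overflow, demand_L - supplied
```

Only the overflow term reaches the storm water system, which is why the tank attenuates discharge for small, frequent events but not for storms that arrive with the tank already full.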

  5. Model for the centralized acquisition of collections in times of crisis

    OpenAIRE

    Lloret Romero, María Nuria

    2012-01-01

    The paper discusses how the inclusion of digital collections has led to a new licensing model that has changed the financial management and administration of budgets for the purchase of collections, starting with the acquisition of scientific journals for specialized centers and extending to all types of resource materials. Lloret Romero, MN. (2012). Model for the centralized acquisition of collections in times of crisis. Bottom Line. 25(4):59-63. doi:10.1108/08880451211292603.

  6. Air scaling and modeling studies for the 1/5-scale mark I boiling water reactor pressure suppression experiment

    Energy Technology Data Exchange (ETDEWEB)

    Lai, W.; McCauley, E.W.

    1978-01-04

    Results of table-top model experiments performed to investigate pool dynamics effects due to a postulated loss-of-coolant accident (LOCA) for the Peach Bottom Mark I boiling water reactor containment system guided subsequent conduct of the 1/5-scale torus experiment and provided new insight into the vertical load function (VLF). Pool dynamics results were qualitatively correct. Experiments with a 1/64-scale fully modeled drywell and torus showed that a 90° torus sector was adequate to reveal three-dimensional effects; the 1/5-scale torus experiment confirmed this.

  7. Modeling Subgrid Scale Droplet Deposition in Multiphase-CFD

    Science.gov (United States)

    Agostinelli, Giulia; Baglietto, Emilio

    2017-11-01

    The development of first-principle-based constitutive equations for the Eulerian-Eulerian CFD modeling of annular flow is a major priority to extend the applicability of multiphase CFD (M-CFD) across all two-phase flow regimes. Two key mechanisms need to be incorporated in the M-CFD framework: the entrainment of droplets from the liquid film, and their deposition. Here we focus first on the aspect of deposition, leveraging a separate-effects approach. Current two-field methods in M-CFD do not include appropriate local closures to describe the deposition of droplets in annular flow conditions. While many integral correlations for deposition have been proposed for lumped-parameter applications, few attempts exist in the literature to extend their applicability to CFD simulations. The integral nature of the approach limits its applicability to fully developed flow conditions, without geometrical or flow variations, therefore negating the scope of CFD application. A new approach is proposed here that leverages local quantities to predict the subgrid-scale deposition rate. The methodology is first tested in a three-field CFD model.

  8. Upscaling of U(VI) Desorption and Transport from Decimeter-Scale Heterogeneity to Plume-Scale Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Curtis, Gary P. [U.S. Geological Survey, Menlo Park, CA (United States); Kohler, Matthias [U.S. Geological Survey, Menlo Park, CA (United States); Kannappan, Ramakrishnan [U.S. Geological Survey, Menlo Park, CA (United States); Briggs, Martin [U.S. Geological Survey, Menlo Park, CA (United States); Day-Lewis, Fred [U.S. Geological Survey, Menlo Park, CA (United States)

    2015-02-24

    Scientifically defensible predictions of field-scale U(VI) transport in groundwater require an understanding of key processes at multiple scales. These scales range from smaller than the sediment grain scale (less than 10 μm) to as large as the field scale, which can extend over several kilometers. The key processes that need to be considered include both geochemical reactions in solution and at sediment surfaces as well as physical transport processes including advection, dispersion, and pore-scale diffusion. The research summarized in this report includes both experimental and modeling results in batch, column and tracer tests. The objectives of this research were to: (1) quantify the rates of U(VI) desorption from sediments acquired from a uranium-contaminated aquifer in batch experiments; (2) quantify rates of U(VI) desorption in column experiments with variable chemical conditions; and (3) quantify nonreactive tracer and U(VI) transport in field tests.

  9. A New Model for the Collective Behavior of Animals

    CERN Document Server

    Nguyen, P The; Diep, H T

    2015-01-01

    We propose a new model in order to study the behaviors of self-organized systems such as a group of animals. We assume that the individuals have two degrees of freedom, one corresponding to their internal state and the other to their external state. The external state is characterized by its moving orientation. The rule of the interaction between the individuals is determined by the internal state, which can be either the non-excited state or the excited state. The system is put under a source of external perturbation called "noise". To study the behavior of the model with varying noise, we use the Monte Carlo simulation technique. The result clearly shows two first-order transitions separating the system into three phases: with increasing noise, the system undergoes a phase transition from a frozen dilute phase to an ordered compact phase and then to the disordered dispersed phase. These phases correspond to behaviors of animals: an uncollected state at low noise, flocking at medium noise and runaway at high noise.

  10. On a class of scaling FRW cosmological models

    Energy Technology Data Exchange (ETDEWEB)

    Cataldo, Mauricio [Departamento de Física, Universidad del Bío-Bío, Avenida Collao 1202, Casilla 5-C, Concepción (Chile); Arevalo, Fabiola; Minning, Paul, E-mail: mcataldo@ubiobio.cl, E-mail: pminning@udec.cl, E-mail: farevalo@udec.cl [Departamento de Física, Universidad de Concepción, Casilla 160-C, Concepción (Chile)

    2010-02-01

    We study Friedmann-Robertson-Walker cosmological models with matter content composed of two perfect fluids ρ₁ and ρ₂, with barotropic pressure densities p₁/ρ₁ = ω₁ = const and p₂/ρ₂ = ω₂ = const, where one of the energy densities is given by ρ₁ = C₁a^α + C₂a^β, with C₁, C₂, α and β taking constant values. We solve the field equations by using the conservation equation without breaking it into two interacting parts with the help of a coupling interacting term Q. Nevertheless, an interacting term Q may be associated with the found solution, and then a number of cosmological interacting models studied in the literature correspond to particular cases of our cosmological model. Specifically, those models having constant coupling parameters α̃, β̃ and interacting terms given by Q = α̃Hρ_DM, Q = α̃Hρ_DE, Q = α̃H(ρ_DM + ρ_DE) and Q = α̃Hρ_DM + β̃Hρ_DE, where ρ_DM and ρ_DE are the energy densities of dark matter and dark energy, respectively. The studied set of solutions contains a class of cosmological models presenting a scaling behavior at early and at late times. On the other hand, the two-fluid cosmological models considered in this paper also permit a three-fluid interpretation, which is also discussed. In this reinterpretation, for flat Friedmann-Robertson-Walker cosmologies, the requirement of positivity of the energy densities of the dark matter and dark energy components restricts the dark energy state parameter to the range −1.37 ≲ ω_DE < −1/3.
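In the standard interacting two-fluid notation, the association between the found solution and a coupling term Q can be written out explicitly. This is a sketch of the usual formalism under the abstract's ansatz, not the paper's full derivation:

```latex
% Total conservation, kept unbroken as in the paper:
\dot{\rho} + 3H(\rho + p) = 0, \qquad \rho = \rho_1 + \rho_2 .
% An interacting reinterpretation splits it with a coupling term Q:
\dot{\rho}_1 + 3H(1+\omega_1)\rho_1 = -Q, \qquad
\dot{\rho}_2 + 3H(1+\omega_2)\rho_2 = +Q .
% For the ansatz \rho_1 = C_1 a^{\alpha} + C_2 a^{\beta}, using \dot{a} = aH,
% the associated coupling term follows directly:
Q = -H\left[ C_1\,\bigl(\alpha + 3(1+\omega_1)\bigr)\,a^{\alpha}
           + C_2\,\bigl(\beta + 3(1+\omega_1)\bigr)\,a^{\beta} \right].
```

Choosing α = −3(1+ω₁) makes the first term's coefficient vanish, which is how particular literature cases with single-term couplings such as Q = α̃Hρ_DM emerge as special cases.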

  11. SMR Re-Scaling and Modeling for Load Following Studies

    Energy Technology Data Exchange (ETDEWEB)

    Hoover, K.; Wu, Q.; Bragg-Sitton, S.

    2016-11-01

    This study investigates the creation of a new set of scaling parameters for the Oregon State University Multi-Application Small Light Water Reactor (MASLWR) scaled thermal hydraulic test facility. As part of a study being undertaken by Idaho National Laboratory involving nuclear reactor load following characteristics, full power operations need to be simulated, and therefore properly scaled. Presented here are the scaling analysis and plans for RELAP5-3D simulation.

  12. Measuring and Modeling Behavioral Decision Dynamics in Collective Evacuation

    CERN Document Server

    Carlson, Jean M; Stromberg, Sean P; Bassett, Danielle S; Craparo, Emily M; Gutierrez-Villarreal, Francisco; Otani, Thomas

    2013-01-01

    Identifying and quantifying factors influencing human decision making remains an outstanding challenge, impacting the performance and predictability of social and technological systems. In many cases, system failures are traced to human factors including congestion, overload, miscommunication, and delays. Here we report results of a behavioral network science experiment, targeting decision making in a natural disaster. In each scenario, individuals are faced with a forced "go" versus "no go" evacuation decision, based on information available on competing broadcast and peer-to-peer sources. In this controlled setting, all actions and observations are recorded prior to the decision, enabling development of a quantitative decision making model that accounts for the disaster likelihood, severity, and temporal urgency, as well as competition between networked individuals for limited emergency resources. Individual differences in behavior within this social setting are correlated with individual differences in inh...
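
    The abstract does not give the decision model in closed form; a minimal logistic sketch can still illustrate how disaster likelihood, severity, temporal urgency and competition for limited shelter might combine into a "go" probability. All function names, weights and the bias below are illustrative assumptions, not fitted values from the experiment:

```python
import math

def p_evacuate(likelihood, severity, urgency, shelter_ratio,
               w=(2.0, 1.5, 1.0, -1.2), bias=-2.0):
    """Hypothetical logistic decision model: probability of a 'go' decision.

    likelihood, severity, urgency are scaled to [0, 1]; shelter_ratio is the
    fraction of limited shelter capacity already claimed by other evacuees
    (competition for resources lowers the incentive to go late).
    """
    z = bias + w[0]*likelihood + w[1]*severity + w[2]*urgency + w[3]*shelter_ratio
    return 1.0 / (1.0 + math.exp(-z))

# A strong, urgent threat with shelter still available should dominate
# a weak signal arriving when shelters are nearly full.
print(p_evacuate(0.9, 0.8, 0.7, 0.1) > p_evacuate(0.2, 0.2, 0.1, 0.9))
```

    In a fitted version, the weights would be estimated from the recorded actions and observations of the networked participants.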

  13. Modelling energy production by small hydro power plants in collective irrigation networks of Calabria (Southern Italy)

    Science.gov (United States)

    Zema, Demetrio Antonio; Nicotra, Angelo; Tamburino, Vincenzo; Marcello Zimbone, Santo

    2017-04-01

    The availability of geodetic heads and considerable water flows in collective irrigation networks suggests the possibility of recovering potential energy using small hydro power plants (SHPP) at sustainable costs. This is the case for many Water Users Associations (WUA) in Calabria (Southern Italy), where it could theoretically be possible to recover electrical energy outside the irrigation season. However, very few Calabrian WUAs have so far built SHPP in their irrigation networks, and thus in this region the potential energy is practically all lost. A previous study (Zema et al., 2016) proposed an original and simple model to site turbines, size their power output and evaluate the profits of SHPP in collective irrigation networks. Applying this model at the regional scale, this paper estimates the theoretical energy production and the economic performance of SHPP installed in the collective irrigation networks of Calabrian WUAs. In more detail, based on digital terrain models processed by GIS and a few parameters of the water networks, for each SHPP the model provides: (i) the electrical power output; (ii) the optimal water discharge; (iii) costs, revenues and profits. Moreover, a map of the theoretical energy production by SHPP in the collective irrigation networks of Calabria was drawn. The total length of the 103 water networks surveyed is 414 km and the total geodetic head is 3157 m, of which 63% is lost to hydraulic losses. Thus, a total power output of 19.4 MW could theoretically be installed. This would provide an annual energy production of 103 GWh, considering SHPPs in operation only outside the irrigation season. The individual irrigation networks have power outputs in the range 0.7 kW - 6.4 MW. However, the smallest SHPPs (that is, turbines with power output under 5 kW) were neglected, because their annual profit is very low (on average less than 6%; Zema et al., 2016). 
On average each irrigation network provides an annual revenue from
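
    The turbine sizing described above ultimately rests on the hydropower equation P = ηρgQH. A minimal sketch follows; the efficiency, the half-year duty cycle and the example head and discharge are illustrative assumptions, not values from the paper:

```python
RHO_W = 1000.0   # water density, kg/m^3
G = 9.81         # gravitational acceleration, m/s^2

def shpp_power_kw(discharge_m3s, net_head_m, efficiency=0.85):
    """Electrical power (kW) of a small hydro plant: P = eta * rho * g * Q * H."""
    return efficiency * RHO_W * G * discharge_m3s * net_head_m / 1000.0

def annual_energy_mwh(power_kw, hours=4380):
    """Energy (MWh) if the turbine runs only outside the irrigation season
    (assumed here to be about half the year, 4380 h)."""
    return power_kw * hours / 1000.0

p = shpp_power_kw(discharge_m3s=0.5, net_head_m=60.0)
print(round(p, 1), "kW;", round(annual_energy_mwh(p), 1), "MWh/yr")
```

    With site-specific heads, discharges and duty cycles per network, summing such estimates over all viable sites yields regional totals of the kind reported above.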

  14. Meso-scale modeling of irradiated concrete in test reactor

    Energy Technology Data Exchange (ETDEWEB)

    Giorla, A. [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Vaitová, M. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic); Le Pape, Y., E-mail: lepapeym@ornl.gov [Oak Ridge National Laboratory, One Bethel Valley Road, Oak Ridge, TN 37831 (United States); Štemberk, P. [Czech Technical University, Thakurova 7, 166 29 Praha 6 (Czech Republic)

    2015-12-15

    Highlights: • A meso-scale finite element model for irradiated concrete is developed. • Neutron radiation-induced volumetric expansion is a predominant degradation mode. • Confrontation with expansion and damage obtained from experiments is successful. • Effects of paste shrinkage, creep and ductility are discussed. - Abstract: A numerical model accounting for the effects of neutron irradiation on concrete at the mesoscale is detailed in this paper. Irradiation experiments in a test reactor (Elleuch et al., 1972), i.e., in accelerated conditions, are simulated. Concrete is considered as a two-phase material made of elastic inclusions (aggregate) subjected to thermal and irradiation-induced swelling and embedded in a cementitious matrix subjected to shrinkage and thermal expansion. The role of the hardened cement paste in the post-peak regime (brittle-ductile transition with decreasing loading rate) and creep effects are investigated. Radiation-induced volumetric expansion (RIVE) of the aggregate causes the development and propagation of damage around the aggregate, which further develops into bridging cracks across the hardened cement paste between the individual aggregate particles. The development of damage is aggravated when shrinkage occurs simultaneously with RIVE during the irradiation experiment. The post-irradiation expansion derived from the simulation is well correlated with the experimental data, and the obtained damage levels are fully consistent with previous estimates based on a micromechanical interpretation of the experimental post-irradiation elastic properties (Le Pape et al., 2015). The proposed modeling opens new perspectives for the interpretation of test reactor experiments with regard to the actual operation of light water reactors.

  15. Multi-Resolution Modeling of Large Scale Scientific Simulation Data

    Energy Technology Data Exchange (ETDEWEB)

    Baldwin, C; Abdulla, G; Critchlow, T

    2002-02-25

    Data produced by large scale scientific simulations, experiments, and observations can easily reach terabytes in size. The ability to examine data sets of this magnitude, even in moderate detail, is problematic at best. Generally this scientific data consists of multivariate field quantities with complex inter-variable correlations and spatial-temporal structure. To provide scientists and engineers with the ability to explore and analyze such data sets, we are using a twofold approach. First, we model the data with the objective of creating a compressed yet manageable representation. Second, with that compressed representation, we provide the user with the ability to query the resulting approximation to obtain approximate yet sufficient answers, a process called ad hoc querying. This paper is concerned with a wavelet modeling technique that seeks to capture the important physical characteristics of the target scientific data. Our approach is driven by the compression, which is necessary for viable throughput, along with the end-user requirements of the discovery process. Our work contrasts with existing research that applies wavelets to range querying, change detection, and clustering problems by working directly with a decomposition of the data. This difference is due primarily to the nature of the data and the requirements of the scientists and engineers. Our approach directly uses the wavelet coefficients of the data to compress as well as query. We will provide some background on the problem, describe how the wavelet decomposition is used to facilitate data compression and how queries are posed on the resulting compressed model. Results of this process will be shown for several problems of interest, and we will end with some observations and conclusions about this research.
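
    The compress-then-query idea can be illustrated with a plain orthonormal Haar decomposition, keeping only the largest-magnitude coefficients and answering an approximate range query from the truncated model. This is a simplified stand-in for the paper's wavelet technique; the signal, the top-k thresholding rule and the query are illustrative:

```python
import numpy as np

def haar_decompose(x):
    """Full orthonormal Haar decomposition of a length-2^k signal."""
    coeffs, s = [], x.astype(float)
    while len(s) > 1:
        avg = (s[0::2] + s[1::2]) / np.sqrt(2)   # scaling coefficients
        det = (s[0::2] - s[1::2]) / np.sqrt(2)   # detail coefficients
        coeffs.append(det)
        s = avg
    coeffs.append(s)                              # final approximation
    return coeffs

def haar_reconstruct(coeffs):
    s = coeffs[-1]
    for det in reversed(coeffs[:-1]):
        out = np.empty(2*len(s))
        out[0::2] = (s + det) / np.sqrt(2)
        out[1::2] = (s - det) / np.sqrt(2)
        s = out
    return s

def compress(x, keep):
    """Keep only the `keep` largest-magnitude coefficients (lossy compression)."""
    coeffs = haar_decompose(x)
    flat = np.concatenate(coeffs)
    thresh = np.sort(np.abs(flat))[-keep]
    flat[np.abs(flat) < thresh] = 0.0
    out, i = [], 0
    for c in coeffs:                              # restore per-level arrays
        out.append(flat[i:i+len(c)]); i += len(c)
    return out

x = np.sin(np.linspace(0, 4*np.pi, 256)) + 0.01*np.random.default_rng(0).standard_normal(256)
approx = haar_reconstruct(compress(x, keep=32))
# An approximate range query (mean over a slice) answered from the compressed model:
print(abs(approx[64:128].mean() - x[64:128].mean()) < 0.05)
```

    With 32 of 256 coefficients retained, the smooth signal's coarse structure survives, so aggregate queries remain accurate at an 8:1 compression ratio.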

  16. The space-scale cube : An integrated model for 2D polygonal areas and scale

    NARCIS (Netherlands)

    Meijers, B.M.; Van Oosterom, P.J.M.

    2011-01-01

    This paper introduces the concept of a space-scale partition, which we term the space-scale cube – analogous with the space-time cube (first introduced by Hägerstrand, 1970). We take the view of ‘map generalization is extrusion of 2D data into the third dimension’ (as introduced by Vermeij et al.,

  17. A Sediment Budget Case Study: Comparing Watershed Scale Erosion Estimates to Modeled and Empirical Sediment Loads

    Science.gov (United States)

    McDavitt, B.; O'Connor, M.

    2003-12-01

    The Pacific Lumber Company Habitat Conservation Plan requires watershed analyses to be conducted on their property. This paper summarizes a portion of that analysis focusing on erosion and sedimentation processes and rates coupled with downstream sediment routing in the Freshwater Creek watershed in northwest California. Watershed scale erosion sources from hillslopes, roads, and channel banks were quantified using field surveys, aerial photo interpretation, and empirical modeling approaches for different elements of the study. Sediment transport rates for bedload were modeled, and sediment transport rates for suspended sediment were estimated based on size distribution of sediment inputs in relation to sizes transported in suspension. Recent short-term, high-quality estimates of suspended sediment yield that a community watershed group collected with technical assistance from the US Forest Service were used to validate the resulting sediment budget. Bedload yield data from an adjacent watershed, Jacoby Creek, provided another check on the sediment budget. The sediment budget techniques and bedload routing models used for this study generated sediment yield estimates that are in good agreement with available data. These results suggest that sediment budget techniques that require moderate levels of fieldwork can be used to provide relatively accurate technical assessments. Ongoing monitoring of sediment sources coupled with sediment routing models and reach scale field data allows for predictions to be made regarding in-channel sediment storage.
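
    The budget-closure check described above is, at its core, simple bookkeeping: summed source inputs minus storage change, compared against an independently measured yield. A sketch with purely illustrative numbers (not the Freshwater Creek figures):

```python
# Hypothetical sediment budget bookkeeping; all values are illustrative.
sources_t_per_yr = {          # tonnes/year by erosion source
    "hillslope_landslides": 1200.0,
    "road_surface_erosion": 800.0,
    "bank_erosion": 400.0,
}
storage_change = 300.0        # net deposition in channel reaches, t/yr

predicted_yield = sum(sources_t_per_yr.values()) - storage_change
measured_yield = 2000.0       # e.g. from suspended-sediment monitoring

rel_error = abs(predicted_yield - measured_yield) / measured_yield
print(predicted_yield, f"{rel_error:.1%}")   # 2100.0 t/yr, 5.0% discrepancy
```

    A small relative discrepancy between the budgeted and monitored yields is the kind of agreement the study reports between its field-based budget and the community-collected data.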

  18. Pretest Round Robin Analysis of 1:4-Scale Prestressed Concrete Containment Vessel Model

    Energy Technology Data Exchange (ETDEWEB)

    HESSHEIMER,MICHAEL F.; LUK,VINCENT K.; KLAMERUS,ERIC W.; SHIBATA,S.; MITSUGI,S.; COSTELLO,J.F.

    2000-12-18

    The purpose of the program is to investigate the response of representative scale models of nuclear containment to pressure loading beyond the design basis accident and to compare analytical predictions to measured behavior. This objective is accomplished by conducting static, pneumatic overpressurization tests of scale models at ambient temperature. This research program consists of testing two scale models: a steel containment vessel (SCV) model (tested in 1996) and a prestressed concrete containment vessel (PCCV) model, which is the subject of this paper.

  19. A Unified Multi-scale Model for Cross-Scale Evaluation and Integration of Hydrological and Biogeochemical Processes

    Science.gov (United States)

    Liu, C.; Yang, X.; Bailey, V. L.; Bond-Lamberty, B. P.; Hinkle, C.

    2013-12-01

    Mathematical representations of hydrological and biogeochemical processes in soil, plant, aquatic, and atmospheric systems vary with scale. Process-rich models are typically used to describe hydrological and biogeochemical processes at the pore and small scales, while empirical, correlation approaches are often used at the watershed and regional scales. A major challenge for multi-scale modeling is that water flow, biogeochemical processes, and reactive transport are described using different physical laws and/or expressions at the different scales. For example, the flow is governed by the Navier-Stokes equations at the pore scale in soils, by the Darcy law in soil columns and aquifers, and by the Navier-Stokes equations again in open water bodies (ponds, lakes, rivers) and the atmospheric surface layer. This research explores whether the physical laws at the different scales and in different physical domains can be unified to form a unified multi-scale model (UMSM) to systematically investigate the cross-scale, cross-domain behavior of fundamental processes at different scales. This presentation will discuss our research on the concept, mathematical equations, and numerical execution of the UMSM. Three-dimensional, multi-scale hydrological processes at the Disney Wilderness Preservation (DWP) site, Florida, will be used as an example demonstrating the application of the UMSM. In this research, the UMSM was used to simulate hydrological processes in rooting zones at the pore and small scales, including water migration in soils under saturated and unsaturated conditions, root-induced hydrological redistribution, and the role of rooting zone biogeochemical properties (e.g., root exudates and microbial mucilage) in water storage and wetting/draining. The small scale simulation results were used to estimate effective water retention properties in soil columns that were superimposed on the bulk soil water retention properties at the DWP site. 
The UMSM parameterized from smaller
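
    At the soil-column scale mentioned above, the governing law is Darcy's, q = -K dh/dz. A minimal 1-D finite-difference sketch (conductivity, heads and grid spacing are illustrative values, not site data):

```python
import numpy as np

def darcy_flux(K, h, dz):
    """Darcy flux q = -K * dh/dz on a 1-D column (central differencing between
    nodes). K: hydraulic conductivity (m/s), h: heads (m) at nodes, dz: spacing (m)."""
    return -K * np.diff(h) / dz

# Toy column: head falling linearly with depth -> uniform flux at every interface.
h = np.linspace(1.0, 0.0, 11)          # heads at 11 nodes
q = darcy_flux(K=1e-5, h=h, dz=0.1)    # 10 interface fluxes
print(np.allclose(q, 1e-5))            # dh/dz = -1, so q = +1e-5 m/s throughout
```

    A unified multi-scale model must reconcile fluxes of this Darcy form with Navier-Stokes velocities at the pore scale and in open water, which is precisely the cross-scale consistency problem the UMSM targets.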

  20. Model-Scale Experiment of the Seakeeping Performance for R/V Melville, Model 5720

    Science.gov (United States)

    2012-07-01

    fiberglass with stainless steel bilge keels. A summary of model particulars, in full and model scale, is provided in Table 1. The hull geometry was...foam. The bilge keels were constructed of stainless steel and fit to match the bilge keel trace from the ship drawings (Figure 6). A weight post...Measuring Devices," NIST Handbook 44, Tina Butcher, Steve Cook, Linda Crown, and Rick Harshman (Editors), National Institute of Standards and

  1. Forest processes from stands to landscapes: exploring model forecast uncertainties using cross-scale model comparison

    Science.gov (United States)

    Michael J. Papaik; Andrew Fall; Brian Sturtevant; Daniel Kneeshaw; Christian Messier; Marie-Josee Fortin; Neal. Simon

    2010-01-01

    Forest management practices conducted primarily at the stand scale result in simplified forests with regeneration problems and low structural and biological diversity. Landscape models have been used to help design management strategies to address these problems. However, there remains a great deal of uncertainty that the actual management practices result in the...

  2. Fine-scale WRF-CMAQ Modeling for the 2013 DISCOVER-AQ Campaign in California

    Science.gov (United States)

    Gilliam, R. C.; Pleim, J. E.; Appel, W.

    2014-12-01

    Deriving Information on Surface Conditions from Column and Vertically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) is an ongoing four-year NASA campaign to improve remote sensing in order to better resolve the distribution of pollutants in the lower atmosphere for public health reasons. These observational campaigns are a prime opportunity to evaluate and improve weather and air quality models, particularly at the finer scales, since the collected observations are not only unique (boundary layer profiles, planetary boundary layer height and LIDAR) but also of high spatial density. For the first campaign, in the Washington DC-Baltimore region, a number of meteorological model improvements were crucial for quality results at the finer grid scales. The main techniques tested in the DISCOVER-AQ Washington DC-Baltimore experiment were iterative indirect soil nudging, a simple urban parameterization based on highly resolved impervious surface data, and the use of a high-resolution 1 km sea surface temperature dataset. A fourth technique, first tested in a separate cold-season application in the US Rocky Mountains, was the assimilation of high-resolution 1 km SNOw Data Assimilation System (SNODAS) data for better snow cover representation in retrospective modeling. These methods will be leveraged using a nested 12-4-2 km WRF-CMAQ modeling platform for the 2013 DISCOVER-AQ California campaign, where the 2 km domain covers the entire San Joaquin Valley (SJV), coastal areas and all of Los Angeles. The purpose is to demonstrate methods to derive high-quality meteorology for retrospective air quality modeling over geographically complex areas of the Western US, where current coarser-resolution modeling may not be sufficient. Accurate air quality modeling is particularly important for California, which has some of the most polluted areas in the US, within the SJV. 
Furthermore, this work may inform modeling in other areas of the Intermountain West that are experiencing air

  3. Interagency Collaborative Team Model for Capacity Building to Scale-Up Evidence-Based Practice.

    Science.gov (United States)

    Hurlburt, Michael; Aarons, Gregory A; Fettes, Danielle; Willging, Cathleen; Gunderson, Lara; Chaffin, Mark J

    2014-04-01

    System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children's service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. We describe the ICT model and present preliminary qualitative results from use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit.

  4. Interagency Collaborative Team Model for Capacity Building to Scale-Up Evidence-Based Practice

    Science.gov (United States)

    Hurlburt, Michael; Aarons, Gregory A; Fettes, Danielle; Willging, Cathleen; Gunderson, Lara; Chaffin, Mark J

    2015-01-01

    Background System-wide scale up of evidence-based practice (EBP) is a complex process. Yet, few strategic approaches exist to support EBP implementation and sustainment across a service system. Building on the Exploration, Preparation, Implementation, and Sustainment (EPIS) implementation framework, we developed and are testing the Interagency Collaborative Team (ICT) process model to implement an evidence-based child neglect intervention (i.e., SafeCare®) within a large children’s service system. The ICT model emphasizes the role of local agency collaborations in creating structural supports for successful implementation. Methods We describe the ICT model and present preliminary qualitative results from use of the implementation model in one large scale EBP implementation. Qualitative interviews were conducted to assess challenges in building system, organization, and home visitor collaboration and capacity to implement the EBP. Data collection and analysis centered on EBP implementation issues, as well as the experiences of home visitors under the ICT model. Results Six notable issues relating to implementation process emerged from participant interviews, including: (a) initial commitment and collaboration among stakeholders, (b) leadership, (c) communication, (d) practice fit with local context, (e) ongoing negotiation and problem solving, and (f) early successes. These issues highlight strengths and areas for development in the ICT model. Conclusions Use of the ICT model led to sustained and widespread use of SafeCare in one large county. Although some aspects of the implementation model may benefit from enhancement, qualitative findings suggest that the ICT process generates strong structural supports for implementation and creates conditions in which tensions between EBP structure and local contextual variations can be resolved in ways that support the expansion and maintenance of an EBP while preserving potential for public health benefit. PMID:27512239

  5. Open source large-scale high-resolution environmental modelling with GEMS

    Science.gov (United States)

    Baarsma, Rein; Alberti, Koko; Marra, Wouter; Karssenberg, Derek

    2016-04-01

    Many environmental, topographic and climate data sets are freely available at a global scale, creating the opportunity to run environmental models for every location on Earth. Collecting the data necessary to do this and converting it into a useful format is very demanding, however, not to mention the computational demand of a model itself. We developed GEMS (Global Environmental Modelling System), an online application to run environmental models on various scales directly in your browser and share the results with other researchers. GEMS is open-source and uses open-source platforms including Flask, Leaflet, GDAL, MapServer and the PCRaster-Python modelling framework to process spatio-temporal models in real time. With GEMS, users can write, run, and visualize the results of dynamic PCRaster-Python models in a browser. GEMS uses freely available global data to feed the models, and automatically converts the data to the relevant model extent and data format. Currently available data include the SRTM elevation model, a selection of monthly vegetation data from MODIS, land use classifications from GlobCover, historical climate data from WorldClim, HWSD soil information from WorldGrids, population density from SEDAC and near real-time weather forecasts, most with a ±100 m resolution. Furthermore, users can add other or their own datasets using a web coverage service or a custom data provider script. With easy access to a wide range of base datasets and without the data preparation that is usually necessary to run environmental models, building and running a model becomes a matter of hours. Furthermore, it is easy to share the resulting maps, time-series data or model scenarios with other researchers through a web mapping service (WMS). GEMS can be used to provide open access to model results. Additionally, environmental models in GEMS can be employed by users without extensive experience writing code, which is valuable, for example, for using models

  6. Using a Core Scientific Metadata Model in Large-Scale Facilities

    Directory of Open Access Journals (Sweden)

    Brian Matthews

    2010-07-01

    In this paper, we present the Core Scientific Metadata Model (CSMD), a model for the representation of scientific study metadata developed within the Science & Technology Facilities Council (STFC) to represent the data generated from scientific facilities. The model has been developed to allow management of and access to the data resources of the facilities in a uniform way, although we believe that the model has wider application, especially in areas of “structural science” such as chemistry, materials science and earth sciences. We give some motivations behind the development of the model and an overview of its major structural elements, centred on the notion of a scientific study formed by a collection of specific investigations. We give some details of the model: each investigation is associated with a particular experiment on a sample generating data, and the associated data holdings are then mapped to the investigation with the appropriate parameters. We then go on to discuss the instantiation of the metadata model within a production-quality data management infrastructure, the Information CATalogue (ICAT), which has been developed within STFC for use in large-scale photon and neutron sources. Finally, we give an overview of the relationship between CSMD and other initiatives, and give some directions for future developments.
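
    The study → investigation → data-holding hierarchy described above can be sketched as plain data classes. The field names below are illustrative, not the actual CSMD or ICAT schema:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Dataset:
    """A data holding mapped to an investigation, with its parameters."""
    name: str
    location: str                      # e.g. an archive path or URI
    parameters: dict = field(default_factory=dict)

@dataclass
class Investigation:
    """One experiment on a sample that generates data."""
    title: str
    instrument: str
    sample: str
    datasets: List[Dataset] = field(default_factory=list)

@dataclass
class Study:
    """A scientific study formed by a collection of specific investigations."""
    title: str
    investigations: List[Investigation] = field(default_factory=list)

study = Study("Phase behaviour of material X")
inv = Investigation("Run 42", instrument="beamline-I11", sample="X powder")
inv.datasets.append(Dataset("run42_raw", "/archive/run42.nxs",
                            {"temperature_K": 300}))
study.investigations.append(inv)
print(len(study.investigations), study.investigations[0].datasets[0].parameters)
```

    A catalogue such as ICAT persists this kind of hierarchy so that facility data can be located uniformly via study- and investigation-level metadata.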

  7. Hydrogeologic Framework Model for the Saturated Zone Site Scale flow and Transport Model

    Energy Technology Data Exchange (ETDEWEB)

    T. Miller

    2004-11-15

    The purpose of this report is to document the 19-unit, hydrogeologic framework model (19-layer version, output of this report) (HFM-19) with regard to input data, modeling methods, assumptions, uncertainties, limitations, and validation of the model results in accordance with AP-SIII.10Q, Models. The HFM-19 is developed as a conceptual model of the geometric extent of the hydrogeologic units at Yucca Mountain and is intended specifically for use in the development of the ''Saturated Zone Site-Scale Flow Model'' (BSC 2004 [DIRS 170037]). Primary inputs to this model report include the GFM 3.1 (DTN: MO9901MWDGFM31.000 [DIRS 103769]), borehole lithologic logs, geologic maps, geologic cross sections, water level data, topographic information, and geophysical data as discussed in Section 4.1. Figure 1-1 shows the information flow among all of the saturated zone (SZ) reports and the relationship of this conceptual model in that flow. The HFM-19 is a three-dimensional (3-D) representation of the hydrogeologic units surrounding the location of the Yucca Mountain geologic repository for spent nuclear fuel and high-level radioactive waste. The HFM-19 represents the hydrogeologic setting for the Yucca Mountain area that covers about 1,350 km2 and includes a saturated thickness of about 2.75 km. The boundaries of the conceptual model were primarily chosen to be coincident with grid cells in the Death Valley regional groundwater flow model (DTN: GS960808312144.003 [DIRS 105121]) such that the base of the site-scale SZ flow model is consistent with the base of the regional model (2,750 meters below a smoothed version of the potentiometric surface), encompasses the exploratory boreholes, and provides a framework over the area of interest for groundwater flow and radionuclide transport modeling. In depth, the model domain extends from land surface to the base of the regional groundwater flow model (D'Agnese et al. 1997 [DIRS 100131], p 2). For the site-scale

  8. Viscoelastic Model for Lung Parenchyma for Multi-Scale Modeling of Respiratory System, Phase II: Dodecahedral Micro-Model

    Energy Technology Data Exchange (ETDEWEB)

    Freed, Alan D.; Einstein, Daniel R.; Carson, James P.; Jacob, Rick E.

    2012-03-01

    In the first year of this contractual effort a hypo-elastic constitutive model was developed and shown to have great potential in modeling the elastic response of parenchyma. This model resides at the macroscopic level of the continuum. In this, the second year of our support, an isotropic dodecahedron is employed as an alveolar model. This is a microscopic model for parenchyma. A hopeful outcome is that the linkage between these two scales of modeling will be a source of insight and inspiration that will aid us in the final year's activity: creating a viscoelastic model for parenchyma.

  9. Common problematic aspects of coupling hydrological models with groundwater flow models on the river catchment scale

    Directory of Open Access Journals (Sweden)

    R. Barthel

    2006-01-01

    Model coupling requires a thorough conceptualisation of the coupling strategy, including an exact definition of the individual model domains, the "transboundary" processes and the exchange parameters. It is shown here that in the case of coupling groundwater flow and hydrological models – in particular on the regional scale – it is very important to find a common definition and scale-appropriate process description of groundwater recharge and baseflow (or "groundwater runoff/discharge") in order to achieve a meaningful representation of the processes that link the unsaturated and saturated zones and the river network. As such, integration by means of coupling established disciplinary models is problematic given that in such models, processes are defined from a purpose-oriented, disciplinary perspective and are therefore not necessarily consistent with definitions of the same process in the model concepts of other disciplines. This article contains a general introduction to the requirements and challenges of model coupling in Integrated Water Resources Management, including a definition of the most relevant technical terms, a short description of the commonly used approach to model coupling and, finally, a detailed consideration of the role of groundwater recharge and baseflow in coupling groundwater models with hydrological models. The conclusions summarize the most relevant problems rather than giving practical solutions. This paper aims to point out that working on a large scale in an integrated context requires rethinking traditional disciplinary workflows and encouraging communication between the different disciplines involved. It is worth noting that the aspects discussed here are mainly viewed from a groundwater perspective, which reflects the author's background.
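
    The recharge/baseflow exchange at the heart of the coupling problem can be caricatured with two linear reservoirs: a soil-zone (hydrological-model) store passing recharge to a groundwater store, which returns baseflow to the river. The linear-reservoir form, coefficients and forcing below are illustrative assumptions, not a statement about any particular coupled model:

```python
def step(soil, gw, rain, et, k_rech=0.1, k_base=0.05):
    """One daily step; stores in mm, fluxes in mm/day.

    recharge is the exchange flux from the unsaturated to the saturated zone;
    baseflow is the groundwater discharge returned to the river network.
    """
    soil = max(soil + rain - et, 0.0)
    recharge = k_rech * soil          # unsaturated -> saturated zone
    soil -= recharge
    baseflow = k_base * gw            # saturated zone -> river network
    gw += recharge - baseflow
    return soil, gw, recharge, baseflow

soil, gw = 100.0, 500.0
for day in range(365):                # one year of steady forcing
    soil, gw, recharge, baseflow = step(soil, gw, rain=2.0, et=1.0)
print(round(recharge, 3), round(baseflow, 3))
```

    Under steady forcing the two exchange fluxes equilibrate at the net input (1 mm/day here); in a real coupling, inconsistent definitions of these two fluxes between the disciplinary models are exactly what breaks mass balance at the interface.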

  10. Relevance of multiple spatial scales in habitat models: A case study with amphibians and grasshoppers

    Science.gov (United States)

    Altmoos, Michael; Henle, Klaus

    2010-11-01

    Habitat models for animal species are important tools in conservation planning. We assessed the need to consider several scales in a case study of three amphibian and two grasshopper species in the post-mining landscapes near Leipzig (Germany). The two species groups were selected because habitat analyses for grasshoppers are usually conducted on one scale only, whereas amphibians are thought to depend on more than one spatial scale. First, we analysed how the preference for single habitat variables changed across nested scales. Most environmental variables were significant for a habitat model on only one or two scales, with the smallest scale being particularly important. On larger scales, other variables became significant that could not be recognized on smaller scales. Similar preferences across scales occurred in only 13 out of 79 cases, and in 3 out of 79 cases the preference for and avoidance of the same variable were even reversed among scales. Second, we developed habitat models by using a logistic regression on every scale and for all combinations of scales and analysed how the quality of the habitat models changed with the scales considered. To achieve a sufficient accuracy of the habitat models with a minimum number of variables, at least two scales were required for all species except Bufo viridis, for which a single scale, the microscale, was sufficient. Only for the European tree frog (Hyla arborea) were at least three scales required. The results indicate that the quality of habitat models increases with the number of surveyed variables and with the number of scales, but costs increase too. Searching for simplifications of multi-scale habitat models, we suggest that 2 or 3 scales, including a suitably defined microscale, should be an appropriate trade-off.
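
    The scale-combination comparison described above can be sketched with a logistic model on synthetic data: when occupancy truly depends on two scales, adding the second scale's variable improves the fitted likelihood. The data, variable names and plain gradient-ascent fitting routine are illustrative; the study used real survey data and standard logistic regression:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
micro = rng.standard_normal(n)       # e.g. vegetation cover at the plot (microscale)
landscape = rng.standard_normal(n)   # e.g. pond density within 1 km (larger scale)
# Synthetic occupancy depending on both scales (coefficients are made up):
p_true = 1 / (1 + np.exp(-(0.5 + 1.5*micro + 1.0*landscape)))
y = rng.random(n) < p_true

def fit_logistic(X, y, iters=3000, lr=0.1):
    """Plain gradient-ascent logistic regression; returns mean log-likelihood."""
    X = np.column_stack([np.ones(len(X)), X])
    w = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1 / (1 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)
    p = np.clip(1 / (1 + np.exp(-X @ w)), 1e-12, 1 - 1e-12)
    return np.mean(y*np.log(p) + (1 - y)*np.log(1 - p))

ll_micro = fit_logistic(micro[:, None], y)                       # one scale
ll_both = fit_logistic(np.column_stack([micro, landscape]), y)   # two scales
print(ll_both > ll_micro)   # the second scale improves the fit
```

    In practice the gain would be weighed against survey costs, which is the trade-off the abstract's recommendation of two or three scales reflects.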

  11. NASA Standard for Models and Simulations: Credibility Assessment Scale

    Science.gov (United States)

    Babula, Maria; Bertch, William J.; Green, Lawrence L.; Hale, Joseph P.; Mosier, Gary E.; Steele, Martin J.; Woods, Jody

    2009-01-01

    As one of its many responses to the 2003 Space Shuttle Columbia accident, NASA decided to develop a formal standard for models and simulations (M&S). Work commenced in May 2005. An interim version was issued in late 2006. This interim version underwent considerable revision following an extensive Agency-wide review in 2007, along with some additional revisions as a result of the review by the NASA Engineering Management Board (EMB) in the first half of 2008. Issuance of the revised, permanent version, hereafter referred to as the M&S Standard or just the Standard, occurred in July 2008. Bertch, Zang and Steele [iv] provided a summary review of the development process of this standard up through the start of the review by the EMB. A thorough recount of the entire development process, major issues, key decisions, and all review processes is available in Ref. [v]. This is the second of a pair of papers providing a summary of the final version of the Standard. Its focus is the Credibility Assessment Scale, a key feature of the Standard, including an example of its application to a real-world M&S problem for the James Webb Space Telescope. The companion paper summarizes the overall philosophy of the Standard and gives an overview of the requirements. Verbatim quotes from the Standard are integrated into the text of this paper and are indicated by quotation marks.

  12. Implementation of meso-scale radioactive dispersion model for GPU

    Energy Technology Data Exchange (ETDEWEB)

    Sunarko [National Nuclear Energy Agency of Indonesia (BATAN), Jakarta (Indonesia). Nuclear Energy Assessment Center; Suud, Zaki [Bandung Institute of Technology (ITB), Bandung (Indonesia). Physics Dept.

    2017-05-15

    The Lagrangian Particle Dispersion Method (LPDM) is applied to model atmospheric dispersion of radioactive material on a meso-scale of a few tens of kilometers for site-study purposes. Empirical relationships are used to determine the dispersion coefficient for various atmospheric stabilities. A diagnostic 3-D wind field is solved from data from one meteorological station using the mass-conservation principle. Particles representing the radioactive pollutant are released into the wind field from a point source. Time-integrated air concentration is calculated using a kernel density estimator (KDE) in the lowest layer of the atmosphere. Parallel code is developed for a GTX-660Ti GPU with a total of 1 344 scalar processors using CUDA. A test of a 1-hour release shows that linear speedup is achieved from 28 800 particles per hour (pph) up to a factor of about 20 at 144 000 pph. Another test simulating a 6-hour release with 36 000 pph resulted in a speedup of about 60. Statistical analysis reveals that the resulting grid doses are nearly identical in the CPU and GPU versions of the code.
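
    The kernel density estimator step can be sketched as follows: each Lagrangian particle contributes a Gaussian kernel to a near-surface concentration grid. This is an illustrative serial version, not the CUDA implementation described above; the grid, bandwidth and source parameters are invented.

```python
import numpy as np

def kde_concentration(positions, grid_x, grid_y, bandwidth, mass_per_particle=1.0):
    """Gaussian kernel density estimate of air concentration on a 2-D grid
    from Lagrangian particle positions. Illustrative serial sketch only:
    the paper's kernel, normalisation and 3-D layering may differ."""
    gx, gy = np.meshgrid(grid_x, grid_y, indexing="ij")
    conc = np.zeros_like(gx)
    norm = 1.0 / (2.0 * np.pi * bandwidth ** 2)
    for px, py in positions:
        conc += norm * np.exp(-((gx - px) ** 2 + (gy - py) ** 2)
                              / (2.0 * bandwidth ** 2))
    return mass_per_particle * conc

# Hypothetical release: 1000 particles scattered downwind of a source.
rng = np.random.default_rng(0)
particles = rng.normal(loc=[500.0, 0.0], scale=200.0, size=(1000, 2))
grid = np.linspace(-1000.0, 2000.0, 61)          # 50 m cell spacing
c = kde_concentration(particles, grid, grid, bandwidth=100.0)
```

    Integrating the resulting field over the grid recovers the released mass, which is the property that makes the KDE attractive compared with simple particle-in-cell counting.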

  13. Overview of the Ares I Scale Model Acoustic Test Program

    Science.gov (United States)

    Counter, Douglas D.; Houston, Janice D.

    2011-01-01

    Launch environments, such as lift-off acoustic (LOA) and ignition overpressure (IOP), are important design factors for any vehicle and are dependent upon the design of both the vehicle and the ground systems. LOA environments are used directly in the development of vehicle vibro-acoustic environments and IOP is used in the loads assessment. The NASA Constellation Program had several risks to the development of the Ares I vehicle linked to LOA. The risks included cost, schedule and technical impacts for component qualification due to high predicted vibro-acoustic environments. One solution is to mitigate the environment at the component level. However, where the environment is too severe for component survivability, reduction of the environment itself is required. The Ares I Scale Model Acoustic Test (ASMAT) program was implemented to verify the Ares I LOA and IOP environments for the vehicle and ground systems including the Mobile Launcher (ML) and tower. An additional objective was to determine the acoustic reduction for the LOA environment with an above deck water sound suppression system. ASMAT was a development test performed at the Marshall Space Flight Center (MSFC) East Test Area (ETA) Test Stand 116 (TS 116). The ASMAT program is described in this presentation.

  14. Small scale modelling of dynamic impact of debris flows

    Science.gov (United States)

    Sanvitale, Nicoletta; Bowman, Elisabeth

    2017-04-01

    Fast landslides, such as debris flows, involve high-speed downslope motion of rocks, soil and water. Engineering attempts to reduce the risk posed by these natural hazards often involve the placement of barriers or obstacles to inhibit movement. The impact pressures exerted by debris flows are difficult to estimate because they depend not only on the geometry and size of the flow and the obstacle but also on the characteristics of the flow mixture. The presence of a solid phase can increase the local impact pressure due to hard contacts, often caused by single boulders. This can lead to higher impact forces than the peak pressure estimates obtained from the hydraulics-based models commonly adopted in such analyses. The proposed study aims to bring new insight into the impact loading of structures generated by segregating granular debris flows. A small-scale flume, designed to enable planar laser-induced fluorescence (PLIF) and digital image correlation (DIC) to be applied internally, will be used for 2D analyses. The flow will incorporate glass particles suitable for refractive index matching (RIM) with a matched fluid to gain optical access to the internal behaviour of the flow, via a laser sheet applied away from sidewall boundaries. For these tests, the focus will be on assessing 2D particle interactions in unsteady flow. The paper will present in detail the methodology and the set-up of the experiments together with some preliminary results.

  15. Scale-adaptive surface modeling of vascular structures

    Directory of Open Access Journals (Sweden)

    Ma Xin

    2010-11-01

    Background: The effective geometric modeling of vascular structures is crucial for diagnosis, therapy planning and medical education. These applications require a good balance with respect to surface smoothness, surface accuracy, triangle quality and surface size. Methods: Our method first extracts the vascular boundary voxels from the segmentation result, and utilizes these voxels to build a three-dimensional (3D) point cloud whose normal vectors are estimated via covariance analysis. Then a 3D implicit indicator function is computed from the oriented 3D point cloud by solving a Poisson equation. Finally the vessel surface is generated by a proposed adaptive polygonization algorithm for explicit 3D visualization. Results: Experiments carried out on several typical vascular structures demonstrate that the presented method yields a smooth, morphologically correct and topologically preserved two-manifold surface, which is scale-adaptive to the local curvature of the surface. Furthermore, the presented method produces fewer and better-shaped triangles with satisfactory surface quality and accuracy. Conclusions: Compared to other state-of-the-art approaches, our method reaches a good balance in terms of smoothness, accuracy, triangle quality and surface size. The vessel surfaces produced by our method are suitable for applications such as computational fluid dynamics simulations and real-time virtual interventional surgery.
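
    The covariance-analysis step for normal estimation admits a compact sketch: the normal of a local point neighbourhood is the eigenvector of the neighbourhood covariance matrix with the smallest eigenvalue. The neighbourhood below is invented, and consistent orientation of the normals (needed before solving the Poisson equation) is omitted.

```python
import numpy as np

def estimate_normal(neighborhood):
    """Surface normal of a local point neighbourhood by covariance analysis:
    the eigenvector of the covariance matrix belonging to the smallest
    eigenvalue. Orienting normals consistently is a separate step."""
    pts = np.asarray(neighborhood, dtype=float)
    cov = np.cov((pts - pts.mean(axis=0)).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    return eigvecs[:, 0]

# Five boundary voxels lying on the plane z = 0: the normal is +/- z.
patch = [[0, 0, 0], [1, 0, 0], [0, 1, 0], [1, 1, 0], [0.3, 0.7, 0]]
n = estimate_normal(patch)
```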

  16. Energy and time modelling of kerbside waste collection: Changes incurred when adding source separated food waste.

    Science.gov (United States)

    Edwards, Joel; Othman, Maazuza; Burn, Stewart; Crossin, Enda

    2016-10-01

    The collection of source-separated kerbside municipal food waste (SSFW) is being incentivised in Australia; however, such a collection is likely to increase the fuel and time a collection truck fleet requires. Waste managers therefore need to determine whether the incentives outweigh the cost. With literature scarcely describing the magnitude of the increase, and with local parameters playing a crucial role in accurately modelling kerbside collection, this paper develops a new general mathematical model that predicts the energy and time requirements of a collection regime whilst incorporating the unique variables of different jurisdictions. The model, MSW-Collect (Municipal Solid Waste Collect), is validated and shown to be more accurate at predicting fuel consumption and trucks required than other common collection models. When predicting changes incurred for five different SSFW collection scenarios, results show that SSFW scenarios require an increase in fuel ranging from 1.38% to 57.59%. Additional trucks are also needed across most SSFW scenarios tested. All SSFW scenarios are ranked and analysed with regard to fuel consumption; sensitivity analysis is conducted to test key assumptions. Crown Copyright © 2016. Published by Elsevier Ltd. All rights reserved.
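
    The abstract does not reproduce the MSW-Collect equations, but the general structure of such an energy-and-time model can be sketched: driving and bin lifts each contribute to both time and fuel, and fleet size follows from the shift length. All parameter names and values below are illustrative, not taken from the paper.

```python
import math

def collection_fleet(n_stops, route_km, lift_s_per_stop, speed_kmh,
                     shift_h, fuel_l_per_km, fuel_l_per_lift):
    """Generic kerbside-collection estimate (not the MSW-Collect equations):
    total round time, total fuel, and trucks needed within one shift."""
    time_h = route_km / speed_kmh + n_stops * lift_s_per_stop / 3600.0
    fuel_l = route_km * fuel_l_per_km + n_stops * fuel_l_per_lift
    trucks = math.ceil(time_h / shift_h)
    return time_h, fuel_l, trucks

# 1200 bins on a 60 km round at 20 km/h, 10 s per lift, one 8 h shift.
t_h, fuel, trucks = collection_fleet(1200, 60.0, 10.0, 20.0, 8.0, 0.5, 0.02)
```

    A structure like this makes the abstract's finding plausible: adding an SSFW round multiplies the per-stop terms, so fuel and fleet size can grow sharply even when route length barely changes.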

  17. Predictive spatial modelling for mapping soil salinity at continental scale

    Science.gov (United States)

    Bui, Elisabeth; Wilford, John; de Caritat, Patrice

    2017-04-01

    Soil salinity is a serious limitation to agriculture and one of the main causes of land degradation. Soil is considered saline if its electrical conductivity (EC) is > 4 dS/m. Maps of saline soil distribution are essential for appropriate land development. Previous attempts to map soil salinity over extensive areas have relied on satellite imagery, aerial electromagnetic (EM) and/or proximally sensed EM data; other environmental (climate, topographic, geologic or soil) datasets are generally not used. Having successfully modelled and mapped calcium carbonate distribution over the 0-80 cm depth in Australian soils using machine learning with point samples from the National Geochemical Survey of Australia (NGSA), we took a similar approach to map soil salinity at 90-m resolution over the continent. The input data were the EC1:5 measurements on the NGSA samples; the machine-learning software 'Cubist' (www.rulequest.com) was used as the inference engine for the modelling, a 90:10 training:test set data split was used to validate results, and 100 randomly sampled trees were built using the training data. The results were good, with an average internal correlation (r) of 0.88 between predicted and measured logEC1:5 (training data), an average external correlation of 0.48 (test subset), and a Lin's concordance correlation coefficient (which evaluates the 1:1 fit) of 0.61. Therefore, the rules derived were mapped and the mean prediction for each 90-m pixel was used for the final logEC1:5 map. This is the most detailed picture of soil salinity over Australia since the 2001 National Land and Water Resources Audit and is generally consistent with it. Our map will be useful as a baseline salinity map circa 2008, when the NGSA samples were collected, for future State of the Environment reports.
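
    Lin's concordance correlation coefficient used here to evaluate the 1:1 fit is a standard statistic and can be computed directly from its definition (the data in the example are invented):

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient: agreement of predictions
    with observations relative to the 1:1 line (standard definition),
    combining precision (Pearson r) and accuracy (bias) in one number."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()                  # population variances
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (vx + vy + (mx - my) ** 2)

ccc = lins_ccc([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])   # ≈ 0.989
```

    Unlike Pearson's r, the coefficient penalizes any systematic shift or scaling away from the 1:1 line, which is why it is the appropriate validation metric for a predicted-versus-measured map.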

  18. Wind and Photovoltaic Large-Scale Regional Models for hourly production evaluation

    DEFF Research Database (Denmark)

    Marinelli, Mattia; Maule, Petr; Hahmann, Andrea N.

    2015-01-01

    This work presents two large-scale regional models used for the evaluation of normalized power output from wind turbines and photovoltaic power plants on a European regional scale. The models give an estimate of renewable production on a regional scale with 1 h resolution, starting from a mesoscale...

  19. Proposing an Educational Scaling-and-Diffusion Model for Inquiry-Based Learning Designs

    Science.gov (United States)

    Hung, David; Lee, Shu-Shing

    2015-01-01

    Education cannot adopt the linear model of scaling used by the medical sciences. "Gold standards" cannot be replicated without considering process-in-learning, diversity, and student-variedness in classrooms. This article proposes a nuanced model of educational scaling-and-diffusion, describing the scaling (top-down supports) and…

  20. Modeling Small Scale Solar Powered ORC Unit for Standalone Application

    Directory of Open Access Journals (Sweden)

    Enrico Bocci

    2012-01-01

    When electricity from the grid is not available, the generation of electricity in remote areas is an essential challenge to satisfy important needs. In many developing countries power generation from Diesel engines is the applied technical solution. However, the cost and supply of fuel create a strong dependency of the communities on external support. Alternatives to fuel combustion can be found in photovoltaic generators and, under suitable conditions, small wind turbines or micro-hydro plants. The aim of the paper is to simulate the power generation of a unit based on an Organic Rankine Cycle using refrigerant R245fa as the working fluid. The generation unit has solar thermal panels as heat source and photovoltaic modules for the needs of the auxiliary items (pumps, electronics, etc.). The paper illustrates the modeling of the system on the TRNSYS platform, highlighting standard and ad hoc developed components as well as the global system efficiency. In the future the results of the simulation will be compared with data collected from the 3 kW prototype under construction at Tuscia University in Italy.
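
    For a simple ORC loop like the one modelled here, the first-law cycle efficiency reduces to turbine work minus pump work over evaporator heat input. The enthalpy values in the example are invented for illustration, not R245fa property data.

```python
def orc_efficiency(h1, h2, h3, h4):
    """First-law efficiency of a simple ORC loop: (turbine work - pump work)
    over evaporator heat input. State numbering: 1 pump inlet, 2 evaporator
    inlet, 3 turbine inlet, 4 condenser inlet; enthalpies in kJ/kg."""
    w_turbine = h3 - h4
    w_pump = h2 - h1
    q_in = h3 - h2
    return (w_turbine - w_pump) / q_in

# Invented enthalpy values (not R245fa property data), for illustration only.
eta = orc_efficiency(250.0, 252.0, 480.0, 455.0)   # ≈ 0.10
```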

  1. Comparing large-scale computational approaches to epidemic modeling: Agent-based versus structured metapopulation models

    Directory of Open Access Journals (Sweden)

    Merler Stefano

    2010-06-01

    Background: In recent years large-scale computational models for the realistic simulation of epidemic outbreaks have been used with increased frequency. Methodologies adapt to the scale of interest and range from very detailed agent-based models to spatially-structured metapopulation models. One major issue thus concerns to what extent the geotemporal spreading pattern found by different modeling approaches may differ and depend on the different approximations and assumptions used. Methods: We provide for the first time a side-by-side comparison of the results obtained with a stochastic agent-based model and a structured metapopulation stochastic model for the progression of a baseline pandemic event in Italy, a large and geographically heterogeneous European country. The agent-based model is based on the explicit representation of the Italian population through highly detailed data on the socio-demographic structure. The metapopulation simulations use the GLobal Epidemic and Mobility (GLEaM) model, based on high-resolution census data worldwide, and integrating airline travel flow data with short-range human mobility patterns at the global scale. The model also considers age structure data for Italy. GLEaM and the agent-based models are synchronized in their initial conditions by using the same disease parameterization, and by defining the same importation of infected cases from international travels. Results: The results obtained show that both models provide epidemic patterns that are in very good agreement at the granularity levels accessible by both approaches, with differences in peak timing on the order of a few days. The relative difference of the epidemic size depends on the basic reproductive ratio, R0, and on the fact that the metapopulation model consistently yields a larger incidence than the agent-based model, as expected due to the differences in the structure of the intra-population contact pattern of the approaches.
The age
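
    As a baseline for comparing epidemic models synchronized on the same disease parameterization, the well-mixed SIR final-size relation links the attack rate directly to R0. This is a textbook relation, not part of either model above; both models deviate from it precisely because of their contact and mobility structure.

```python
import math

def sir_final_size(r0, tol=1e-12, max_iter=200):
    """Final epidemic size z of a well-mixed SIR model, solving the textbook
    fixed-point relation z = 1 - exp(-R0 * z) by iteration."""
    z = 0.5
    for _ in range(max_iter):
        z_next = 1.0 - math.exp(-r0 * z)
        if abs(z_next - z) < tol:
            return z_next
        z = z_next
    return z

attack_rate = sir_final_size(1.5)   # ≈ 0.58 of the population infected
```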

  2. Large-scale secondary circulations in the regional climate model COSMO-CLM

    OpenAIRE

    Becker, Nico

    2016-01-01

    Regional climate models (RCMs) are used to add smaller scales to coarser resolved driving data, e.g. from global climate models (GCMs), by using a higher resolution on a limited domain. However, RCMs do not only add scales which are not resolved by the driving model but also deviate from the driving data on larger scales. Thus, RCMs are able to improve the large scales prescribed by the driving data. However, large-scale deviations can also lead to instabilities at the model boundaries. A sy...

  3. A model of charge collection for irradiated p+n detectors

    CERN Document Server

    Martí i García, S; Casse, G; Greenall, A

    2001-01-01

    The charge collection in irradiated p+n silicon detectors was studied as a function of the reverse bias voltage. Oxygenated and non-oxygenated devices were irradiated beyond type inversion with 24 GeV/c protons. The charge collection is successfully described with a model based on the hypothesis that charge trapping depends on the carrier velocity. With this model, values for the full depletion voltage are extracted that show good agreement with those measured using the CV technique. The model allows a quantitative understanding of why, although oxygenation of p+n devices substantially improves the full depletion voltage, much less improvement is observed in the charge collection efficiency.

  4. Linear Inverse Modeling and Scaling Analysis of Drainage Inventories.

    Science.gov (United States)

    O'Malley, C.; White, N. J.

    2016-12-01

    constants can be shown to produce reliable uplift histories. However, these erosional constants appear to vary from continent to continent. Future work will investigate the global relationship between our inversion results, scaling laws, climate models, lithological variation and sedimentary flux.

  5. A numerical model for dynamic crustal-scale fluid flow

    Science.gov (United States)

    Sachau, Till; Bons, Paul; Gomez-Rivas, Enrique; Koehn, Daniel

    2015-04-01

    Fluid flow in the crust is often envisaged and modeled as continuous, yet minimal flow, which occurs over large geological times. This is a suitable approximation for flow as long as it is solely controlled by the matrix permeability of rocks, which in turn is controlled by viscous compaction of the pore space. However, strong evidence (hydrothermal veins and ore deposits) exists that a significant part of fluid flow in the crust occurs strongly localized in both space and time, controlled by the opening and sealing of hydrofractures. We developed, tested and applied a novel computer code, which considers this dynamic behavior and couples it with steady, Darcian flow controlled by the matrix permeability. In this dual-porosity model, fractures open depending on the fluid pressure relative to the solid pressure. Fractures form when matrix permeability is insufficient to accommodate fluid flow resulting from compaction, decompression (Staude et al. 2009) or metamorphic dehydration reactions (Weisheit et al. 2013). Open fractures can close when the contained fluid either seeps into the matrix or escapes by fracture propagation: mobile hydrofractures (Bons, 2001). In the model, closing and sealing of fractures is controlled by a time-dependent viscous law, which is based on the effective stress and on either Newtonian or non-Newtonian viscosity. Our simulations indicate that the bulk of crustal fluid flow in the middle to lower upper crust is intermittent, highly self-organized, and occurs as mobile hydrofractures. This is due to the low matrix porosity and permeability, combined with a low matrix viscosity and, hence, fast sealing of fractures. Stable fracture networks, generated by fluid overpressure, are restricted to the uppermost crust. Semi-stable fracture networks can develop in an intermediate zone, if a critical overpressure is reached. Flow rates in mobile hydrofractures exceed those in the matrix porosity and fracture networks by orders of magnitude.

  6. A rainfall disaggregation scheme for sub-hourly time scales: Coupling a Bartlett-Lewis based model with adjusting procedures

    Science.gov (United States)

    Kossieris, Panagiotis; Makropoulos, Christos; Onof, Christian; Koutsoyiannis, Demetris

    2018-01-01

    Many hydrological applications, such as flood studies, require long rainfall records at fine time scales, varying from daily down to a 1-min time step. However, in the real world there is limited availability of data at sub-hourly scales. To cope with this issue, stochastic disaggregation techniques are typically employed to produce possible, statistically consistent rainfall events that aggregate up to the field data collected at coarser scales. A methodology for the stochastic disaggregation of rainfall at fine time scales was recently introduced, combining the Bartlett-Lewis process to generate rainfall events with adjusting procedures to modify the lower-level variables (i.e., hourly) so as to be consistent with the higher-level one (i.e., daily). In the present paper, we extend the aforementioned scheme, initially designed and tested for the disaggregation of daily rainfall into hourly depths, to any sub-hourly time scale. In addition, we take advantage of recent developments in Poisson-cluster processes, incorporating into the methodology a Bartlett-Lewis model variant that introduces dependence between cell intensity and duration in order to capture the variability of rainfall at sub-hourly time scales. The disaggregation scheme is implemented in an R package, named HyetosMinute, to support disaggregation from daily down to the 1-min time scale. The applicability of the methodology was assessed on 5-min rainfall records collected in Bochum, Germany, comparing the performance of the above-mentioned model variant against the original Bartlett-Lewis process (non-random, with 5 parameters). The analysis shows that the disaggregation process adequately reproduces the most important statistical characteristics of rainfall at a wide range of time scales, while the introduction of the model with dependent intensity-duration results in better performance in terms of skewness, rainfall extremes and dry proportions.
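
    The adjusting step can be illustrated with its simplest variant, the proportional adjusting procedure: synthetic lower-level depths are rescaled so they sum exactly to the observed higher-level depth. HyetosMinute implements more elaborate procedures; this sketch only shows the idea, with invented depths.

```python
def proportional_adjust(fine_depths, coarse_total):
    """Proportional adjusting procedure: rescale synthetic lower-level
    rainfall depths so they sum exactly to the observed higher-level depth.
    (A simplified sketch; HyetosMinute offers more elaborate procedures.)"""
    s = sum(fine_depths)
    if s == 0.0:
        # Degenerate dry simulation: spread the observed depth uniformly.
        return [coarse_total / len(fine_depths)] * len(fine_depths)
    return [d * coarse_total / s for d in fine_depths]

# Four synthetic 15-min depths adjusted to an observed 4.0 mm hourly depth.
adjusted = proportional_adjust([1.0, 3.0, 0.0, 4.0], 4.0)   # sums to 4.0
```

    Rescaling preserves the within-period temporal pattern of the synthetic event while enforcing exact mass consistency with the coarser observation.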

  7. Government information collections in the networked environment new issues and models

    CERN Document Server

    Cheverie, Joan F

    2013-01-01

    This insightful book explores the challenging issues related to effective access to government information. Amidst all the chaos of today's dynamic information transition period, the only constants related to government information are change and inconsistency; yet with Government Information Collections in the Networked Environment: New Issues and Models, you will defeat the challenging issues and take advantage of the opportunities that networked government information collections have to offer. This valuable book gives you a fresh opportunity to rethink collecting activities and to

  8. Mokken Scale Analysis for Dichotomous Items Using Marginal Models

    Science.gov (United States)

    van der Ark, L. Andries; Croon, Marcel A.; Sijtsma, Klaas

    2008-01-01

    Scalability coefficients play an important role in Mokken scale analysis. For a set of items, scalability coefficients have been defined for each pair of items, for each individual item, and for the entire scale. Hypothesis testing with respect to these scalability coefficients has not been fully developed. This study introduces marginal modelling…

  9. Strategies for Measuring Wind Erosion for Regional Scale Modeling

    NARCIS (Netherlands)

    Youssef, F.; Visser, S.; Karssenberg, D.J.; Slingerland, E.; Erpul, G.; Ziadat, F.; Stroosnijder, L. Prof.dr.ir.

    2012-01-01

    Windblown sediment transport is mostly measured at field or plot scale due to the high spatial variability over the study area. Regional scale measurements are often limited to measurements of the change in the elevation providing information on net erosion or deposition. For the calibration and

  10. Married with Children : A Collective Labor Supply Model with Detailed Time Use and Intrahousehold Expenditure Information

    NARCIS (Netherlands)

    Cherchye, L.J.H.; de Rock, B.; Vermeulen, F.M.P.

    2010-01-01

    We propose a collective labor supply model with household production that generalizes an original model of Blundell, Chiappori and Meghir (2005). In our model, adults' individual preferences depend not only on their own leisure and individual private consumption of market goods. They also depend on the

  11. Ares I Scale Model Acoustic Test Instrumentation for Acoustic and Pressure Measurements

    Science.gov (United States)

    Vargas, Magda B.; Counter, Douglas

    2011-01-01

    The Ares I Scale Model Acoustic Test (ASMAT) is a 5% scale model test of the Ares I vehicle, launch pad and support structures conducted at MSFC to verify acoustic and ignition environments and evaluate water suppression systems. Test design considerations: measurements on the 5% model must be scaled to full scale, requiring high-frequency measurements, and users had different frequencies of interest. For acoustics, 200-2,000 Hz full scale equals 4,000-40,000 Hz model scale; for the ignition transient, 0-100 Hz full scale equals 0-2,000 Hz model scale. Environment exposure included weather exposure (heat, humidity, thunderstorms, rain, cold and snow) and test environments (plume impingement heat and pressure, and water deluge impingement). Several types of sensors were used to measure the environments, and different instrument mounts were used according to the location and exposure to the environment. This presentation addresses the observed effects of the selected sensors and mount design on the acoustic and pressure measurements.
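
    The frequency scaling quoted above follows directly from the geometric scale factor: for a 5% (1/20th) model, frequencies scale by the inverse of the scale factor.

```python
def model_scale_band(f_full_lo, f_full_hi, scale):
    """Convert a full-scale frequency band to model scale: for a geometric
    scale factor (0.05 for a 5% model), frequencies scale as 1/scale."""
    return f_full_lo / scale, f_full_hi / scale

acoustics = model_scale_band(200.0, 2000.0, 0.05)    # ≈ (4000, 40000) Hz
ignition = model_scale_band(0.0, 100.0, 0.05)        # ≈ (0, 2000) Hz
```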

  12. Drift-Scale Coupled Processes (DST and TH Seepage) Models

    Energy Technology Data Exchange (ETDEWEB)

    J. Birkholzer; S. Mukhopadhyay

    2004-09-29

    The purpose of this report is to document drift-scale modeling work performed to evaluate the thermal-hydrological (TH) behavior in Yucca Mountain fractured rock close to waste emplacement drifts. The heat generated by the decay of radioactive waste results in rock temperatures elevated from ambient for thousands of years after emplacement. Depending on the thermal load, these temperatures are high enough to cause boiling conditions in the rock, giving rise to water redistribution and altered flow paths. The predictive simulations described in this report are intended to investigate fluid flow in the vicinity of an emplacement drift for a range of thermal loads. Understanding the TH coupled processes is important for the performance of the repository because the thermally driven water saturation changes affect the potential seepage of water into waste emplacement drifts. Seepage of water is important because if enough water gets into the emplacement drifts and comes into contact with any exposed radionuclides, it may then be possible for the radionuclides to be transported out of the drifts and to the groundwater below the drifts. For above-boiling rock temperatures, vaporization of percolating water in the fractured rock overlying the repository can provide an important barrier capability that greatly reduces (and possibly eliminates) the potential of water seeping into the emplacement drifts. In addition to this thermal process, water is inhibited from entering the drift opening by capillary forces, which occur under both ambient and thermal conditions (capillary barrier). The combined barrier capability of vaporization processes and capillary forces in the near-field rock during the thermal period of the repository is analyzed and discussed in this report.

  13. Training Systems Modelers through the Development of a Multi-scale Chagas Disease Risk Model

    Science.gov (United States)

    Hanley, J.; Stevens-Goodnight, S.; Kulkarni, S.; Bustamante, D.; Fytilis, N.; Goff, P.; Monroy, C.; Morrissey, L. A.; Orantes, L.; Stevens, L.; Dorn, P.; Lucero, D.; Rios, J.; Rizzo, D. M.

    2012-12-01

    The goal of our NSF-sponsored Division of Behavioral and Cognitive Sciences grant is to create a multidisciplinary approach to develop spatially explicit models of vector-borne disease risk using Chagas disease as our model. Chagas disease is a parasitic disease endemic to Latin America that afflicts an estimated 10 million people. The causative agent (Trypanosoma cruzi) is most commonly transmitted to humans by blood feeding triatomine insect vectors. Our objectives are: (1) advance knowledge on the multiple interacting factors affecting the transmission of Chagas disease, and (2) provide next generation genomic and spatial analysis tools applicable to the study of other vector-borne diseases worldwide. This funding is a collaborative effort between the RSENR (UVM), the School of Engineering (UVM), the Department of Biology (UVM), the Department of Biological Sciences (Loyola (New Orleans)) and the Laboratory of Applied Entomology and Parasitology (Universidad de San Carlos). Throughout this five-year study, multi-educational groups (i.e., high school, undergraduate, graduate, and postdoctoral) will be trained in systems modeling. This systems approach challenges students to incorporate environmental, social, and economic as well as technical aspects and enables modelers to simulate and visualize topics that would either be too expensive, complex or difficult to study directly (Yasar and Landau 2003). We launch this research by developing a set of multi-scale, epidemiological models of Chagas disease risk using STELLA® software v.9.1.3 (isee systems, inc., Lebanon, NH). We use this particular system dynamics software as a starting point because of its simple graphical user interface (e.g., behavior-over-time graphs, stock/flow diagrams, and causal loops). To date, high school and undergraduate students have created a set of multi-scale (i.e., homestead, village, and regional) disease models. Modeling the system at multiple spatial scales forces recognition that

  14. Confined swirling jet predictions using a multiple-scale turbulence model

    Science.gov (United States)

    Chen, C. P.

    1985-01-01

    A recently developed multiple-scale turbulence model is used for the numerical prediction of isothermal, confined turbulent swirling flows. Because of the streamline curvature and the non-equilibrium spectral energy transfer nature of swirling flow, the multiple-scale turbulence model includes a different set of response equations for each of the large-scale energetic eddies and the small-scale transfer eddies. Predictions are made of a confined coaxial swirling jet in a sudden expansion, and comparisons are made with experimental data and with the conventional single-scale two-equation model. The multiple-scale model shows significant improvement in predictions of swirling flows over the single-scale k-epsilon model. A sensitivity study of the effect of prescribed inlet turbulence levels on the flow fields is also included.

  15. Validation and Simulation of ARES I Scale Model Acoustic Test -1- Pathfinder Development

    Science.gov (United States)

    Putnam, G. C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity measurements useful for validation, including data taken over a range of test conditions and containing phenomena like ignition overpressure and water suppression of acoustics. To take advantage of this data, a digital representation of the ASMAT test setup has been constructed and test firings of the motor have been simulated using the Loci/CHEM computational fluid dynamics software. In this first of a series of papers, results from ASMAT simulations with the rocket in a held-down configuration and without water suppression have been compared to acoustic data collected from similar live-fire tests to assess the accuracy of the simulations. Detailed evaluations of the mesh features, mesh length scales relative to acoustic signals, Courant-Friedrichs-Lewy numbers, and spatial residual sources have been performed to support this assessment. Acoustic comparisons have shown good correlation with the amplitude and temporal shape of pressure features and reasonable spectral accuracy up to approximately 1000 Hz. Major plume and acoustic features have been well captured, including the plume shock structure, the igniter pulse transient, and the ignition overpressure. Finally, acoustic propagation patterns illustrated a previously unconsidered issue of tower placement in line with the high-intensity overpressure propagation path.

  16. Reduced Fracture Finite Element Model Analysis of an Efficient Two-Scale Hybrid Embedded Fracture Model

    KAUST Repository

    Amir, Sahar Z.

    2017-06-09

    A Hybrid Embedded Fracture (HEF) model was developed to reduce various computational costs while maintaining physical accuracy (Amir and Sun, 2016). HEF splits the computations into a fine scale and a coarse scale. The fine scale solves analytically for the matrix-fracture flux exchange parameter; the coarse scale solves for the properties of the entire system. In the literature, fractures were assumed to be either vertical or horizontal for simplification (Warren and Root, 1963), and the matrix-fracture flux exchange parameter was given a few equations built on that assumption (Kazemi, 1968; Lemonnier and Bourbiaux, 2010). However, such simplified cases do not apply directly to actual random fracture shapes, directions, and orientations. This paper shows that the HEF fine-scale analytic solution (Amir and Sun, 2016) reproduces the flux exchange parameter found in the literature for the vertical and horizontal fracture cases. For other fracture cases, the flux exchange parameter changes according to the angle, slope, and direction of the fracture. This conclusion arises from the analysis of both the Discrete Fracture Network (DFN) and the HEF schemes. The behavior of both schemes is analyzed under identical fracture conditions, and the results are shown and discussed. A generalization is then illustrated for any slightly compressible single-phase fluid within fractured porous media, and its results are discussed.

  17. iAK692: a genome-scale metabolic model of Spirulina platensis C1.

    Science.gov (United States)

    Klanchui, Amornpan; Khannapho, Chiraphan; Phodee, Atchara; Cheevadhanarak, Supapon; Meechai, Asawin

    2012-06-15

    Spirulina (Arthrospira) platensis is a well-known filamentous cyanobacterium used in the production of many industrial products, including high-value compounds, healthy food supplements, animal feeds, pharmaceuticals, and cosmetics. It has been increasingly studied around the world for scientific purposes, especially for its genome, biology, and physiology, and for the analysis of its small-scale metabolic network. However, an overall description of the metabolic and biotechnological capabilities of S. platensis requires the development of a whole-cell metabolism model. Recently, the S. platensis C1 (Arthrospira sp. PCC9438) genome sequence has become available, allowing systems-level studies of this commercial cyanobacterium. In this work, we present the genome-scale metabolic network analysis of S. platensis C1, iAK692, its topological properties, and its metabolic capabilities and functions. The network was reconstructed from the S. platensis C1 annotated genomic sequence using Pathway Tools software to generate a preliminary network. Then, manual curation was performed based on a collective knowledge base and a combination of genomic, biochemical, and physiological information. The genome-scale metabolic model consists of 692 genes, 837 metabolites, and 875 reactions. We validated iAK692 by conducting fermentation experiments and simulating the model under autotrophic, heterotrophic, and mixotrophic growth conditions using the COBRA toolbox. The model predictions under these growth conditions were consistent with the experimental results. The iAK692 model was further used to predict the unique active reactions and essential genes for each growth condition. Additionally, the metabolic states of iAK692 during autotrophic and mixotrophic growth were described by phenotypic phase plane (PhPP) analysis. This study proposes the first genome-scale model of S. platensis C1, iAK692, which is a predictive metabolic platform for a global understanding of
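    The growth simulations described above rest on flux balance analysis: maximize a biomass objective subject to steady-state mass balance Sv = 0 and flux bounds. The sketch below illustrates the technique on a hypothetical three-reaction toy network (not the iAK692 model itself), using a plain linear-programming solver rather than the COBRA toolbox.

    ```python
    import numpy as np
    from scipy.optimize import linprog

    # Toy network (illustrative only):
    #   v1: uptake of substrate A (bounded at 10 mmol/gDW/h)
    #   v2: conversion A -> B
    #   v3: biomass drain on B (the objective)
    # Steady state requires S v = 0 for the internal metabolites A and B.
    S = np.array([
        [1, -1,  0],   # A: produced by v1, consumed by v2
        [0,  1, -1],   # B: produced by v2, consumed by v3
    ])
    c = [0, 0, -1]                       # linprog minimizes, so maximize v3
    bounds = [(0, 10), (0, None), (0, None)]

    res = linprog(c, A_eq=S, b_eq=np.zeros(2), bounds=bounds, method="highs")
    growth = res.x[2]
    print(f"optimal biomass flux: {growth:.2f}")  # limited by the uptake bound
    ```

    In a genome-scale model the same linear program simply has hundreds of metabolites (rows) and reactions (columns); switching between autotrophic, heterotrophic, and mixotrophic conditions amounts to changing the uptake bounds.
    
    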

  18. iAK692: A genome-scale metabolic model of Spirulina platensis C1

    Directory of Open Access Journals (Sweden)

    Klanchui Amornpan

    2012-06-01

    Full Text Available Abstract Background Spirulina (Arthrospira) platensis is a well-known filamentous cyanobacterium used in the production of many industrial products, including high-value compounds, healthy food supplements, animal feeds, pharmaceuticals, and cosmetics. It has been increasingly studied around the world for scientific purposes, especially for its genome, biology, and physiology, and for the analysis of its small-scale metabolic network. However, an overall description of the metabolic and biotechnological capabilities of S. platensis requires the development of a whole-cell metabolism model. Recently, the S. platensis C1 (Arthrospira sp. PCC9438) genome sequence has become available, allowing systems-level studies of this commercial cyanobacterium. Results In this work, we present the genome-scale metabolic network analysis of S. platensis C1, iAK692, its topological properties, and its metabolic capabilities and functions. The network was reconstructed from the S. platensis C1 annotated genomic sequence using Pathway Tools software to generate a preliminary network. Then, manual curation was performed based on a collective knowledge base and a combination of genomic, biochemical, and physiological information. The genome-scale metabolic model consists of 692 genes, 837 metabolites, and 875 reactions. We validated iAK692 by conducting fermentation experiments and simulating the model under autotrophic, heterotrophic, and mixotrophic growth conditions using the COBRA toolbox. The model predictions under these growth conditions were consistent with the experimental results. The iAK692 model was further used to predict the unique active reactions and essential genes for each growth condition. Additionally, the metabolic states of iAK692 during autotrophic and mixotrophic growth were described by phenotypic phase plane (PhPP) analysis. Conclusions This study proposes the first genome-scale model of S. platensis C1, iAK692, which is a

  19. Spatially distributed modelling of pesticide leaching at European scale with the PyCatch modelling framework

    Science.gov (United States)

    Schmitz, Oliver; van der Perk, Marcel; Karssenberg, Derek; Häring, Tim; Jene, Bernhard

    2017-04-01

    The modelling of pesticide transport through the soil and the estimation of its leaching to groundwater are essential for an appropriate environmental risk assessment. Pesticide leaching models commonly used in regulatory processes often lack the capability of providing a comprehensive spatial view, as they are implemented as non-spatial point models or only use a few combinations of representative soils to simulate specific plots. Furthermore, their handling of spatial input and output data and their interaction with available Geographical Information Systems tools are limited. Therefore, executing multiple scenario simulations to assess potential leaching at national or continental scale and high resolution is rather inefficient and prohibits the straightforward identification of areas prone to leaching. We present a new pesticide leaching model component of the PyCatch framework developed in PCRaster Python, an environmental modelling framework tailored to the development of spatio-temporal models (http://www.pcraster.eu). To ensure a feasible computational runtime for large-scale models, we implemented an elementary field-capacity approach to model soil water. The currently implemented processes are evapotranspiration, advection, dispersion, sorption, degradation, and metabolite transformation. Relevant processes not yet implemented, such as surface runoff, snowmelt, erosion, or other lateral flows, can be integrated with components already implemented in PyCatch. A preliminary version of the model executes a 20-year simulation of soil water processes for Germany (20 soil layers, 1 km² spatial resolution, and daily timestep) within half a day using a single CPU. A comparison of the soil moisture and outflow obtained from the PCRaster implementation and PELMO, a commonly used pesticide leaching model, resulted in an R² of 0.98 for the FOCUS Hamburg scenario. We will further discuss the validation of the pesticide transport processes and show case studies applied to
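    The elementary field-capacity ("tipping bucket") approach mentioned above can be sketched in a few lines: water above field capacity drains to the layer below, carrying solute by advection, while the pesticide degrades first-order. The parameters and layer setup below are illustrative assumptions, not values from PyCatch.

    ```python
    import numpy as np

    n_layers, n_days = 5, 100
    fc = 0.06                            # field capacity as water depth per layer (m) -- assumed
    theta = np.full(n_layers, 0.05)      # water content per layer (m)
    mass = np.zeros(n_layers)            # pesticide mass per layer (kg/ha)
    mass[0] = 1.0                        # surface application
    k_deg = 0.02                         # first-order degradation rate (1/day) -- assumed
    rain = 0.004                         # daily infiltration (m/day) -- assumed

    leached = 0.0
    for day in range(n_days):
        theta[0] += rain
        for i in range(n_layers):
            excess = theta[i] - fc       # water above field capacity drains
            if excess > 0:
                frac = excess / theta[i]             # fraction of layer water draining
                flux = frac * mass[i]                # solute advected with that water
                theta[i] -= excess
                mass[i] -= flux
                if i + 1 < n_layers:
                    theta[i + 1] += excess
                    mass[i + 1] += flux
                else:
                    leached += flux                  # leaves the bottom of the profile
        mass *= np.exp(-k_deg)                       # degradation in all layers

    print(f"leached fraction after {n_days} days: {leached:.3f}")
    ```

    In a spatially distributed model such as PyCatch, this column update runs on every grid cell of the raster, which is why a cheap per-cell scheme is what makes continental-scale runs feasible.
    
    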

  20. Game Theory Models for the Verification of the Collective Behaviour of Autonomous Cars

    OpenAIRE

    Varga, László Z.

    2017-01-01

    The collective of autonomous cars is expected to generate almost optimal traffic. In this position paper we discuss the multi-agent models and the verification results of the collective behaviour of autonomous cars. We argue that non-cooperative autonomous adaptation cannot guarantee optimal behaviour. The conjecture is that intention aware adaptation with a constraint on simultaneous decision making has the potential to avoid unwanted behaviour. The online routing game model is expected to b...

  1. Using local scale 222Rn data to calibrate large scale SGD numerical modeling along the Alabama coastline

    Science.gov (United States)

    Dimova, N. T.

    2016-02-01

    Current Earth System Models (ESMs) do not include groundwater as a transport mechanism for land-borne constituents to the ocean. However, coastal hydrogeological studies from the last two decades indicate that significant material fluxes are transported from land to the continental shelf via submarine groundwater discharge (SGD). Constructing realistic large-scale models to assess water and constituent fluxes to coastal areas is therefore fundamental. This paper demonstrates how an independent groundwater tracer approach (based on 222Rn) applied to a small-scale aquifer system can be used to improve the precision of a larger-scale numerical model along the Alabama coastline. Presented here is a case study from the Alabama coastline in the northern Gulf of Mexico (GOM). A simple field technique was used to obtain the groundwater seepage (2.4 cm/day) to a small nearshore lake representative of the shallow coastal aquifer. These data were then converted into a site-specific hydraulic conductivity (23 m/day) using Darcy's law and incorporated into a regional numerical groundwater flow model (MODFLOW/SEAWAT) to improve total SGD flow estimates to the GOM. Given the growing awareness of the importance of SGD for material fluxes into the ocean, better calibration of regional-scale models is critical for realistic forecasts of the potential impacts of climate change and anthropogenic activities.
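    The Darcy's-law conversion quoted above is a one-line calculation, q = K·i, relating the specific discharge q to the hydraulic conductivity K through the hydraulic gradient i. The abstract gives q and K but not the gradient, so the sketch below simply back-computes the gradient implied by the two reported numbers.

    ```python
    # Values taken from the abstract; the hydraulic gradient is the implied unknown.
    q = 0.024   # measured seepage (specific discharge), m/day (= 2.4 cm/day)
    K = 23.0    # reported site-specific hydraulic conductivity, m/day

    # Darcy's law: q = K * i  =>  gradient consistent with both numbers
    i = q / K
    print(f"implied hydraulic gradient: {i:.2e}")
    ```

    The result, roughly 1e-3, is a plausible head gradient for a flat coastal plain, which is a quick sanity check on the reported conversion.
    
    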

  2. Scale effect challenges in urban hydrology highlighted with a distributed hydrological model

    Directory of Open Access Journals (Sweden)

    A. Ichiba

    2018-01-01

    Full Text Available Hydrological models are extensively used in urban water management, in the development and evaluation of future scenarios, and in research activities. There is a growing interest in the development of fully distributed and grid-based models. However, some complex questions related to scale effects are not yet fully understood and remain open issues in urban hydrology. In this paper we propose a two-step investigation framework to illustrate the extent of scale effects in urban hydrology. First, fractal tools are used to highlight the scale dependence observed within the distributed data input into urban hydrological models. Then an intensive multi-scale modelling exercise is carried out to understand scale effects on hydrological model performance. Investigations are conducted using a fully distributed and physically based model, Multi-Hydro, developed at Ecole des Ponts ParisTech. The model is implemented at 17 spatial resolutions ranging from 100 m down to 5 m. Results clearly exhibit scale-effect challenges in urban hydrological modelling. The applicability of fractal concepts highlights the scale dependence observed within the distributed data: patterns of geophysical data change when the size of the observation pixel changes. The multi-scale modelling investigation confirms scale effects on hydrological model performance. Results are analysed over three ranges of scales identified in the fractal analysis and confirmed through modelling. This work also discusses some remaining issues in urban hydrological modelling related to the availability of high-quality data at high resolutions, model numerical instabilities, and computation time requirements. The main findings of this paper suggest replacing traditional methods of model calibration with methods that alter model resolution based on spatial data variability and the scaling of flows in urban hydrology.
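    The fractal analysis of gridded input data referred to above is typically a box-counting exercise: count the occupied boxes N(s) at a series of box sizes s and estimate the dimension D from the slope of log N against log(1/s). The sketch below demonstrates the technique on a fully occupied grid (whose dimension is exactly 2); the specific procedure used in the paper may differ.

    ```python
    import numpy as np

    def box_counting_dimension(grid):
        """Estimate the fractal (box-counting) dimension of a square binary grid."""
        n = grid.shape[0]
        sizes, counts = [], []
        s = 1
        while s <= n // 2:
            # partition the grid into s x s boxes and count the non-empty ones
            boxes = grid.reshape(n // s, s, n // s, s).any(axis=(1, 3))
            sizes.append(s)
            counts.append(boxes.sum())
            s *= 2
        # slope of log N(s) versus log(1/s) is the dimension estimate
        slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
        return slope

    grid = np.ones((64, 64), dtype=bool)   # stand-in for a binary land-cover raster
    dim = box_counting_dimension(grid)
    print(f"estimated dimension: {dim:.2f}")  # -> 2.00 for a filled grid
    ```

    For real urban land-cover rasters the estimated dimension falls below 2, and a dimension that varies with the box-size range is precisely the scale dependence the abstract describes.
    
    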

  3. Energy spectrum scaling in an agent-based model for bacterial turbulence

    Science.gov (United States)

    Mikel-Stites, Maxwell; Staples, Anne

    2017-11-01

    Numerous models have been developed to examine the behavior of dense bacterial swarms and to explore the visually striking phenomena of bacterial turbulence. Most models directly impose fluid dynamics physics, either by modeling the active matter as a fluid or by including interactions between the bacteria and a fluid. In this work, however, the `turbulence' is solely an emergent property of the collective behavior of the bacterial population, rather than a consequence of imposed fluid-dynamical modeling. The system is simulated using a two-dimensional Vicsek-style model, with the addition of individual repulsion to simulate bacterial collisions and physical interactions, and without the common flocking or sensing behaviors. Initial results indicate the presence of k^-1 scaling in a portion of the kinetic energy spectrum that can be considered analogous to the inertial subrange in turbulent energy spectra. This result suggests that the interaction of large numbers of individual active bacteria may also be a contributing factor in the emergence of fluid dynamics phenomena, in addition to the physical interactions between bacteria and their fluid environment.
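    A Vicsek-style update of the kind described above consists of an alignment step (each agent adopts the mean heading of its neighbours), a short-range repulsion step, angular noise, and a constant-speed move. The sketch below is a hypothetical minimal implementation; the parameters are illustrative, not taken from the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N, L, v0 = 200, 10.0, 0.05          # agents, box size (periodic), speed
    r_align, r_rep, eta = 1.0, 0.2, 0.3 # alignment radius, repulsion radius, noise

    pos = rng.uniform(0, L, (N, 2))
    theta = rng.uniform(-np.pi, np.pi, N)

    for step in range(200):
        # pairwise displacements with periodic (minimum-image) boundaries
        d = pos[:, None, :] - pos[None, :, :]
        d -= L * np.round(d / L)
        dist = np.hypot(d[..., 0], d[..., 1])

        # alignment: average heading of neighbours within r_align (self included)
        nbr = dist < r_align
        mean_sin = (nbr * np.sin(theta)[None, :]).sum(1)
        mean_cos = (nbr * np.cos(theta)[None, :]).sum(1)
        new_theta = np.arctan2(mean_sin, mean_cos)

        # repulsion: steer directly away from any too-close neighbour
        too_close = (dist < r_rep) & (dist > 0)
        push = (d * too_close[..., None]).sum(1)
        has_push = np.hypot(push[:, 0], push[:, 1]) > 0
        new_theta[has_push] = np.arctan2(push[has_push, 1], push[has_push, 0])

        theta = new_theta + eta * rng.uniform(-np.pi, np.pi, N)
        pos = (pos + v0 * np.column_stack((np.cos(theta), np.sin(theta)))) % L

    # polar order parameter: 1 = fully aligned, ~0 = disordered
    order = np.hypot(np.cos(theta).mean(), np.sin(theta).mean())
    print(f"polar order parameter: {order:.2f}")
    ```

    To probe the k^-1 scaling claim one would instead compute the velocity field on a grid and take its spatial Fourier spectrum; the loop above only shows the agent-level dynamics from which that spectrum emerges.
    
    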

  4. Regional scale ecological risk assessment: using the relative risk model

    National Research Council Canada - National Science Library

    Landis, Wayne G

    2005-01-01

    ...) in the performance of regional-scale ecological risk assessments. The initial chapters present the methodology and the critical nature of the interaction between risk assessors and decision makers...

  5. Improving catchment scale water quality modelling with continuous high resolution monitoring of metals in runoff

    Science.gov (United States)

    Saari, Markus; Rossi, Pekka; Blomberg von der Geest, Kalle; Mäkinen, Ari; Postila, Heini; Marttila, Hannu

    2017-04-01

    High metal concentrations in natural waters are one of the key environmental and health problems globally. Continuous in-situ analysis of metals in runoff water is technically challenging but essential for a better understanding of the processes that lead to pollutant transport. Currently, the typical analytical methods for monitoring elements in liquids are off-line laboratory methods such as ICP-OES (Inductively Coupled Plasma Optical Emission Spectroscopy) and ICP-MS (ICP combined with a mass spectrometer). A disadvantage of both techniques is the time-consuming sample collection, preparation, and off-line analysis under laboratory conditions; they therefore offer no possibility of real-time monitoring of element transport. We combined novel high-resolution on-line monitoring of metal concentrations with catchment-scale physical hydrological modelling in the Mustijoki river in Southern Finland in order to study the dynamics of the processes involved and to form a predictive warning system for the leaching of metals. A novel on-line measurement technique based on micro plasma emission spectroscopy (MPES) is tested for on-line detection of selected elements (e.g. Na, Mg, Al, K, Ca, Fe, Ni, Cu, Cd and Pb) in runoff waters. The preliminary results indicate that MPES can reliably detect and monitor metal concentrations in river water. The Soil and Water Assessment Tool (SWAT) catchment-scale model was further calibrated with the high-resolution metal concentration data. We show that by combining high-resolution monitoring with catchment-scale physically based modelling, further process studies and early warning systems, for example for optimizing drinking water uptake from rivers, can be achieved.

  6. A Coupled fcGCM-GCE Modeling System: A 3D Cloud Resolving Model and a Regional Scale Model

    Science.gov (United States)

    Tao, Wei-Kuo

    2005-01-01

    Recent GEWEX Cloud System Study (GCSS) model comparison projects have indicated that cloud-resolving models (CRMs) agree with observations better than traditional single-column models in simulating various types of clouds and cloud systems from different geographic locations. Current and future NASA satellite programs can provide cloud, precipitation, aerosol, and other data at very fine spatial and temporal scales. Using these satellite data to improve the understanding of the physical processes responsible for variations in global and regional climate and hydrological systems requires a coupled global circulation model (GCM) and cloud-scale model (termed a super-parameterization or multi-scale modeling framework, MMF). The use of a GCM enables global coverage, and the use of a CRM allows for better and more sophisticated physical parameterization. NASA satellite and field-campaign cloud datasets can provide initial conditions as well as validation for both the MMF and CRMs. The Goddard MMF is based on the 2D Goddard Cumulus Ensemble (GCE) model and the Goddard finite-volume general circulation model (fvGCM), and it has started production runs with two years of results (1998 and 1999). Also, at Goddard, we have implemented several Goddard microphysical schemes (2ICE, several 3ICE), Goddard radiation (including explicitly calculated cloud optical properties), and the Land Information System (LIS, which includes the CLM and NOAH land surface models) into a next-generation regional-scale model, WRF. In this talk, I will present: (1) a brief review of the GCE model and its applications to precipitation processes (microphysical and land processes); (2) the Goddard MMF, the major differences between the two existing MMFs (CSU MMF and Goddard MMF), and preliminary results (the comparison with traditional GCMs); (3) a discussion of the Goddard WRF version (its developments and applications); and (4) the characteristics of the four-dimensional cloud data

  7. Multiphysics pore-scale model for the rehydration of porous foods

    NARCIS (Netherlands)

    Sman, van der R.G.M.; Vergeldt, F.J.; As, van H.; Dalen, van G.; Voda, A.; Duynhoven, van J.P.M.

    2014-01-01

    In this paper we present a pore-scale model describing the multiphysics occurring during the rehydration of freeze-dried vegetables. This pore-scale model is part of a multiscale simulation model, which should explain the effect of microstructure and pre-treatments on the rehydration rate.

  8. Ares I Scale Model Acoustic Test Liftoff Acoustic Results and Comparisons