WorldWideScience

Sample records for tractable model system

  1. The tractable cognition thesis.

    Science.gov (United States)

    Van Rooij, Iris

    2008-09-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories of cognition. To utilize this constraint, a precise and workable definition of "computational tractability" is needed. Following computer science tradition, many cognitive scientists and psychologists define computational tractability as polynomial-time computability, leading to the P-Cognition thesis. This article explains how and why the P-Cognition thesis may be overly restrictive, risking the exclusion of veridical computational-level theories from scientific investigation. An argument is made to replace the P-Cognition thesis by the FPT-Cognition thesis as an alternative formalization of the Tractable Cognition thesis (here, FPT stands for fixed-parameter tractable). Possible objections to the Tractable Cognition thesis, and its proposed formalization, are discussed, and existing misconceptions are clarified. 2008 Cognitive Science Society, Inc.
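
    To make the contrast concrete, the two tractability notions discussed above can be stated side by side (standard definitions from parameterized complexity theory, quoted here for orientation rather than taken verbatim from the article):

        \text{P-Cognition:}\quad    T(n)   = O(n^{c}), \quad c \text{ a constant}
        \text{FPT-Cognition:}\quad  T(n,k) = O\big(f(k)\cdot n^{c}\big), \quad f \text{ any computable function of a parameter } k

    For example, a running time of O(2^k · n) is fixed-parameter tractable: the exponential blow-up is confined to a (presumably small) problem parameter k, while the dependence on the overall input size n stays polynomial.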

  2. A Tractable Approach to Pass-Through Patterns

    OpenAIRE

    E. Weyl; Michal Fabinger

    2015-01-01

    For tractability, researchers often use equilibrium models that can be solved in closed-form. In practice, this means imposing unintended substantive restrictions on incidence properties that are central to many policy questions. To overcome this limitation, we characterize a set of joint supply and demand systems yielding closed-form solutions. This class is broad enough to allow substantial flexibility and thus realism, and it nests virtually all other tractable systems in the literature. W...

  3. The Tractable Cognition thesis

    NARCIS (Netherlands)

    Rooij, I.J.E.I. van

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the Tractable Cognition thesis: Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by

  4. The Tractable Cognition Thesis

    Science.gov (United States)

    van Rooij, Iris

    2008-01-01

    The recognition that human minds/brains are finite systems with limited resources for computation has led some researchers to advance the "Tractable Cognition thesis": Human cognitive capacities are constrained by computational tractability. This thesis, if true, serves cognitive psychology by constraining the space of computational-level theories…

  5. An analytically tractable model for community ecology with many species

    Science.gov (United States)

    Dickens, Benjamin; Fisher, Charles; Mehta, Pankaj; Pankaj Mehta Biophysics Theory Group Team

    A fundamental problem in community ecology is to understand how ecological processes such as selection, drift, and immigration yield observed patterns in species composition and diversity. Here, we present an analytically tractable, presence-absence (PA) model for community assembly and use it to ask how ecological traits such as the strength of competition, diversity in competition, and stochasticity affect species composition in a community. In our PA model, we treat species as stochastic binary variables that can either be present or absent in a community: species can immigrate into the community from a regional species pool and can go extinct due to competition and stochasticity. Despite its simplicity, the PA model reproduces the qualitative features of more complicated models of community assembly. In agreement with recent work on large, competitive Lotka-Volterra systems, the PA model exhibits distinct ecological behaviors organized around a special ("critical") point corresponding to Hubbell's neutral theory of biodiversity. Our results suggest that the concepts of "phases" and phase diagrams can provide a powerful framework for thinking about community ecology and that the PA model captures the essential ecological dynamics of community assembly. PM was supported by a Simons Investigator award in the Mathematical Modeling of Living Systems and a Sloan Research Fellowship.
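
    A minimal sketch of a presence-absence community simulation in the spirit of the PA model described above is given below; the logistic extinction rule, the Gaussian competition matrix, and all parameter values are illustrative assumptions rather than the authors' exact formulation.

        import numpy as np

        rng = np.random.default_rng(0)

        S = 50                      # species in the regional pool
        lam = 0.05                  # immigration probability per absent species per step
        c_mean, c_sd = 0.2, 0.05    # mean and spread of pairwise competition strengths
        noise = 0.1                 # demographic stochasticity

        # random competition matrix (illustrative choice, not the paper's ensemble)
        C = rng.normal(c_mean, c_sd, size=(S, S))
        np.fill_diagonal(C, 0.0)

        present = np.zeros(S, dtype=bool)   # presence/absence state of each species

        def step(present):
            new = present.copy()
            # immigration from the regional species pool
            new[~present] |= rng.random((~present).sum()) < lam
            # extinction pressure grows with total competition felt from co-occurring species
            pressure = C @ present.astype(float)
            p_extinct = 1.0 / (1.0 + np.exp(-(pressure - 1.0) / noise))  # assumed logistic form
            new[present] &= rng.random(present.sum()) >= p_extinct[present]
            return new

        for _ in range(5000):
            present = step(present)

        print("species richness after 5000 steps:", int(present.sum()))

    Sweeping the mean competition strength and the stochasticity parameter in such a toy model is a cheap way to see qualitatively distinct regimes of species richness emerge.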

  6. Analytical Evaluation of Chunk-Based Tractable Multi-cell OFDMA system

    Directory of Open Access Journals (Sweden)

    P.Kavitha

    2018-04-01

    This paper evaluates the performance of a multi-cell OFDMA system. Two deployment types for multi-cell OFDMA, Strict Fractional Frequency Reuse (FFR) and Soft FFR (SFR), were evaluated. The base station locations were modelled with homogeneous Poisson point processes, i.e. a tractable model was considered instead of a hexagonal grid. To reduce complexity, a chunk-based resource allocation scheme was embedded. Each cell divides users into cell-centre users and cell-edge users according to their average received Signal to Interference and Noise Ratio (SINR) compared with the FFR threshold. The first stage of the analysis derives spectral-efficiency expressions for these two deployment scenarios, followed by an analysis based on the coverage probability. Spectral efficiency is improved in the case of SFR, whereas coverage probability is far better under the strict FFR scheme. Through numerical analysis, we show that the optimal FFR threshold to achieve the highest spectral efficiency was 12 dB for both Strict FFR and SFR.

  7. A Tractable Method for Describing Complex Couplings between Neurons and Population Rate.

    Science.gov (United States)

    Gardella, Christophe; Marre, Olivier; Mora, Thierry

    2016-01-01

    Neurons within a population are strongly correlated, but how to simply capture these correlations is still a matter of debate. Recent studies have shown that the activity of each cell is influenced by the population rate, defined as the summed activity of all neurons in the population. However, an explicit, tractable model for these interactions is still lacking. Here we build a probabilistic model of population activity that reproduces the firing rate of each cell, the distribution of the population rate, and the linear coupling between them. This model is tractable, meaning that its parameters can be learned in a few seconds on a standard computer even for large population recordings. We inferred our model for a population of 160 neurons in the salamander retina. In this population, single-cell firing rates depended in unexpected ways on the population rate. In particular, some cells had a preferred population rate at which they were most likely to fire. These complex dependencies could not be explained by a linear coupling between the cell and the population rate. We designed a more general, still tractable model that could fully account for these nonlinear dependencies. We thus provide a simple and computationally tractable way to learn models that reproduce the dependence of each neuron on the population rate.
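
    One simple, purely descriptive way to expose the kind of dependence on the population rate discussed above is to estimate, from binarized spike trains, each cell's empirical firing probability conditioned on the instantaneous population count; the sketch below does only that and is not the authors' probabilistic model.

        import numpy as np

        def population_rate_tuning(spikes):
            """spikes: binary array of shape (n_timebins, n_cells)."""
            K = spikes.sum(axis=1)                        # population rate in each time bin
            n_cells = spikes.shape[1]
            k_values = np.arange(n_cells + 1)
            tuning = np.full((n_cells, n_cells + 1), np.nan)
            for k in k_values:
                mask = K == k
                if mask.sum() > 0:
                    # empirical P(cell i fires | population rate = k)
                    tuning[:, k] = spikes[mask].mean(axis=0)
            return k_values, tuning

        # toy data: 100000 time bins, 20 cells, with a shared gain to induce correlations
        rng = np.random.default_rng(1)
        shared = rng.gamma(2.0, 0.5, size=(100_000, 1))
        spikes = (rng.random((100_000, 20)) < 0.05 * shared).astype(int)
        k, tuning = population_rate_tuning(spikes)
        print(tuning[0, :6])   # cell 0's firing probability at population rates 0..5

    Plotting each row of tuning against k would reveal linear, saturating, or peaked dependencies of the kind reported above for the salamander retina population.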

  8. Space-Bounded Church-Turing Thesis and Computational Tractability of Closed Systems.

    Science.gov (United States)

    Braverman, Mark; Schneider, Jonathan; Rojas, Cristóbal

    2015-08-28

    We report a new limitation on the ability of physical systems to perform computation, one that is based on generalizing the notion of memory, or storage space, available to the system to perform the computation. Roughly, we define memory as the maximal amount of information that the evolving system can carry from one instant to the next. We show that memory is a limiting factor in computation even in the absence of any time limitations on the evolving system, such as when considering its equilibrium regime. We call this limitation the space-bounded Church-Turing thesis (SBCT). The SBCT is supported by a simulation assertion (SA), which states that predicting the long-term behavior of bounded-memory systems is computationally tractable. In particular, one corollary of SA is an explicit bound on the computational hardness of the long-term behavior of a discrete-time finite-dimensional dynamical system that is affected by noise. We prove such a bound explicitly.

  9. On tractable query evaluation for SPARQL

    OpenAIRE

    Mengel, Stefan; Skritek, Sebastian

    2017-01-01

    Despite much work within the last decade on foundational properties of SPARQL - the standard query language for RDF data - rather little is known about the exact limits of tractability for this language. In particular, this is the case for SPARQL queries that contain the OPTIONAL-operator, even though it is one of the most intensively studied features of SPARQL. The aim of our work is to provide a more thorough picture of tractable classes of SPARQL queries. In general, SPARQL query evaluatio...

  10. Processing of (in)tractable polymers using reactive solvents, 4: Structure development in the model system poly(ethylene)/styrene

    NARCIS (Netherlands)

    Goossens, J.G.P.; Rastogi, S.; Meijer, H.E.H.; Lemstra, P.J.

    1998-01-01

    The use of reactive solvents provides a unique opportunity to extend the processing characteristics of both intractable and standard (tractable) polymers beyond existing limits. The polymer to be processed is dissolved in the reactive solvent (monomer) and the solution is transferred into a mould.

  11. Capturing intracellular pH dynamics by coupling its molecular mechanisms within a fully tractable mathematical model.

    Directory of Open Access Journals (Sweden)

    Yann Bouret

    We describe the construction of a fully tractable mathematical model for intracellular pH. This work is based on coupling the kinetic equations depicting the molecular mechanisms for pumps, transporters and chemical reactions, which determine this parameter in eukaryotic cells. Thus, our system also calculates the membrane potential and the cytosolic ionic composition. Such a model required the development of a novel algebraic method that couples differential equations for slow relaxation processes to steady-state equations for fast chemical reactions. Compared to classical heuristic approaches based on fitted curves and ad hoc constants, this yields significant improvements. This model is mathematically self-consistent and makes it possible, for the first time, to establish analytical solutions for steady-state pH and a reduced differential equation for pH regulation. Because of its modular structure, it can integrate any additional mechanism that will directly or indirectly affect pH. In addition, it provides mathematical clarifications for widely observed biological phenomena such as overshooting in regulatory loops. Finally, instead of including a limited set of experimental results to fit our model, we show examples of numerical calculations that are extremely consistent with the wide body of intracellular pH experimental measurements gathered by different groups in many different cellular systems.
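
    The coupling strategy described above (differential equations for slow relaxation processes wrapped around steady-state algebra for fast chemistry) can be sketched generically as follows; the single buffer reaction, the pump law, and all constants are illustrative placeholders, not the model's actual mechanisms.

        import numpy as np
        from scipy.integrate import solve_ivp
        from scipy.optimize import brentq

        # Fast process: a generic intracellular buffer  HA <-> H+ + A-,  Ka = [H+][A-]/[HA]
        Ka, B_tot = 10**-6.8, 20e-3      # illustrative dissociation constant and total buffer (M)

        def free_H(total_acid_equiv):
            """Solve the fast buffer equilibrium for [H+] given the total titratable acid."""
            f = lambda H: H + B_tot * H / (H + Ka) - total_acid_equiv
            return brentq(f, 1e-12, 1e-2)

        # Slow processes: a saturable proton pump (extrusion) and a constant metabolic acid load
        def rhs(t, y, pump_rate=5e-7, acid_load=3e-7):
            total_acid = y[0]
            H = free_H(total_acid)                   # fast subsystem solved algebraically here
            pump = pump_rate * H / (H + 10**-7.0)    # assumed saturable extrusion law
            return [acid_load - pump]

        sol = solve_ivp(rhs, (0.0, 2000.0), [5e-3], max_step=5.0)
        print("final pH:", -np.log10(free_H(sol.y[0, -1])))

    The essential point is that the fast subsystem is solved algebraically at every evaluation of the slow right-hand side, so the integrator never has to resolve the fast chemical time scale.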

  12. Zebrafish yolk lipid processing: a tractable tool for the study of vertebrate lipid transport and metabolism

    Directory of Open Access Journals (Sweden)

    Rosa L. Miyares

    2014-07-01

    Dyslipidemias are a major cause of morbidity and mortality in the world, particularly in developed nations. Investigating lipid and lipoprotein metabolism in experimentally tractable animal models is a crucial step towards understanding and treating human dyslipidemias. The zebrafish, a well-established embryological model, is emerging as a notable system for studies of lipid metabolism. Here, we describe the value of the lecithotrophic, or yolk-metabolizing, stages of the zebrafish as a model for studying lipid metabolism and lipoprotein transport. We demonstrate methods to assay yolk lipid metabolism in embryonic and larval zebrafish. Injection of labeled fatty acids into the zebrafish yolk promotes efficient uptake into the circulation and rapid metabolism. Using a genetic model for abetalipoproteinemia, we show that the uptake of labeled fatty acids into the circulation is dependent on lipoprotein production. Furthermore, we examine the metabolic fate of exogenously delivered fatty acids by assaying their incorporation into complex lipids. Moreover, we demonstrate that this technique is amenable to genetic and pharmacologic studies.

  13. A Tractable Disequilbrium Framework for Integrating Computational Thermodynamics and Geodynamics

    Science.gov (United States)

    Spiegelman, M. W.; Tweed, L. E. L.; Evans, O.; Kelemen, P. B.; Wilson, C. R.

    2017-12-01

    The consistent integration of computational thermodynamics and geodynamics is essential for exploring and understanding a wide range of processes from high-PT magma dynamics in the convecting mantle to low-PT reactive alteration of the brittle crust. Nevertheless, considerable challenges remain for coupling thermodynamics and fluid-solid mechanics within computationally tractable and insightful models. Here we report on a new effort, part of the ENKI project, that provides a roadmap for developing flexible geodynamic models of varying complexity that are thermodynamically consistent with established thermodynamic models. The basic theory is derived from the disequilibrium thermodynamics of De Groot and Mazur (1984), similar to Rudge et al. (2011, GJI), but extends that theory to include more general rheologies, multiple solid (and liquid) phases and explicit chemical reactions to describe interphase exchange. Specifying stoichiometric reactions clearly defines the compositions of reactants and products and allows the affinity of each reaction (A = -ΔG_r) to be used as a scalar measure of disequilibrium. This approach only requires thermodynamic models to return chemical potentials of all components and phases (as well as thermodynamic quantities for each phase e.g. densities, heat capacity, entropies), but is not constrained to be in thermodynamic equilibrium. Allowing meta-stable phases mitigates some of the computational issues involved with the introduction and exhaustion of phases. Nevertheless, for closed systems, these problems are guaranteed to evolve to the same equilibria predicted by equilibrium thermodynamics. Here we illustrate the behavior of this theory for a range of simple problems (constructed with our open-source model builder TerraFERMA) that model poro-viscous behavior in the well understood Fo-Fa binary phase loop. Other contributions in this session will explore a range of models with more petrologically interesting phase diagrams as well as
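
    For reference, the affinity used above as the scalar measure of disequilibrium of a stoichiometric reaction is, in standard notation (with \nu_i the stoichiometric coefficients and \mu_i the chemical potentials returned by the thermodynamic model),

        A \;=\; -\Delta G_r \;=\; -\sum_i \nu_i\,\mu_i ,

    so that A = 0 at equilibrium and A > 0 for a reaction favoured in the forward direction. A common, though not unique, closure is to take the interphase reaction rate proportional to the affinity, e.g. \Gamma = A/\eta_r with \eta_r a kinetic resistance, which recovers the equilibrium limit as the resistance vanishes; whether this linear form matches the closure used in the work above is an assumption made here for illustration only.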

  14. A tractable algorithm for the wellfounded model

    NARCIS (Netherlands)

    Jonker, C.M.; Renardel de Lavalette, G.R.

    In the area of general logic programming (negated atoms allowed in the bodies of rules) and reason maintenance systems, the wellfounded model (first defined by Van Gelder, Ross and Schlipf in 1988) is generally considered to be the declarative semantics of the program. In this paper we present

  15. Evidence for the effect of serotonin receptor 1A gene (HTR1A) polymorphism on tractability in Thoroughbred horses.

    Science.gov (United States)

    Hori, Y; Tozaki, T; Nambo, Y; Sato, F; Ishimaru, M; Inoue-Murayama, M; Fujita, K

    2016-02-01

    Tractability, or how easily animals can be trained and controlled, is an important behavioural trait for the management and training of domestic animals, but its genetic basis remains unclear. Polymorphisms in the serotonin receptor 1A gene (HTR1A) have been associated with individual variability in anxiety-related traits in several species. In this study, we examined the association between HTR1A polymorphisms and tractability in Thoroughbred horses. We assessed the tractability of 167 one-year-old horses reared at a training centre for racehorses using a questionnaire consisting of 17 items. A principal components analysis of answers contracted the data to five principal component (PC) scores. We genotyped two non-synonymous single nucleotide polymorphisms (SNPs) in the horse HTR1A coding region. We found that one of the two SNPs, c.709G>A, which causes an amino acid change at the intracellular region of the receptor, was significantly associated with the scores of four of the five PCs in fillies. Horses carrying an A allele at c.709G>A showed lower tractability. This result provides the first evidence that a polymorphism in a serotonin-related gene may affect tractability in horses, with an effect that partially differs depending on sex. © 2015 Stichting International Foundation for Animal Genetics.

  16. Developing the anemone Aiptasia as a tractable model for cnidarian-dinoflagellate symbiosis: the transcriptome of aposymbiotic A. pallida.

    Science.gov (United States)

    Lehnert, Erik M; Burriesci, Matthew S; Pringle, John R

    2012-06-22

    Coral reefs are hotspots of oceanic biodiversity, forming the foundation of ecosystems that are important both ecologically and for their direct practical impacts on humans. Corals are declining globally due to a number of stressors, including rising sea-surface temperatures and pollution; such stresses can lead to a breakdown of the essential symbiotic relationship between the coral host and its endosymbiotic dinoflagellates, a process known as coral bleaching. Although the environmental stresses causing this breakdown are largely known, the cellular mechanisms of symbiosis establishment, maintenance, and breakdown are still largely obscure. Investigating the symbiosis using an experimentally tractable model organism, such as the small sea anemone Aiptasia, should improve our understanding of exactly how the environmental stressors affect coral survival and growth. We assembled the transcriptome of a clonal population of adult, aposymbiotic (dinoflagellate-free) Aiptasia pallida from ~208 million reads, yielding 58,018 contigs. We demonstrated that many of these contigs represent full-length or near-full-length transcripts that encode proteins similar to those from a diverse array of pathways in other organisms, including various metabolic enzymes, cytoskeletal proteins, and neuropeptide precursors. The contigs were annotated by sequence similarity, assigned GO terms, and scanned for conserved protein domains. We analyzed the frequency and types of single-nucleotide variants and estimated the size of the Aiptasia genome to be ~421 Mb. The contigs and annotations are available through NCBI (Transcriptome Shotgun Assembly database, accession numbers JV077153-JV134524) and at http://pringlelab.stanford.edu/projects.html. The availability of an extensive transcriptome assembly for A. pallida will facilitate analyses of gene-expression changes, identification of proteins of interest, and other studies in this important emerging model system.

  17. Tractable flux-driven temperature, density, and rotation profile evolution with the quasilinear gyrokinetic transport model QuaLiKiz

    Science.gov (United States)

    Citrin, J.; Bourdelle, C.; Casson, F. J.; Angioni, C.; Bonanomi, N.; Camenen, Y.; Garbet, X.; Garzotti, L.; Görler, T.; Gürcan, O.; Koechl, F.; Imbeaux, F.; Linder, O.; van de Plassche, K.; Strand, P.; Szepesi, G.; JET Contributors

    2017-12-01

    Quasilinear turbulent transport models are a successful tool for prediction of core tokamak plasma profiles in many regimes. Their success hinges on the reproduction of local nonlinear gyrokinetic fluxes. We focus on significant progress in the quasilinear gyrokinetic transport model QuaLiKiz (Bourdelle et al 2016 Plasma Phys. Control. Fusion 58 014036), which employs an approximated solution of the mode structures to significantly speed up computation time compared to full linear gyrokinetic solvers. Optimisation of the dispersion relation solution algorithm within integrated modelling applications leads to flux calculations 10^6-10^7 times faster than local nonlinear simulations. This allows tractable simulation of flux-driven dynamic profile evolution including all transport channels: ion and electron heat, main particles, impurities, and momentum. Furthermore, QuaLiKiz now includes the impact of rotation and temperature anisotropy induced poloidal asymmetry on heavy impurity transport, important for W-transport applications. Application within the JETTO integrated modelling code results in 1 s of JET plasma simulation within 10 h using 10 CPUs. Simultaneous predictions of core density, temperature, and toroidal rotation profiles for both JET hybrid and baseline experiments are presented, covering both ion and electron turbulence scales. The simulations are successfully compared to measured profiles, with agreement mostly in the 5%-25% range according to standard figures of merit. QuaLiKiz is now open source and available at www.qualikiz.com.
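
    The flux-driven profile evolution mentioned above can be illustrated, far below the fidelity of QuaLiKiz itself, by a one-dimensional critical-gradient transport sketch in which the turbulent diffusivity switches on only above a threshold normalised gradient; the geometry, parameters, and boundary treatment below are illustrative assumptions.

        import numpy as np

        # dT/dt = -d(flux)/dr + S,  flux = -chi(R/L_T) dT/dr,  chi switches on above a critical gradient
        nr = 50
        dr = 1.0 / nr
        r = (np.arange(nr) + 0.5) * dr
        T = np.full(nr, 0.5)                                    # initial flat profile (arbitrary units)
        S = 5.0 * np.exp(-((r - 0.1) / 0.1) ** 2)               # centrally peaked heat source
        chi0, chi_turb, RLT_crit, chi_max = 0.1, 2.0, 6.0, 10.0
        dt = 1e-5                                               # satisfies dt < dr**2 / (2 * chi_max)

        for _ in range(200_000):
            gradT = np.diff(T) / dr                             # gradient at interior cell faces
            Tface = 0.5 * (T[1:] + T[:-1])
            RLT = np.maximum(-gradT, 0.0) / Tface               # normalised logarithmic gradient
            chi = np.minimum(chi0 + chi_turb * np.maximum(RLT - RLT_crit, 0.0), chi_max)
            flux = np.zeros(nr + 1)                             # zero flux at the centre (r = 0)
            flux[1:-1] = -chi * gradT
            T += dt * (S - np.diff(flux) / dr)
            T[-1] = 0.2                                         # fixed edge temperature (Dirichlet)

        print("core temperature:", round(float(T[0]), 3), "edge temperature:", float(T[-1]))

    The resulting profile is "stiff": the logarithmic gradient stays pinned near the assumed critical value wherever the turbulent channel carries the flux, which is the qualitative behaviour that quasilinear models such as QuaLiKiz reproduce quantitatively.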

  18. Tractable Pareto Optimization of Temporal Preferences

    Science.gov (United States)

    Morris, Robert; Morris, Paul; Khatib, Lina; Venable, Brent

    2003-01-01

    This paper focuses on temporal constraint problems where the objective is to optimize a set of local preferences for when events occur. In previous work, a subclass of these problems has been formalized as a generalization of Temporal CSPs, and a tractable strategy for optimization has been proposed, where global optimality is defined as maximizing the minimum of the component preference values. This criterion for optimality, which we call 'Weakest Link Optimization' (WLO), is known to have limited practical usefulness because solutions are compared only on the basis of their worst value; thus, there is no requirement to improve the other values. To address this limitation, we introduce a new algorithm that re-applies WLO iteratively in a way that leads to improvement of all the values. We show the value of this strategy by proving that, with suitable preference functions, the resulting solutions are Pareto Optimal.
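
    For a finite set of candidate solutions, the iterative re-application of WLO described above can be sketched as repeatedly maximizing the worst remaining component and then freezing the components that attain it; this leximin-style sketch deliberately abstracts away the temporal-constraint machinery of the paper.

        def iterated_wlo(solutions):
            """solutions: list of tuples of preference values (higher is better).
            Returns the candidates surviving repeated Weakest Link Optimization."""
            candidates = list(solutions)
            frozen = set()                       # component indices already optimized
            n = len(candidates[0])
            while len(frozen) < n and len(candidates) > 1:
                free = [i for i in range(n) if i not in frozen]
                # WLO step: keep only candidates maximizing the worst free component
                best = max(min(s[i] for i in free) for s in candidates)
                candidates = [s for s in candidates if min(s[i] for i in free) == best]
                # freeze every free component stuck at the bottleneck value in all survivors
                newly_frozen = [i for i in free if all(s[i] == best for s in candidates)]
                if not newly_frozen:
                    break                        # nothing new frozen; stop to avoid looping
                frozen.update(newly_frozen)
            return candidates

        # worst-value ties are broken by improving the remaining components
        print(iterated_wlo([(3, 5, 9), (3, 8, 7), (3, 8, 9), (2, 9, 9)]))   # -> [(3, 8, 9)]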

  19. Developing the anemone Aiptasia as a tractable model for cnidarian-dinoflagellate symbiosis: the transcriptome of aposymbiotic A. pallida

    Directory of Open Access Journals (Sweden)

    Lehnert Erik M

    2012-06-01

    Background: Coral reefs are hotspots of oceanic biodiversity, forming the foundation of ecosystems that are important both ecologically and for their direct practical impacts on humans. Corals are declining globally due to a number of stressors, including rising sea-surface temperatures and pollution; such stresses can lead to a breakdown of the essential symbiotic relationship between the coral host and its endosymbiotic dinoflagellates, a process known as coral bleaching. Although the environmental stresses causing this breakdown are largely known, the cellular mechanisms of symbiosis establishment, maintenance, and breakdown are still largely obscure. Investigating the symbiosis using an experimentally tractable model organism, such as the small sea anemone Aiptasia, should improve our understanding of exactly how the environmental stressors affect coral survival and growth. Results: We assembled the transcriptome of a clonal population of adult, aposymbiotic (dinoflagellate-free) Aiptasia pallida from ~208 million reads, yielding 58,018 contigs. We demonstrated that many of these contigs represent full-length or near-full-length transcripts that encode proteins similar to those from a diverse array of pathways in other organisms, including various metabolic enzymes, cytoskeletal proteins, and neuropeptide precursors. The contigs were annotated by sequence similarity, assigned GO terms, and scanned for conserved protein domains. We analyzed the frequency and types of single-nucleotide variants and estimated the size of the Aiptasia genome to be ~421 Mb. The contigs and annotations are available through NCBI (Transcriptome Shotgun Assembly database, accession numbers JV077153-JV134524) and at http://pringlelab.stanford.edu/projects.html. Conclusions: The availability of an extensive transcriptome assembly for A. pallida will facilitate analyses of gene-expression changes, identification of proteins of interest, and other studies in this important emerging model system.

  20. A hybrid agent-based approach for modeling microbiological systems.

    Science.gov (United States)

    Guo, Zaiyi; Sloot, Peter M A; Tay, Joc Cing

    2008-11-21

    Models for systems biology commonly adopt Differential Equations or Agent-Based modeling approaches for simulating the processes as a whole. Models based on differential equations presuppose phenomenological intracellular behavioral mechanisms, while models based on the Multi-Agent approach often use directly translated, and quantitatively less precise, if-then logical rule constructs. We propose an extendible systems model based on a hybrid agent-based approach where biological cells are modeled as individuals (agents) while molecules are represented by quantities. This hybridization in entity representation entails a combined modeling strategy with agent-based behavioral rules and differential equations, thereby balancing the requirements of extendible model granularity with computational tractability. We demonstrate the efficacy of this approach with models of chemotaxis involving an assay of 10^3 cells and 1.2×10^6 molecules. The model produces cell migration patterns that are comparable to laboratory observations.
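
    A stripped-down, one-dimensional version of the hybrid strategy described above treats each cell as a discrete agent following a biased-random-walk rule while the attractant is a continuous quantity updated by a differential equation; the grid, rates, and sensing rule below are illustrative assumptions, not the chemotaxis assay of the paper.

        import numpy as np

        rng = np.random.default_rng(2)
        nx = 100                                   # 1D grid for the continuous molecular field
        chem = np.zeros(nx)                        # attractant concentration (a quantity, not agents)
        cells = rng.integers(0, nx, size=1000)     # each cell is an individual agent (its grid position)
        D, k_secrete, k_decay, dt = 0.2, 0.05, 0.01, 1.0
        bias = 2.0                                 # strength of gradient sensing in the movement rule

        for _ in range(500):
            # continuous part: secretion at a fixed source, diffusion, first-order decay
            chem[0] += k_secrete * dt
            lap = np.roll(chem, 1) + np.roll(chem, -1) - 2 * chem
            lap[0] = chem[1] - chem[0]             # no-flux boundaries
            lap[-1] = chem[-2] - chem[-1]
            chem += dt * (D * lap - k_decay * chem)
            # agent part: each cell takes a step biased up its local gradient (rule-based behaviour)
            grad = np.gradient(chem)[cells]
            p_right = 1.0 / (1.0 + np.exp(-bias * nx * grad))
            step = np.where(rng.random(cells.size) < p_right, 1, -1)
            cells = np.clip(cells + step, 0, nx - 1)

        print("fraction of cells in the quarter nearest the source:", float((cells < nx // 4).mean()))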

  1. Sleep and Development in Genetically Tractable Model Organisms.

    Science.gov (United States)

    Kayser, Matthew S; Biron, David

    2016-05-01

    Sleep is widely recognized as essential, but without a clear singular function. Inadequate sleep impairs cognition, metabolism, immune function, and many other processes. Work in genetic model systems has greatly expanded our understanding of basic sleep neurobiology as well as introduced new concepts for why we sleep. Among these is an idea with its roots in human work nearly 50 years old: sleep in early life is crucial for normal brain maturation. Nearly all known species that sleep do so more while immature, and this increased sleep coincides with a period of exuberant synaptogenesis and massive neural circuit remodeling. Adequate sleep also appears critical for normal neurodevelopmental progression. This article describes recent findings regarding molecular and circuit mechanisms of sleep, with a focus on development and the insights garnered from models amenable to detailed genetic analyses. Copyright © 2016 by the Genetics Society of America.

  2. An empirical model for independent control of variable speed refrigeration system

    International Nuclear Information System (INIS)

    Li Hua; Jeong, Seok-Kwon; Yoon, Jung-In; You, Sam-Sang

    2008-01-01

    This paper deals with an empirical dynamic model for decoupling control of the variable speed refrigeration system (VSRS). To cope with the inherent complexity and nonlinearity in the system dynamics, the model parameters are first obtained from experimental data. In the study, the dynamic characteristics of indoor temperature and superheat are assumed to follow a first-order model with time delay. While the compressor frequency and the opening angle of the electronic expansion valve are varying, the indoor temperature and the superheat interfere with each other in the VSRS. Thus, decoupling models have been proposed to eliminate such interference. Finally, the experiment and simulation results indicate that the proposed model offers a more tractable means of describing the actual VSRS compared to other models currently available

  3. Unified tractable model for downlink MIMO cellular networks using stochastic geometry

    KAUST Repository

    Afify, Laila H.

    2016-07-26

    Several research efforts are invested to develop stochastic geometry models for cellular networks with multiple antenna transmission and reception (MIMO). On one hand, there are models that target abstract outage probability and ergodic rate for simplicity. On the other hand, there are models that sacrifice simplicity to target more tangible performance metrics such as the error probability. Both types of models are completely disjoint in terms of the analytic steps to obtain the performance measures, which makes it challenging to conduct studies that account for different performance metrics. This paper unifies both techniques and proposes a unified stochastic-geometry based mathematical paradigm to account for error probability, outage probability, and ergodic rates in MIMO cellular networks. The proposed model is also unified in terms of the antenna configurations and leads to simpler error probability analysis compared to existing state-of-the-art models. The core part of the analysis is based on abstracting unnecessary information conveyed within the interfering signals by assuming Gaussian signaling. To this end, the accuracy of the proposed framework is verified against state-of-the-art models as well as system level simulations. We provide via this unified study insights on network design by reflecting system parameters effect on different performance metrics. © 2016 IEEE.
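
    The flavour of such stochastic-geometry analysis can be reproduced numerically with a small Monte Carlo experiment: drop base stations as a Poisson point process, attach a typical user to the nearest one, and estimate the SINR coverage probability. The single-antenna, Rayleigh-fading setting below is a deliberately simpler case than the MIMO framework above, and all parameter values are assumptions.

        import numpy as np

        rng = np.random.default_rng(3)

        def coverage_probability(lam=1.0, alpha=4.0, noise=1e-9,
                                 threshold_db=0.0, trials=10_000, radius=10.0):
            """Empirical P(SINR > threshold) for a typical user at the origin of a PPP cellular network."""
            thr = 10 ** (threshold_db / 10)
            area = np.pi * radius ** 2
            covered = 0
            for _ in range(trials):
                n = rng.poisson(lam * area)                  # number of base stations in the disc
                if n == 0:
                    continue
                r = radius * np.sqrt(rng.random(n))          # uniform distances within the disc
                fading = rng.exponential(1.0, n)             # Rayleigh fading -> exponential power
                rx = fading * r ** (-alpha)                  # unit transmit power, path loss r^-alpha
                serving = np.argmin(r)                       # nearest-base-station association
                sinr = rx[serving] / (rx.sum() - rx[serving] + noise)
                if sinr > thr:
                    covered += 1
            return covered / trials

        print("coverage probability at a 0 dB SINR threshold:", coverage_probability())

    With a path-loss exponent of 4, unit density and a 0 dB threshold, the estimate lands close to the classical interference-limited value of roughly 0.56, a useful check that such a simulation is wired correctly.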

  4. Systems, methods and apparatus for modeling, specifying and deploying policies in autonomous and autonomic systems using agent-oriented software engineering

    Science.gov (United States)

    Hinchey, Michael G. (Inventor); Penn, Joaquin (Inventor); Sterritt, Roy (Inventor)

    2011-01-01

    Systems, methods and apparatus are provided through which, in some embodiments, an agent-oriented specification modeled with MaCMAS is analyzed, flaws in the agent-oriented specification modeled with MaCMAS are corrected, and an implementation is derived from the corrected agent-oriented specification. Described herein are systems, methods and apparatus that produce fully (mathematically) tractable development of agent-oriented specification(s) modeled with the methodology fragment for analyzing complex multiagent systems (MaCMAS) and policies for autonomic systems from requirements through to code generation. The systems, methods and apparatus described herein are illustrated through an example showing how user-formulated policies can be translated into a formal model which can then be converted to code. The requirements-based programming systems, methods and apparatus described herein may provide faster, higher-quality development and maintenance of autonomic systems based on user formulation of policies.

  5. Modeling life the mathematics of biological systems

    CERN Document Server

    Garfinkel, Alan; Guo, Yina

    2017-01-01

    From predator-prey populations in an ecosystem, to hormone regulation within the body, the natural world abounds in dynamical systems that affect us profoundly. This book develops the mathematical tools essential for students in the life sciences to describe these interacting systems and to understand and predict their behavior. Complex feedback relations and counter-intuitive responses are common in dynamical systems in nature; this book develops the quantitative skills needed to explore these interactions. Differential equations are the natural mathematical tool for quantifying change, and are the driving force throughout this book. The use of Euler’s method makes nonlinear examples tractable and accessible to a broad spectrum of early-stage undergraduates, thus providing a practical alternative to the procedural approach of a traditional Calculus curriculum. Tools are developed within numerous, relevant examples, with an emphasis on the construction, evaluation, and interpretation of mathematical models ...

  6. Model predictive control classical, robust and stochastic

    CERN Document Server

    Kouvaritakis, Basil

    2016-01-01

    For the first time, a textbook that brings together classical predictive control with treatment of up-to-date robust and stochastic techniques. Model Predictive Control describes the development of tractable algorithms for uncertain, stochastic, constrained systems. The starting point is classical predictive control and the appropriate formulation of performance objectives and constraints to provide guarantees of closed-loop stability and performance. Moving on to robust predictive control, the text explains how similar guarantees may be obtained for cases in which the model describing the system dynamics is subject to additive disturbances and parametric uncertainties. Open- and closed-loop optimization are considered and the state of the art in computationally tractable methods based on uncertainty tubes presented for systems with additive model uncertainty. Finally, the tube framework is also applied to model predictive control problems involving hard or probabilistic constraints for the cases of multiplic...

  7. Analysis of tractable distortion metrics for EEG compression applications

    International Nuclear Information System (INIS)

    Bazán-Prieto, Carlos; Blanco-Velasco, Manuel; Cruz-Roldán, Fernando; Cárdenas-Barrera, Julián

    2012-01-01

    Coding distortion in lossy electroencephalographic (EEG) signal compression methods is evaluated through tractable objective criteria. The percentage root-mean-square difference, which is a global and relative indicator of the quality held by reconstructed waveforms, is the most widely used criterion. However, this parameter does not ensure compliance with clinical standard guidelines that specify limits to allowable noise in EEG recordings. As a result, expert clinicians may have difficulties interpreting the resulting distortion of the EEG for a given value of this parameter. Conversely, the root-mean-square error is an alternative criterion that quantifies distortion in understandable units. In this paper, we demonstrate that the root-mean-square error is better suited to control and to assess the distortion introduced by compression methods. The experiments conducted in this paper show that the use of the root-mean-square error as target parameter in EEG compression allows both clinicians and scientists to infer whether coding error is clinically acceptable or not at no cost for the compression ratio. (paper)

  8. Robust Model Predictive Control of a Nonlinear System with Known Scheduling Variable and Uncertain Gain

    DEFF Research Database (Denmark)

    Mirzaei, Mahmood; Poulsen, Niels Kjølstad; Niemann, Hans Henrik

    2012-01-01

    Robust model predictive control (RMPC) of a class of nonlinear systems is considered in this paper. We use a Linear Parameter Varying (LPV) model of the nonlinear system. By taking advantage of having future values of the scheduling variable, we simplify state prediction. Because of the special structure of the problem, uncertainty appears only in the B matrix (gain) of the state-space model. Therefore, by taking advantage of this structure, we formulate a tractable minimax optimization problem to solve the robust model predictive control problem. A wind turbine is chosen as the case study and we choose wind speed as the scheduling variable. Wind speed is measurable ahead of the turbine; therefore the scheduling variable is known for the entire prediction horizon.

  9. Mathematical approaches for complexity/predictivity trade-offs in complex system models : LDRD final report.

    Energy Technology Data Exchange (ETDEWEB)

    Goldsby, Michael E.; Mayo, Jackson R.; Bhattacharyya, Arnab (Massachusetts Institute of Technology, Cambridge, MA); Armstrong, Robert C.; Vanderveen, Keith

    2008-09-01

    The goal of this research was to examine foundational methods, both computational and theoretical, that can improve the veracity of entity-based complex system models and increase confidence in their predictions for emergent behavior. The strategy was to seek insight and guidance from simplified yet realistic models, such as cellular automata and Boolean networks, whose properties can be generalized to production entity-based simulations. We have explored the usefulness of renormalization-group methods for finding reduced models of such idealized complex systems. We have prototyped representative models that are both tractable and relevant to Sandia mission applications, and quantified the effect of computational renormalization on the predictive accuracy of these models, finding good predictivity from renormalized versions of cellular automata and Boolean networks. Furthermore, we have theoretically analyzed the robustness properties of certain Boolean networks, relevant for characterizing organic behavior, and obtained precise mathematical constraints on systems that are robust to failures. In combination, our results provide important guidance for more rigorous construction of entity-based models, which currently are often devised in an ad-hoc manner. Our results can also help in designing complex systems with the goal of predictable behavior, e.g., for cybersecurity.

  10. Open-System Quantum Annealing in Mean-Field Models with Exponential Degeneracy*

    Directory of Open Access Journals (Sweden)

    Kostyantyn Kechedzhi

    2016-05-01

    Real-life quantum computers are inevitably affected by intrinsic noise resulting in dissipative nonunitary dynamics realized by these devices. We consider an open-system quantum annealing algorithm optimized for such a realistic analog quantum device which takes advantage of noise-induced thermalization and relies on incoherent quantum tunneling at finite temperature. We theoretically analyze the performance of this algorithm considering a p-spin model that allows for a mean-field quasiclassical solution and, at the same time, demonstrates the first-order phase transition and exponential degeneracy of states, typical characteristics of spin glasses. We demonstrate that finite-temperature effects introduced by the noise are particularly important for the dynamics in the presence of the exponential degeneracy of metastable states. We determine the optimal regime of the open-system quantum annealing algorithm for this model and find that it can outperform simulated annealing in a range of parameters. Large-scale multiqubit quantum tunneling is instrumental for the quantum speedup in this model, which is possible because of the unusual nonmonotonic temperature dependence of the quantum-tunneling action in this model, where the most efficient transition rate corresponds to zero temperature. This model calculation is the first analytically tractable example where an open-system quantum annealing algorithm outperforms simulated annealing, which can, in principle, be realized using an analog quantum computer.

  11. Modeling leaks from liquid hydrogen storage systems.

    Energy Technology Data Exchange (ETDEWEB)

    Winters, William Stanley, Jr.

    2009-01-01

    This report documents a series of models for describing intended and unintended discharges from liquid hydrogen storage systems. Typically these systems store hydrogen in the saturated state at approximately five to ten atmospheres. Some of the models discussed here are equilibrium-based models that make use of the NIST thermodynamic models to specify the states of multiphase hydrogen and air-hydrogen mixtures. Two types of discharges are considered: slow leaks where hydrogen enters the ambient at atmospheric pressure and fast leaks where the hydrogen flow is usually choked and expands into the ambient through an underexpanded jet. In order to avoid the complexities of supersonic flow, a single Mach disk model is proposed for fast leaks that are choked. The velocity and state of hydrogen downstream of the Mach disk lead to a more tractable subsonic boundary condition. However, the hydrogen temperature exiting all leaks (fast or slow, from saturated liquid or saturated vapor) is approximately 20.4 K. At these temperatures, any entrained air would likely condense or even freeze, leading to an air-hydrogen mixture that cannot be characterized by the REFPROP subroutines. For this reason a plug flow entrainment model is proposed to treat a short zone of initial entrainment and heating. The model predicts the quantity of entrained air required to bring the air-hydrogen mixture to a temperature of approximately 65 K at one atmosphere. At this temperature the mixture can be treated as a mixture of ideal gases and is much more amenable to modeling with Gaussian entrainment models and CFD codes. A Gaussian entrainment model is formulated to predict the trajectory and properties of a cold hydrogen jet leaking into ambient air. The model shows that similarity between two jets depends on the densimetric Froude number, density ratio and initial hydrogen concentration.

  12. Hierarchical modeling and robust synthesis for the preliminary design of large scale complex systems

    Science.gov (United States)

    Koch, Patrick Nathan

    Large-scale complex systems are characterized by multiple interacting subsystems and the analysis of multiple disciplines. The design and development of such systems inevitably requires the resolution of multiple conflicting objectives. The size of complex systems, however, prohibits the development of comprehensive system models, and thus these systems must be partitioned into their constituent parts. Because simultaneous solution of individual subsystem models is often not manageable, iteration is inevitable and often excessive. In this dissertation these issues are addressed through the development of a method for hierarchical robust preliminary design exploration to facilitate concurrent system and subsystem design exploration, for the concurrent generation of robust system and subsystem specifications for the preliminary design of multi-level, multi-objective, large-scale complex systems. This method is developed through the integration and expansion of current design techniques: (1) Hierarchical partitioning and modeling techniques for partitioning large-scale complex systems into more tractable parts, and allowing integration of subproblems for system synthesis, (2) Statistical experimentation and approximation techniques for increasing both the efficiency and the comprehensiveness of preliminary design exploration, and (3) Noise modeling techniques for implementing robust preliminary design when approximate models are employed. The method developed and associated approaches are illustrated through their application to the preliminary design of a commercial turbofan turbine propulsion system; the turbofan system-level problem is partitioned into engine cycle and configuration design and a compressor module is integrated for more detailed subsystem-level design exploration, improving system evaluation.

  13. Applying dynamic priority scheduling scheme to static systems of pinwheel task model in power-aware scheduling.

    Science.gov (United States)

    Seol, Ye-In; Kim, Young-Kuk

    2014-01-01

    Power-aware scheduling reduces CPU energy consumption in hard real-time systems through dynamic voltage scaling (DVS). In this paper, we deal with the pinwheel task model, which is known as a static and predictable task model and can be applied to various embedded or ubiquitous systems. In the pinwheel task model, each task's priority is static and its execution sequence can be predetermined. There have been many static approaches to power-aware scheduling in the pinwheel task model, but in this paper we show that results from dynamic-priority power-aware scheduling can also be applied to the pinwheel task model. This method is more effective at saving energy than adopting the previous static-priority scheduling methods and, since the system remains static, it is more tractable and applicable to small-sized embedded or ubiquitous computing. We also introduce a novel power-aware scheduling algorithm which exploits all slacks under preemptive earliest-deadline-first scheduling, which is optimal on a uniprocessor system. The dynamic-priority method presented in this paper can be applied directly to static systems of the pinwheel task model. The simulation results show that the proposed algorithm, with an algorithmic complexity of O(n), reduces energy consumption by 10-80% compared with existing algorithms.
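
    The connection between schedulability slack and voltage scaling can be sketched with a static slowdown computation under preemptive EDF; the task set, the frequency levels, and the cubic power model below are illustrative assumptions, and the paper's algorithm additionally reclaims slack dynamically at run time rather than only offline.

        # Each task: (period, worst-case execution time at full speed). Deadlines equal periods.
        tasks = [(10, 2), (15, 3), (35, 5)]

        U = sum(c / p for p, c in tasks)          # EDF utilisation test: schedulable if U <= 1
        levels = [0.25, 0.5, 0.75, 1.0]           # available normalised frequencies (DVS levels)

        # Static slowdown: the lowest frequency at which utilisation still fits under EDF
        freq = min(f for f in levels if U / f <= 1.0)

        # A common convex power model: dynamic power scales roughly with f^3, so energy per job with f^2
        energy_full = sum(c for _, c in tasks) * 1.0 ** 2
        energy_scaled = sum(c / freq for _, c in tasks) * freq ** 3   # longer execution, lower power

        print(f"utilisation {U:.2f}, chosen frequency {freq}, "
              f"energy ratio {energy_scaled / energy_full:.2f}")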

  14. Survey on queueing models with standbys support

    Directory of Open Access Journals (Sweden)

    Kolledath Sreekanth

    2018-01-01

    This paper is a survey article on queueing models with standbys support. Owing to the many real-life applications of queueing models, the area has attracted considerable research interest and a large body of work has accumulated. It is worthwhile to examine performance-based analysis of queueing models, as it provides valuable insight into the tractability of a system and improves its efficiency. The provision of standbys in the queueing model of a real system is needed for smooth functioning in the presence of unavoidable failures. The present survey reviews the research work done and emphasizes the sequential developments in queueing models with standbys support.

  15. Evolution of the archaeal and mammalian information processing systems: towards an archaeal model for human disease.

    Science.gov (United States)

    Lyu, Zhe; Whitman, William B

    2017-01-01

    Current evolutionary models suggest that Eukaryotes originated from within Archaea instead of being a sister lineage. To test this model of ancient evolution, we review recent studies and compare the three major information processing subsystems of replication, transcription and translation in the Archaea and Eukaryotes. Our hypothesis is that if the Eukaryotes arose within the archaeal radiation, their information processing systems will appear to be of a kind with the archaeal systems and not wholly original. Within the Eukaryotes, the mammalian or human systems are emphasized because of their importance in understanding health. Biochemical as well as genetic studies provide strong evidence for the functional similarity of archaeal homologs to the mammalian information processing system and their dissimilarity to the bacterial systems. In many independent instances, a simple archaeal system is functionally equivalent to more elaborate eukaryotic homologs, suggesting that evolution of complexity is likely a central feature of the eukaryotic information processing system. Because fewer components are often involved, biochemical characterizations of the archaeal systems are often easier to interpret. Similarly, the archaeal cell provides a genetically and metabolically simpler background, enabling convenient studies on the complex information processing system. Therefore, Archaea could serve as a parsimonious and tractable host for studying human diseases that arise in the information processing systems.

  16. Unified tractable model for downlink MIMO cellular networks using stochastic geometry

    KAUST Repository

    Afify, Laila H.; Elsawy, Hesham; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    for simplicity. On the other hand, there are models that sacrifice simplicity to target more tangible performance metrics such as the error probability. Both types of models are completely disjoint in terms of the analytic steps to obtain the performance measures

  17. Mouse Chromosome Engineering for Modeling Human Disease

    OpenAIRE

    van der Weyden, Louise; Bradley, Allan

    2006-01-01

    Chromosomal rearrangements occur frequently in humans and can be disease-associated or phenotypically neutral. Recent technological advances have led to the discovery of copy-number changes previously undetected by cytogenetic techniques. To understand the genetic consequences of such genomic changes, these mutations need to be modeled in experimentally tractable systems. The mouse is an excellent organism for this analysis because of its biological and genetic similarity to humans, and the e...

  18. Fitting the CDO correlation skew: a tractable structural jump-diffusion model

    DEFF Research Database (Denmark)

    Willemann, Søren

    2007-01-01

    We extend a well-known structural jump-diffusion model for credit risk to handle both correlations through diffusion of asset values and common jumps in asset value. Through a simplifying assumption on the default timing and efficient numerical techniques, we develop a semi-analytic framework...... allowing for instantaneous calibration to heterogeneous CDS curves and fast computation of CDO tranche spreads. We calibrate the model to CDX and iTraxx data from February 2007 and achieve a satisfactory fit. To price the senior tranches for both indices, we require a risk-neutral probability of a market...

  19. Multiprocessor scheduling for real-time systems

    CERN Document Server

    Baruah, Sanjoy; Buttazzo, Giorgio

    2015-01-01

    This book provides a comprehensive overview of both theoretical and pragmatic aspects of resource-allocation and scheduling in multiprocessor and multicore hard-real-time systems.  The authors derive new, abstract models of real-time tasks that capture accurately the salient features of real application systems that are to be implemented on multiprocessor platforms, and identify rules for mapping application systems onto the most appropriate models.  New run-time multiprocessor scheduling algorithms are presented, which are demonstrably better than those currently used, both in terms of run-time efficiency and tractability of off-line analysis.  Readers will benefit from a new design and analysis framework for multiprocessor real-time systems, which will translate into a significantly enhanced ability to provide formally verified, safety-critical real-time systems at a significantly lower cost.

  20. Tractable Stochastic Geometry Model for IoT Access in LTE Networks

    KAUST Repository

    Gharbieh, Mohammad; Elsawy, Hesham; Bader, Ahmed; Alouini, Mohamed-Slim

    2017-01-01

    The Internet of Things (IoT) is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the high volumes of traffic that must be accommodated. Cellular networks are indeed a natural candidate for the data tsunami the IoT is expected to generate in conjunction with legacy human-type traffic. However, the random access process for scheduling request represents a major bottleneck to support IoT via LTE cellular networks. Accordingly, this paper develops a mathematical framework to model and study the random access channel (RACH) scalability to accommodate IoT traffic. The developed model is based on stochastic geometry and discrete time Markov chains (DTMC) to account for different access strategies and possible sources of inter-cell and intra-cell interferences. To this end, the developed model is utilized to assess and compare three different access strategies, which incorporate a combination of transmission persistency, back-off, and power ramping. The analysis and the results showcased herewith clearly illustrate the vulnerability of the random access procedure as the IoT intensity grows. Finally, the paper offers insights into effective scenarios for each transmission strategy in terms of IoT intensity and RACH detection thresholds.

  1. Tractable Stochastic Geometry Model for IoT Access in LTE Networks

    KAUST Repository

    Gharbieh, Mohammad

    2017-02-07

    The Internet of Things (IoT) is large-scale by nature. This is not only manifested by the large number of connected devices, but also by the high volumes of traffic that must be accommodated. Cellular networks are indeed a natural candidate for the data tsunami the IoT is expected to generate in conjunction with legacy human-type traffic. However, the random access process for scheduling request represents a major bottleneck to support IoT via LTE cellular networks. Accordingly, this paper develops a mathematical framework to model and study the random access channel (RACH) scalability to accommodate IoT traffic. The developed model is based on stochastic geometry and discrete time Markov chains (DTMC) to account for different access strategies and possible sources of inter-cell and intra-cell interferences. To this end, the developed model is utilized to assess and compare three different access strategies, which incorporate a combination of transmission persistency, back-off, and power ramping. The analysis and the results showcased herewith clearly illustrate the vulnerability of the random access procedure as the IoT intensity grows. Finally, the paper offers insights into effective scenarios for each transmission strategy in terms of IoT intensity and RACH detection thresholds.

  2. The equity price channel in a New-Keynesian DSGE model with financial frictions and banking

    OpenAIRE

    Hylton Hollander; Guangling Liu

    2013-01-01

    This paper studies the role of the equity price channel in business cycle fluctuations, and highlights the equity price channel as a distinct aspect of general equilibrium models with financial frictions and, as a result, emphasizes the systemic influence of financial markets on the real economy. We develop a canonical New-Keynesian DSGE model with a tractable role for the equity market in banking, entrepreneur and household economic activities. The model is estimated with Bayesian technique...

  3. High dimensions - a new approach to fermionic lattice models

    International Nuclear Information System (INIS)

    Vollhardt, D.

    1991-01-01

    The limit of high spatial dimensions d, which is well-established in the theory of classical and localized spin models, is shown to be a fruitful approach also to itinerant fermion systems, such as the Hubbard model and the periodic Anderson model. Many investigations which are prohibitively difficult in finite dimensions become tractable in d=∞. At the same time essential features of systems in d=3 and even lower dimensions are very well described by the results obtained in d=∞. A wide range of applications of this new concept (e.g., in perturbation theory, Fermi liquid theory, variational approaches, exact results, etc.) is discussed and the state-of-the-art is reviewed. (orig.)

  4. LDRD report nonlinear model reduction

    Energy Technology Data Exchange (ETDEWEB)

    Segalman, D.; Heinstein, M.

    1997-09-01

    The very general problem of model reduction of nonlinear systems was made tractable by focusing on the very large subclass consisting of linear subsystems connected by nonlinear interfaces. Such problems constitute a large part of the nonlinear structural problems encountered in addressing the Sandia missions. A synthesis approach to this class of problems was developed consisting of: detailed modeling of the interface mechanics; collapsing the interface simulation results into simple nonlinear interface models; constructing system models by assembling model approximations of the linear subsystems and the nonlinear interface models. These system models, though nonlinear, would have very few degrees of freedom. A paradigm problem, that of machine tool vibration, was selected for application of the reduction approach outlined above. Research results achieved along the way as well as the overall modeling of a specific machine tool have been very encouraging. In order to confirm the interface models resulting from simulation, it was necessary to develop techniques to deduce interface mechanics from experimental data collected from the overall nonlinear structure. A program to develop such techniques was also pursued with good success.

  5. Analytically tractable climate-carbon cycle feedbacks under 21st century anthropogenic forcing

    Science.gov (United States)

    Lade, Steven J.; Donges, Jonathan F.; Fetzer, Ingo; Anderies, John M.; Beer, Christian; Cornell, Sarah E.; Gasser, Thomas; Norberg, Jon; Richardson, Katherine; Rockström, Johan; Steffen, Will

    2018-05-01

    Changes to climate-carbon cycle feedbacks may significantly affect the Earth system's response to greenhouse gas emissions. These feedbacks are usually analysed from numerical output of complex and arguably opaque Earth system models. Here, we construct a stylised global climate-carbon cycle model, test its output against comprehensive Earth system models, and investigate the strengths of its climate-carbon cycle feedbacks analytically. The analytical expressions we obtain aid understanding of carbon cycle feedbacks and the operation of the carbon cycle. Specific results include that different feedback formalisms measure fundamentally the same climate-carbon cycle processes; temperature dependence of the solubility pump, biological pump, and CO2 solubility all contribute approximately equally to the ocean climate-carbon feedback; and concentration-carbon feedbacks may be more sensitive to future climate change than climate-carbon feedbacks. Simple models such as that developed here also provide workbenches for simple but mechanistically based explorations of Earth system processes, such as interactions and feedbacks between the planetary boundaries, that are currently too uncertain to be included in comprehensive Earth system models.
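
    For orientation, the linearised feedback formalism alluded to above is conventionally written with a concentration-carbon sensitivity β and a climate-carbon sensitivity γ for the land (L) and ocean (O) reservoirs; the notation below is the standard one, quoted here for context rather than taken from the paper, which derives analytical expressions within its own stylised model.

        \Delta C_L = \beta_L\,\Delta\mathrm{CO_2} + \gamma_L\,\Delta T, \qquad
        \Delta C_O = \beta_O\,\Delta\mathrm{CO_2} + \gamma_O\,\Delta T

        g \approx -\frac{\alpha\,(\gamma_L + \gamma_O)}{1 + \beta_L + \beta_O}, \qquad
        \Delta\mathrm{CO_2}^{\text{coupled}} \approx \frac{\Delta\mathrm{CO_2}^{\text{uncoupled}}}{1 - g}

    Here α denotes the transient temperature response per unit increase in atmospheric CO2, so a positive gain g amplifies the atmospheric CO2 change relative to the case with the climate-carbon feedbacks switched off.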

  6. Genetics on the Fly: A Primer on the Drosophila Model System

    Science.gov (United States)

    Hales, Karen G.; Korey, Christopher A.; Larracuente, Amanda M.; Roberts, David M.

    2015-01-01

    Fruit flies of the genus Drosophila have been an attractive and effective genetic model organism since Thomas Hunt Morgan and colleagues made seminal discoveries with them a century ago. Work with Drosophila has enabled dramatic advances in cell and developmental biology, neurobiology and behavior, molecular biology, evolutionary and population genetics, and other fields. With more tissue types and observable behaviors than in other short-generation model organisms, and with vast genome data available for many species within the genus, the fly’s tractable complexity will continue to enable exciting opportunities to explore mechanisms of complex developmental programs, behaviors, and broader evolutionary questions. This primer describes the organism’s natural history, the features of sequenced genomes within the genus, the wide range of available genetic tools and online resources, the types of biological questions Drosophila can help address, and historical milestones. PMID:26564900

  7. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-12-01

    The ever-growing demands for wireless technologies necessitate the evolution of next-generation wireless networks that fulfill the diverse requirements of wireless users. However, upscaling existing wireless networks implies upscaling an intrinsic component in the wireless domain: the aggregate network interference. Since this is the main performance-limiting factor, it becomes crucial to develop a rigorous analytical framework that accurately characterizes the out-of-cell interference, in order to reap the benefits of emerging networks. Due to the different network setups and key performance indicators, it is essential to conduct a comprehensive study that unifies the various network configurations together with the different tangible performance metrics. In that regard, the focus of this thesis is to present a unified mathematical paradigm, based on Stochastic Geometry, for large-scale networks with different antenna/network configurations. By exploiting such a unified study, we propose an efficient automated network design strategy to satisfy the desired network objectives. First, this thesis studies the exact aggregate network interference characterization, by accounting for each of the interferers' signals in the large-scale network. Second, we show that the information about the interferers' symbols can be approximated via the Gaussian signaling approach. The developed mathematical model presents a twofold unification of the uplink and downlink cellular network analyses in the literature. It aligns the tangible decoding error probability analysis with the abstract outage probability and ergodic rate analysis. Furthermore, it unifies the analysis for different antenna configurations, i.e., various multiple-input multiple-output (MIMO) systems. Accordingly, we propose a novel reliable network design strategy that is capable of appropriately adjusting the network parameters to meet desired design criteria. In addition, we discuss the diversity-multiplexing tradeoffs imposed by differently favored

  8. On relativistic models of strange stars

    Indian Academy of Sciences (India)

    tractable models of superdense stars in equilibrium. Several aspects of physical relevance of compact star models, based on Vaidya–Tikekar ansatz, have been investigated [7–10] by a number of workers. Mukherjee et al [11–13] indicated the possibility of using this set-up to describe models of the compact star like Her.

  9. Dependence of Computational Models on Input Dimension: Tractability of Approximation and Optimization Tasks

    Czech Academy of Sciences Publication Activity Database

    Kainen, P.C.; Kůrková, Věra; Sanguineti, M.

    2012-01-01

    Roč. 58, č. 2 (2012), s. 1203-1214 ISSN 0018-9448 R&D Projects: GA MŠk(CZ) ME10023; GA ČR GA201/08/1744; GA ČR GAP202/11/1368 Grant - others:CNR-AV ČR(CZ-IT) Project 2010–2012 Complexity of Neural-Network and Kernel Computational Models Institutional research plan: CEZ:AV0Z10300504 Keywords : dictionary-based computational models * high-dimensional approximation and optimization * model complexity * polynomial upper bounds Subject RIV: IN - Informatics, Computer Science Impact factor: 2.621, year: 2012

  10. Tractable approximations for probabilistic models: The adaptive Thouless-Anderson-Palmer mean field approach

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2001-01-01

    We develop an advanced mean field method for approximating averages in probabilistic data models that is based on the Thouless-Anderson-Palmer (TAP) approach of disorder physics. In contrast to conventional TAP, where the knowledge of the distribution of couplings between the random variables...... is required, our method adapts to the concrete couplings. We demonstrate the validity of our approach, which is so far restricted to models with nonglassy behavior, by replica calculations for a wide class of models as well as by simulations for a real data set....

  11. OCSEGen: Open Components and Systems Environment Generator

    Science.gov (United States)

    Tkachuk, Oksana

    2014-01-01

    To analyze a large system, one often needs to break it into smaller components. To analyze a component or unit under analysis, one needs to model its context of execution, called environment, which represents the components with which the unit interacts. Environment generation is a challenging problem, because the environment needs to be general enough to uncover unit errors, yet precise enough to make the analysis tractable. In this paper, we present a tool for automated environment generation for open components and systems. The tool, called OCSEGen, is implemented on top of the Soot framework. We present the tool's current support and discuss its possible future extensions.

  12. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology.

    Science.gov (United States)

    Koutinas, Michalis; Kiparissides, Alexandros; Pistikopoulos, Efstratios N; Mantalaris, Athanasios

    2012-01-01

    The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance in developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.

  13. Bioprocess systems engineering: transferring traditional process engineering principles to industrial biotechnology

    Directory of Open Access Journals (Sweden)

    Michalis Koutinas

    2012-10-01

    Full Text Available The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance in developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control & optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.

  14. BIOPROCESS SYSTEMS ENGINEERING: TRANSFERRING TRADITIONAL PROCESS ENGINEERING PRINCIPLES TO INDUSTRIAL BIOTECHNOLOGY

    Directory of Open Access Journals (Sweden)

    Michalis Koutinas

    2012-10-01

    Full Text Available The complexity of the regulatory network and the interactions that occur in the intracellular environment of microorganisms highlight the importance in developing tractable mechanistic models of cellular functions and systematic approaches for modelling biological systems. To this end, the existing process systems engineering approaches can serve as a vehicle for understanding, integrating and designing biological systems and processes. Here, we review the application of a holistic approach for the development of mathematical models of biological systems, from the initial conception of the model to its final application in model-based control and optimisation. We also discuss the use of mechanistic models that account for gene regulation, in an attempt to advance the empirical expressions traditionally used to describe micro-organism growth kinetics, and we highlight current and future challenges in mathematical biology. The modelling research framework discussed herein could prove beneficial for the design of optimal bioprocesses, employing rational and feasible approaches towards the efficient production of chemicals and pharmaceuticals.

  15. Heterogeneous expectations in monetary DSGE models

    NARCIS (Netherlands)

    Massaro, D.

    2013-01-01

    This paper derives a general New Keynesian framework with heterogeneous expectations by explicitly solving the micro-foundations underpinning the model. The resulting reduced form is analytically tractable and encompasses the representative rational agent benchmark as a special case. We specify a

  16. An empirically tractable model of optimal oil spills prevention in Russian sea harbours

    Energy Technology Data Exchange (ETDEWEB)

    Deissenberg, C. [CEFI-CNRS, Les Milles (France); Gurman, V.; Tsirlin, A. [RAS, Program Systems Inst., Pereslavl-Zalessky (Russian Federation); Ryumina, E. [Russian Academy of Sciences, Moscow (Russian Federation). Inst. of Economic Market Problems

    2001-07-01

    Based on previous theoretical work by Gottinger (1997, 1998), we propose a simple model of optimal monitoring of oil-related activities in harbour areas that is suitable for empirical estimation within the Russian-Ukrainian context, in spite of the poor availability of data in these countries. Specifically, the model indicates how to best allocate at the steady state a given monitoring budget between different monitoring activities. An approximate analytical solution to the optimization problem is derived, and a simple procedure for estimating the model on the basis of the actually available data is suggested. An application using data obtained for several harbours of the Black and Baltic Seas is given. It suggests that the current Russian monitoring practice could be much improved by better allocating the available monitoring resources. (Author)

  17. Comprehensive solutions to the Bloch equations and dynamical models for open two-level systems

    Science.gov (United States)

    Skinner, Thomas E.

    2018-01-01

    The Bloch equation and its variants constitute the fundamental dynamical model for arbitrary two-level systems. Many important processes, including those in more complicated systems, can be modeled and understood through the two-level approximation. It is therefore of widespread relevance, especially as it relates to understanding dissipative processes in current cutting-edge applications of quantum mechanics. Although the Bloch equation has been the subject of considerable analysis in the 70 years since its inception, there is still, perhaps surprisingly, significant work that can be done. This paper extends the scope of previous analyses. It provides a framework for more fully understanding the dynamics of dissipative two-level systems. A solution is derived that is compact, tractable, and completely general, in contrast to previous results. Any solution of the Bloch equation depends on three roots of a cubic polynomial that are crucial to the time dependence of the system. The roots are typically only sketched out qualitatively, with no indication of their dependence on the physical parameters of the problem. Degenerate roots, which modify the solutions, have been ignored altogether. Here the roots are obtained explicitly in terms of a single real-valued root that is expressed as a simple function of the system parameters. For the conventional Bloch equation, a simple graphical representation of this root is presented that makes evident the explicit time dependence of the system for each point in the parameter space. Several intuitive, visual models of system dynamics are developed. A Euclidean coordinate system is identified in which any generalized Bloch equation is separable, i.e., the sum of commuting rotation and relaxation operators. The time evolution in this frame is simply a rotation followed by relaxation at modified rates that play a role similar to the standard longitudinal and transverse rates. These rates are functions of the applied field, which
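    For reference, a minimal statement of the conventional Bloch equations in the rotating frame (sign conventions vary across the literature; ω1 is the applied field amplitude, Δω the resonance offset, T1 and T2 the longitudinal and transverse relaxation times, M0 the equilibrium magnetization):

```latex
\frac{dM_x}{dt} = \Delta\omega\, M_y - \frac{M_x}{T_2}, \qquad
\frac{dM_y}{dt} = -\Delta\omega\, M_x + \omega_1 M_z - \frac{M_y}{T_2}, \qquad
\frac{dM_z}{dt} = -\omega_1 M_y - \frac{M_z - M_0}{T_1}
```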

  18. Quantum lattice model solver HΦ

    Science.gov (United States)

    Kawamura, Mitsuaki; Yoshimi, Kazuyoshi; Misawa, Takahiro; Yamaji, Youhei; Todo, Synge; Kawashima, Naoki

    2017-08-01

    HΦ [aitch-phi] is a program package based on the Lanczos-type eigenvalue solution applicable to a broad range of quantum lattice models, i.e., arbitrary quantum lattice models with two-body interactions, including the Heisenberg model, the Kitaev model, the Hubbard model and the Kondo-lattice model. While it works well on PCs and PC-clusters, HΦ also runs efficiently on massively parallel computers, which considerably extends the tractable range of the system size. In addition, unlike most existing packages, HΦ supports finite-temperature calculations through the method of thermal pure quantum (TPQ) states. In this paper, we explain the theoretical background and user interface of HΦ. We also show the benchmark results of HΦ on supercomputers such as the K computer at RIKEN Advanced Institute for Computational Science (AICS) and SGI ICE XA (Sekirei) at the Institute for Solid State Physics (ISSP).

  19. The Drosophila melanogaster host model

    Science.gov (United States)

    Igboin, Christina O.; Griffen, Ann L.; Leys, Eugene J.

    2012-01-01

    The deleterious and sometimes fatal outcomes of bacterial infectious diseases are the net result of the interactions between the pathogen and the host, and the genetically tractable fruit fly, Drosophila melanogaster, has emerged as a valuable tool for modeling the pathogen–host interactions of a wide variety of bacteria. These studies have revealed that there is a remarkable conservation of bacterial pathogenesis and host defence mechanisms between higher host organisms and Drosophila. This review presents an in-depth discussion of the Drosophila immune response, the Drosophila killing model, and the use of the model to examine bacterial–host interactions. The recent introduction of the Drosophila model into the oral microbiology field is discussed, specifically the use of the model to examine Porphyromonas gingivalis–host interactions, and finally the potential uses of this powerful model system to further elucidate oral bacterial-host interactions are addressed. PMID:22368770

  20. The Drosophila melanogaster host model

    Directory of Open Access Journals (Sweden)

    Christina O. Igboin

    2012-02-01

    Full Text Available The deleterious and sometimes fatal outcomes of bacterial infectious diseases are the net result of the interactions between the pathogen and the host, and the genetically tractable fruit fly, Drosophila melanogaster, has emerged as a valuable tool for modeling the pathogen–host interactions of a wide variety of bacteria. These studies have revealed that there is a remarkable conservation of bacterial pathogenesis and host defence mechanisms between higher host organisms and Drosophila. This review presents an in-depth discussion of the Drosophila immune response, the Drosophila killing model, and the use of the model to examine bacterial–host interactions. The recent introduction of the Drosophila model into the oral microbiology field is discussed, specifically the use of the model to examine Porphyromonas gingivalis–host interactions, and finally the potential uses of this powerful model system to further elucidate oral bacterial-host interactions are addressed.

  1. The Drosophila melanogaster host model.

    Science.gov (United States)

    Igboin, Christina O; Griffen, Ann L; Leys, Eugene J

    2012-01-01

    The deleterious and sometimes fatal outcomes of bacterial infectious diseases are the net result of the interactions between the pathogen and the host, and the genetically tractable fruit fly, Drosophila melanogaster, has emerged as a valuable tool for modeling the pathogen-host interactions of a wide variety of bacteria. These studies have revealed that there is a remarkable conservation of bacterial pathogenesis and host defence mechanisms between higher host organisms and Drosophila. This review presents an in-depth discussion of the Drosophila immune response, the Drosophila killing model, and the use of the model to examine bacterial-host interactions. The recent introduction of the Drosophila model into the oral microbiology field is discussed, specifically the use of the model to examine Porphyromonas gingivalis-host interactions, and finally the potential uses of this powerful model system to further elucidate oral bacterial-host interactions are addressed.

  2. Thermodynamics of the hexagonal close-packed iron-nitrogen system from first-principles

    DEFF Research Database (Denmark)

    Bakkedal, Morten Bjørn

    to hexagonal systems and a numerically tractable extended equation of state is developed to describe thermodynamic equilibrium properties at finite temperature. The model is applied to ε-Fe3N specifically. Through the versatility of the model, equilibrium lattice parameters, the bulk modulus, and the thermal...... First-principles thermodynamic models are developed for the hexagonal close-packed ε-Fe-N system. The system can be considered as a hexagonal close-packed host lattice of iron atoms and with the nitrogen atoms residing on a sublattice formed by the octahedral interstices. The iron host lattice...... is assumed fixed. The models are developed entirely from first-principles calculations based on fundamental quantum mechanical calculation through the density functional theory approach with the atomic numbers and crystal structures as the only input parameters. A complete thermodynamic description should

  3. A two-state stochastic model for nanoparticle self-assembly: theory, computer simulations and applications

    International Nuclear Information System (INIS)

    Schwen, E M; Mazilu, I; Mazilu, D A

    2015-01-01

    We introduce a stochastic cooperative model for particle deposition and evaporation relevant to ionic self-assembly of nanoparticles with applications in surface fabrication and nanomedicine, and present a method for mapping our model onto the Ising model. The mapping process allows us to use the established results for the Ising model to describe the steady-state properties of our system. After completing the mapping process, we investigate the time dependence of particle density using the mean field approximation. We complement this theoretical analysis with Monte Carlo simulations that support our model. These techniques, which can be used separately or in combination, are useful as pedagogical tools because they are tractable mathematically and they apply equally well to many other physical systems with nearest-neighbour interactions including voter and epidemic models. (paper)
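    To make the deposition/evaporation picture concrete, here is a minimal Monte Carlo sketch of a two-state (occupied/empty) lattice with nearest-neighbour-dependent rates; the cooperative rate rule and all constants are illustrative assumptions, not the model of the paper:

```python
import numpy as np

def simulate(L=200, steps=200_000, k_dep=1.0, k_evap=0.5, coupling=0.3, seed=0):
    """Monte Carlo for a 1D two-state deposition/evaporation lattice.

    Each site is 0 (empty) or 1 (occupied); rates are biased by the number of
    occupied nearest neighbours (an illustrative cooperative rule).
    """
    rng = np.random.default_rng(seed)
    lattice = np.zeros(L, dtype=int)
    r_max = max(k_dep * (1 + 2 * coupling), k_evap)        # bound for rejection step
    for _ in range(steps):
        i = rng.integers(L)
        n_occ = lattice[(i - 1) % L] + lattice[(i + 1) % L]
        if lattice[i] == 0:
            rate = k_dep * (1.0 + coupling * n_occ)         # deposition attempt
        else:
            rate = k_evap * (1.0 - 0.5 * coupling * n_occ)  # evaporation attempt
        if rng.random() < rate / r_max:                     # accept move with prob. rate/r_max
            lattice[i] ^= 1                                 # flip the site
    return lattice.mean()

print("steady-state coverage ~", round(simulate(), 3))
```

    Averaging the resulting coverage over many runs is the simulation counterpart of the mean-field density evolution mentioned in the abstract.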

  4. Load-aware modeling for uplink cellular networks in a multi-channel environment

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Alouini, Mohamed-Slim

    2014-01-01

    We exploit tools from stochastic geometry to develop a tractable analytical approach for modeling uplink cellular networks. The developed model is load aware and accounts for per-user power control as well as the limited transmit power constraint

  5. Tissue P Systems With Channel States Working in the Flat Maximally Parallel Way.

    Science.gov (United States)

    Song, Bosheng; Perez-Jimenez, Mario J; Paun, Gheorghe; Pan, Linqiang

    2016-10-01

    Tissue P systems with channel states are a class of bio-inspired parallel computational models, where rules are used in a sequential manner (on each channel, at most one rule can be used at each step). In this work, tissue P systems with channel states working in a flat maximally parallel way are considered, where at each step, on each channel, a maximal set of applicable rules that pass from a given state to a unique next state is chosen, and each rule in the set is applied once. The computational power of such P systems is investigated. Specifically, it is proved that tissue P systems with channel states and antiport rules of length two are able to compute Parikh sets of finite languages, and such P systems with one cell and noncooperative symport rules can compute at least all Parikh sets of matrix languages. Some Turing universality results are also provided. Moreover, the NP-complete problem SAT is solved by tissue P systems with channel states, cell division and noncooperative symport rules working in the flat maximally parallel way; nevertheless, if channel states are not used, then such P systems working in the flat maximally parallel way can solve only tractable problems. These results show that channel states provide a frontier of tractability between efficiency and non-efficiency in the framework of tissue P systems with cell division (assuming P ≠ NP).

  6. Structured spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... dataset consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  7. Structured Spatio-temporal shot-noise Cox point process models, with a view to modelling forest fires

    DEFF Research Database (Denmark)

    Møller, Jesper; Diaz-Avalos, Carlos

    2010-01-01

    Spatio-temporal Cox point process models with a multiplicative structure for the driving random intensity, incorporating covariate information into temporal and spatial components, and with a residual term modelled by a shot-noise process, are considered. Such models are flexible and tractable fo...... data set consisting of 2796 days and 5834 spatial locations of fires. The model is compared with a spatio-temporal log-Gaussian Cox point process model, and likelihood-based methods are discussed to some extent....

  8. Dynamical systems on networks a tutorial

    CERN Document Server

    Porter, Mason A

    2016-01-01

    This volume is a tutorial for the study of dynamical systems on networks. It discusses both methodology and models, including spreading models for social and biological contagions. The authors focus especially on “simple” situations that are analytically tractable, because they are insightful and provide useful springboards for the study of more complicated scenarios. This tutorial, which also includes key pointers to the literature, should be helpful for junior and senior undergraduate students, graduate students, and researchers from mathematics, physics, and engineering who seek to study dynamical systems on networks but who may not have prior experience with graph theory or networks. Mason A. Porter is Professor of Nonlinear and Complex Systems at the Oxford Centre for Industrial and Applied Mathematics, Mathematical Institute, University of Oxford, UK. He is also a member of the CABDyN Complexity Centre and a Tutorial Fellow of Somerville College. James P. Gleeson is Professor of Industrial and Appli...

  9. Unified Tractable Model for Large-Scale Networks Using Stochastic Geometry: Analysis and Design

    KAUST Repository

    Afify, Laila H.

    2016-01-01

    about the interferers symbols can be approximated via the Gaussian signaling approach. The developed mathematical model presents twofold analysis unification for uplink and downlink cellular networks literature. It aligns the tangible decoding error

  10. Time Delay Systems Methods, Applications and New Trends

    CERN Document Server

    Vyhlídal, Tomáš; Niculescu, Silviu-Iulian; Pepe, Pierdomenico

    2012-01-01

    This volume is concerned with the control and dynamics of time delay systems; a research field with at least six-decade long history that has been very active especially in the past two decades. In parallel to the new challenges emerging from engineering, physics, mathematics, and economics, the volume covers several new directions including topology induced stability, large-scale interconnected systems, roles of networks in stability, and new trends in predictor-based control and consensus dynamics. The associated applications/problems are described by highly complex models, and require solving inverse problems as well as the development of new theories, mathematical tools, numerically-tractable algorithms for real-time control. The volume, which is targeted to present these developments in this rapidly evolving field, captures a careful selection of the most recent papers contributed by experts and collected under five parts: (i) Methodology: From Retarded to Neutral Continuous Delay Models, (ii) Systems, S...

  11. A Stochastic Geometry Model for Multi-hop Highway Vehicular Communication

    KAUST Repository

    Farooq, Muhammad Junaid; Elsawy, Hesham; Alouini, Mohamed-Slim

    2015-01-01

    dissemination. This paper exploits stochastic geometry to develop a tractable and accurate modeling framework to characterize the multi-hop transmissions for vehicular networks in a multi-lane highway setup. In particular, we study the tradeoffs between per

  12. Beyond GLMs: a generative mixture modeling approach to neural system identification.

    Directory of Open Access Journals (Sweden)

    Lucas Theis

    Full Text Available Generalized linear models (GLMs) represent a popular choice for the probabilistic characterization of neural spike responses. While GLMs are attractive for their computational tractability, they also impose strong assumptions and thus only allow for a limited range of stimulus-response relationships to be discovered. Alternative approaches exist that make only very weak assumptions but scale poorly to high-dimensional stimulus spaces. Here we seek an approach which can gracefully interpolate between the two extremes. We extend two frequently used special cases of the GLM, a linear and a quadratic model, by assuming that the spike-triggered and non-spike-triggered distributions can be adequately represented using Gaussian mixtures. Because we derive the model from a generative perspective, its components are easy to interpret as they correspond to, for example, the spike-triggered distribution and the interspike interval distribution. The model is able to capture complex dependencies on high-dimensional stimuli with far fewer parameters than other approaches such as histogram-based methods. The added flexibility comes at the cost of a non-concave log-likelihood. We show that in practice this does not have to be an issue and the mixture-based model is able to outperform generalized linear and quadratic models.
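    The generative idea can be sketched in a few lines: fit Gaussian mixtures to the spike-triggered and non-spike-triggered stimulus distributions, then convert them into a spike probability with Bayes' rule. The code below is an illustrative stand-in (the function name, the toy data, and the use of scikit-learn mixtures are our assumptions, not the authors' implementation):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def fit_mixture_encoder(stim, spikes, n_components=3):
    """Fit Gaussian mixtures to spike-triggered and non-spike-triggered stimuli
    and return a function estimating P(spike | stimulus) via Bayes' rule."""
    spk = GaussianMixture(n_components).fit(stim[spikes == 1])
    nospk = GaussianMixture(n_components).fit(stim[spikes == 0])
    prior = spikes.mean()                       # marginal spike probability

    def p_spike(x):
        log_p1 = spk.score_samples(x) + np.log(prior)
        log_p0 = nospk.score_samples(x) + np.log(1 - prior)
        return 1.0 / (1.0 + np.exp(log_p0 - log_p1))

    return p_spike

# toy usage with synthetic data (hypothetical 5-dimensional stimulus)
rng = np.random.default_rng(1)
stim = rng.normal(size=(2000, 5))
spikes = (stim[:, 0] + 0.5 * stim[:, 1] ** 2 + rng.normal(size=2000) > 1).astype(int)
p = fit_mixture_encoder(stim, spikes)
print(p(stim[:5]))                              # estimated spike probabilities
```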

  13. On modeling animal movements using Brownian motion with measurement error.

    Science.gov (United States)

    Pozdnyakov, Vladimir; Meyer, Thomas; Wang, Yu-Bo; Yan, Jun

    2014-02-01

    Modeling animal movements with Brownian motion (or more generally by a Gaussian process) has a long tradition in ecological studies. The recent Brownian bridge movement model (BBMM), which incorporates measurement errors, has been quickly adopted by ecologists because of its simplicity and tractability. We discuss some nontrivial properties of the discrete-time stochastic process that results from observing a Brownian motion with added normal noise at discrete times. In particular, we demonstrate that the observed sequence of random variables is not Markov. Consequently the expected occupation time between two successively observed locations does not depend on just those two observations; the whole path must be taken into account. Nonetheless, the exact likelihood function of the observed time series remains tractable; it requires only sparse matrix computations. The likelihood-based estimation procedure is described in detail and compared to the BBMM estimation.
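    The key computational point, that the exact likelihood of Brownian motion observed with additive normal error stays tractable, can be illustrated with a small dense one-dimensional Gaussian likelihood; the BBMM implementation itself relies on sparse matrix computations, so the helper below is only a sketch with illustrative parameter names:

```python
import numpy as np
from scipy.stats import multivariate_normal

def loglik(times, obs, sigma2, tau2, x0=0.0):
    """Exact Gaussian log-likelihood of Brownian motion observed with
    i.i.d. normal measurement error (one coordinate).

    Cov[B(s), B(t)] = sigma2 * min(s, t);  obs_i = B(t_i) + eps_i, eps_i ~ N(0, tau2).
    """
    t = np.asarray(times, dtype=float)
    cov = sigma2 * np.minimum.outer(t, t) + tau2 * np.eye(len(t))
    return multivariate_normal(mean=np.full(len(t), x0), cov=cov).logpdf(obs)

# toy usage: simulate a noisy Brownian track and evaluate the likelihood
rng = np.random.default_rng(2)
t = np.cumsum(rng.uniform(0.5, 1.5, size=50))                     # irregular observation times
path = np.cumsum(rng.normal(scale=np.sqrt(np.diff(np.r_[0.0, t])), size=50))
obs = path + rng.normal(scale=0.3, size=50)                       # add measurement error
print(loglik(t, obs, sigma2=1.0, tau2=0.09))
```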

  14. Exploring the tractability of the capped hose model

    NARCIS (Netherlands)

    Bosman, T. (Thomas); N.K. Olver (Neil)

    2017-01-01

    textabstractRobust network design concerns the design of networks to support uncertain or varying traffic patterns. An especially important case is the VPN problem, where the total traffic emanating from any node is bounded, but there are no further constraints on the traffic pattern. Recently,

  15. Organoid Models of Human and Mouse Ductal Pancreatic Cancer

    NARCIS (Netherlands)

    Boj, Sylvia F.; Hwang, Chang-Il; Baker, Lindsey A.; Chio, Iok In Christine; Engle, Dannielle D.; Corbo, Vincenzo; Jager, Myrthe; Ponz-Sarvise, Mariano; Tiriac, Herve; Spector, Mona S.; Gracanin, Ana; Oni, Tobiloba; Yu, Kenneth H.; van Boxtel, Ruben; Huch, Meritxell; Rivera, Keith D.; Wilson, John P.; Feigin, Michael E.; Oehlund, Daniel; Handly-Santana, Abram; Ardito-Abraham, Christine M.; Ludwig, Michael; Elyada, Ela; Alagesan, Brinda; Biffi, Giulia; Yordanov, Georgi N.; Delcuze, Bethany; Creighton, Brianna; Wright, Kevin; Park, Youngkyu; Morsink, Folkert H. M.; Molenaar, IQ; Borel Rinkes, Inne H.; Cuppen, Edwin; Hao, Yuan; Jin, Ying; Nijman, Isaac J.; Iacobuzio-Donahue, Christine; Leach, Steven D.; Pappin, Darryl J.; Hammell, Molly; Klimstra, David S.; Basturk, Olca; Hruban, Ralph H.; Offerhaus, George Johan; Vries, Robert G. J.; Clevers, Hans; Tuveson, David A.

    2015-01-01

    Pancreatic cancer is one of the most lethal malignancies due to its late diagnosis and limited response to treatment. Tractable methods to identify and interrogate pathways involved in pancreatic tumorigenesis are urgently needed. We established organoid models from normal and neoplastic murine and

  16. Long-Term Adult Feline Liver Organoid Cultures for Disease Modeling of Hepatic Steatosis

    NARCIS (Netherlands)

    Kruitwagen, H.S. (Hedwig S.); Oosterhoff, L.A. (Loes A.); Vernooij, I.G.W.H. (Ingrid G.W.H.); Schrall, I.M. (Ingrid M.); M.E. van Wolferen (Monique); Bannink, F. (Farah); Roesch, C. (Camille); van Uden, L. (Lisa); Molenaar, M.R. (Martijn R.); J.B. Helms (J. Bernd); G.C.M. Grinwis (Guy C.); M.M.A. Verstegen (Monique); L.J.W. van der Laan (Luc); M. Huch (Meritxell); N. Geijsen (Niels); R.G.J. Vries (Robert); H.C. Clevers (Hans); J. Rothuizen (J.); B.A. Schotanus (Baukje A.); C. Penning (Corine); B. Spee (B.)

    2017-01-01

    textabstractHepatic steatosis is a highly prevalent liver disease, yet research is hampered by the lack of tractable cellular and animal models. Steatosis also occurs in cats, where it can cause severe hepatic failure. Previous studies demonstrate the potential of liver organoids for modeling

  17. A simple toroidal shell model for the study of feedback stabilization of resistive wall modes in a tokamak plasma

    International Nuclear Information System (INIS)

    Jhang, Hogun

    2008-01-01

    A study is conducted on the feedback stabilization of resistive wall modes (RWMs) in a tokamak plasma using a toroidal shell model. An analytically tractable form of the RWM dispersion relation is derived in the presence of a set of discrete feedback coil currents. A parametric study is carried out to optimize the feedback system configuration. It is shown that the total toroidal angle of a resistive wall spanned by the feedback coils and the poloidal angular extent of a feedback coil are crucial parameters to determine the efficacy of the feedback system

  18. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Salem, Ahmed Sultan; Di Renzo, Marco; Alouini, Mohamed-Slim

    2016-01-01

    to the Nakagami-m fading in some special cases. However, neither the Rayleigh nor the Nakagami-m accounts for dominant specular components (DSCs) which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks

  19. Three mechanical models for blebbing and multi-blebbing

    KAUST Repository

    Woolley, T. E.

    2014-06-17

    Membrane protrusions known as blebs play important roles in many cellular phenomena. Here we present three mathematical models of the bleb formation, which use biological insights to produce phenotypically accurate pressure-driven expansions. First, we introduce a recently suggested solid mechanics framework that is able to create blebs through stretching the membrane. This framework is then extended to include reference state reconfigurations, which models membrane growth. Finally, the stretching and reconfiguring mechanical models are compared with a much simpler geometrically constrained solution. This allows us to demonstrate that simpler systems are able to capture much of the biological complexity despite more restrictive assumptions. Moreover, the simplicity of the spherical model allows us to consider multiple blebs in a tractable framework. © 2014 The authors 2014. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.

  20. Fluid Dynamic Models for Bhattacharyya-Based Discriminant Analysis.

    Science.gov (United States)

    Noh, Yung-Kyun; Hamm, Jihun; Park, Frank Chongwoo; Zhang, Byoung-Tak; Lee, Daniel D

    2018-01-01

    Classical discriminant analysis attempts to discover a low-dimensional subspace where class label information is maximally preserved under projection. Canonical methods for estimating the subspace optimize an information-theoretic criterion that measures the separation between the class-conditional distributions. Unfortunately, direct optimization of the information-theoretic criteria is generally non-convex and intractable in high-dimensional spaces. In this work, we propose a novel, tractable algorithm for discriminant analysis that considers the class-conditional densities as interacting fluids in the high-dimensional embedding space. We use the Bhattacharyya criterion as a potential function that generates forces between the interacting fluids, and derive a computationally tractable method for finding the low-dimensional subspace that optimally constrains the resulting fluid flow. We show that this model properly reduces to the optimal solution for homoscedastic data as well as for heteroscedastic Gaussian distributions with equal means. We also extend this model to discover optimal filters for discriminating Gaussian processes and provide experimental results and comparisons on a number of datasets.
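    As background, the Bhattacharyya criterion that drives the fluid model has a closed form for two Gaussians with means μ1, μ2 and covariances Σ1, Σ2 (a standard result, stated here for context rather than taken from the paper):

```latex
D_B = \frac{1}{8}\,(\mu_1-\mu_2)^{\top}\,\Sigma^{-1}\,(\mu_1-\mu_2)
    + \frac{1}{2}\ln\!\left(\frac{\det\Sigma}{\sqrt{\det\Sigma_1\,\det\Sigma_2}}\right),
\qquad \Sigma = \tfrac{1}{2}\left(\Sigma_1+\Sigma_2\right)
```

    The first term captures mean separation (the homoscedastic, LDA-like case) and the second captures covariance differences, which is consistent with the special-case reductions mentioned in the abstract.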

  1. Random regret minimization : Exploration of a new choice model for environmental and resource economics

    NARCIS (Netherlands)

    Thiene, M.; Boeri, M.; Chorus, C.G.

    2011-01-01

    This paper introduces the discrete choice model-paradigm of Random Regret Minimization (RRM) to the field of environmental and resource economics. The RRM-approach has been very recently developed in the context of travel demand modelling and presents a tractable, regret-based alternative to the

  2. Long-Term Adult Feline Liver Organoid Cultures for Disease Modeling of Hepatic Steatosis

    NARCIS (Netherlands)

    Kruitwagen, Hedwig S.; Oosterhoff, Loes A.; Vernooij, Ingrid G.W.H.; Schrall, Ingrid M.; van Wolferen, Monique E.; Bannink, Farah; Roesch, Camille; van Uden, Lisa; Molenaar, Martijn R.; Helms, J. Bernd; Grinwis, Guy C.M.; Verstegen, Monique M.A.; van der Laan, Luc J.W.; Huch, Meritxell; Geijsen, Niels; Vries, Robert G.; Clevers, Hans; Rothuizen, Jan; Schotanus, Baukje A.; Penning, Louis C.; Spee, Bart

    2017-01-01

    Hepatic steatosis is a highly prevalent liver disease, yet research is hampered by the lack of tractable cellular and animal models. Steatosis also occurs in cats, where it can cause severe hepatic failure. Previous studies demonstrate the potential of liver organoids for modeling genetic diseases.

  3. Predicting ecosystem functioning from plant traits: Results from a multi-scale ecophysiological modeling approach

    NARCIS (Netherlands)

    Wijk, van M.T.

    2007-01-01

    Ecosystem functioning is the result of processes working at a hierarchy of scales. The representation of these processes in a model that is mathematically tractable and ecologically meaningful is a big challenge. In this paper I describe an individual based model (PLACO-PLAnt COmpetition) that

  4. Structural elements regulating amyloidogenesis: a cholinesterase model system.

    Directory of Open Access Journals (Sweden)

    Létitia Jean

    2008-03-01

    Full Text Available Polymerization into amyloid fibrils is a crucial step in the pathogenesis of neurodegenerative syndromes. Amyloid assembly is governed by properties of the sequence backbone and specific side-chain interactions, since fibrils from unrelated sequences possess similar structures and morphologies. Therefore, characterization of the structural determinants driving amyloid aggregation is of fundamental importance. We investigated the forces involved in the amyloid assembly of a model peptide derived from the oligomerization domain of acetylcholinesterase (AChE), AChE(586-599), through the effect of single point mutations on beta-sheet propensity, conformation, fibrilization, surfactant activity, oligomerization and fibril morphology. AChE(586-599) was chosen due to its fibrilization tractability and AChE involvement in Alzheimer's disease. The results revealed how specific regions and residues can control AChE(586-599) assembly. Hydrophobic and/or aromatic residues were crucial for maintaining a high beta-strand propensity, for the conformational transition to beta-sheet, and for the first stage of aggregation. We also demonstrated that positively charged side-chains might be involved in electrostatic interactions, which could control the transition to beta-sheet, the oligomerization and assembly stability. Further interactions were also found to participate in the assembly. We showed that some residues were important for AChE(586-599) surfactant activity and that amyloid assembly might preferentially occur at an air-water interface. Consistently with the experimental observations and assembly models for other amyloid systems, we propose a model for AChE(586-599) assembly in which a steric-zipper formed through specific interactions (hydrophobic, electrostatic, cation-pi, SH-aromatic, metal chelation and polar-polar) would maintain the beta-sheets together. We also propose that the stacking between the strands in the beta-sheets along the fiber axis could

  5. Neurophysiology of Drosophila models of Parkinson's disease.

    Science.gov (United States)

    West, Ryan J H; Furmston, Rebecca; Williams, Charles A C; Elliott, Christopher J H

    2015-01-01

    We provide an insight into the role Drosophila has played in elucidating neurophysiological perturbations associated with Parkinson's disease- (PD-) related genes. Synaptic signalling deficits are observed in motor, central, and sensory systems. Given the neurological impact of disease causing mutations within these same genes in humans the phenotypes observed in fly are of significant interest. As such we observe four unique opportunities provided by fly nervous system models of Parkinson's disease. Firstly, Drosophila models are instrumental in exploring the mechanisms of neurodegeneration, with several PD-related mutations eliciting related phenotypes including sensitivity to energy supply and vesicular deformities. These are leading to the identification of plausible cellular mechanisms, which may be specific to (dopaminergic) neurons and synapses rather than general cellular phenotypes. Secondly, models show noncell autonomous signalling within the nervous system, offering the opportunity to develop our understanding of the way pathogenic signalling propagates, resembling Braak's scheme of spreading pathology in PD. Thirdly, the models link physiological deficits to changes in synaptic structure. While the structure-function relationship is complex, the genetic tractability of Drosophila offers the chance to separate fundamental changes from downstream consequences. Finally, the strong neuronal phenotypes permit relevant first in vivo drug testing.

  6. On stochastic geometry modeling of cellular uplink transmission with truncated channel inversion power control

    KAUST Repository

    Elsawy, Hesham; Hossain, Ekram

    2014-01-01

    Using stochastic geometry, we develop a tractable uplink modeling paradigm for outage probability and spectral efficiency in both single and multi-tier cellular wireless networks. The analysis accounts for per user equipment (UE) power control

  7. Learning-based stochastic object models for characterizing anatomical variations

    Science.gov (United States)

    Dolly, Steven R.; Lou, Yang; Anastasio, Mark A.; Li, Hua

    2018-03-01

    It is widely known that the optimization of imaging systems based on objective, task-based measures of image quality via computer-simulation requires the use of a stochastic object model (SOM). However, the development of computationally tractable SOMs that can accurately model the statistical variations in human anatomy within a specified ensemble of patients remains a challenging task. Previously reported numerical anatomic models lack the ability to accurately model inter-patient and inter-organ variations in human anatomy among a broad patient population, mainly because they are established on image data corresponding to a few patients and individual anatomic organs. This may introduce phantom-specific bias into computer-simulation studies, where the study result is heavily dependent on which phantom is used. In certain applications, however, databases of high-quality volumetric images and organ contours are available that can facilitate this SOM development. In this work, a novel and tractable methodology for learning a SOM and generating numerical phantoms from a set of volumetric training images is developed. The proposed methodology learns geometric attribute distributions (GAD) of human anatomic organs from a broad patient population, which characterize both centroid relationships between neighboring organs and anatomic shape similarity of individual organs among patients. By randomly sampling the learned centroid and shape GADs with the constraints of the respective principal attribute variations learned from the training data, an ensemble of stochastic objects can be created. The randomness in organ shape and position reflects the learned variability of human anatomy. To demonstrate the methodology, a SOM of an adult male pelvis is computed and examples of corresponding numerical phantoms are created.
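    One generic way to realize the "learn variation, then sample" step is a principal-component shape model: learn a mean shape and principal modes from co-registered training shapes, then perturb the mean along those modes. The sketch below uses this standard construction as a stand-in for the paper's geometric attribute distributions; the function name and toy data are ours:

```python
import numpy as np

def learn_and_sample_shapes(training_shapes, n_samples=5, n_modes=3, seed=6):
    """Toy sketch of sampling new organ shapes from learned geometric variation.

    training_shapes: array (n_patients, n_points * 3) of co-registered surface
    points.  We learn the mean shape and principal modes of variation (a PCA
    shape model) and draw new shapes by perturbing the mean along those modes.
    """
    rng = np.random.default_rng(seed)
    mean = training_shapes.mean(axis=0)
    centered = training_shapes - mean
    _, s, vt = np.linalg.svd(centered, full_matrices=False)
    std = s[:n_modes] / np.sqrt(len(training_shapes) - 1)   # per-mode standard deviation
    coeffs = rng.normal(size=(n_samples, n_modes)) * std    # random mode weights
    return mean + coeffs @ vt[:n_modes]

# toy usage with synthetic "shapes": 20 patients, 100 3-D surface points each
rng = np.random.default_rng(0)
shapes = rng.normal(size=(20, 300))
new_shapes = learn_and_sample_shapes(shapes)
print(new_shapes.shape)                                     # (5, 300)
```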

  8. Quadratic Term Structure Models in Discrete Time

    OpenAIRE

    Marco Realdon

    2006-01-01

    This paper extends the results on quadratic term structure models in continuous time to the discrete time setting. The continuous time setting can be seen as a special case of the discrete time one. Recursive closed form solutions for zero coupon bonds are provided even in the presence of multiple correlated underlying factors. Pricing bond options requires simple integration. Model parameters may well be time dependent without scuppering such tractability. Model estimation does not require a r...

  9. On use of image quality metrics for perceptual blur modeling: image/video compression case

    Science.gov (United States)

    Cha, Jae H.; Olson, Jeffrey T.; Preece, Bradley L.; Espinola, Richard L.; Abbott, A. Lynn

    2018-02-01

    Linear system theory is employed to make target acquisition performance predictions for electro-optical/infrared imaging systems where the modulation transfer function (MTF) may be imposed from a nonlinear degradation process. Previous research relying on image quality metrics (IQM) methods, which heuristically estimate perceived MTF, has supported that an average perceived MTF can be used to model some types of degradation such as image compression. Here, we discuss the validity of the IQM approach by mathematically analyzing the associated heuristics from the perspective of reliability, robustness, and tractability. Experiments with standard images compressed by x.264 encoding suggest that the compression degradation can be estimated by a perceived MTF within boundaries defined by well-behaved curves with marginal error. Our results confirm that the IQM linearizer methodology provides a credible tool for sensor performance modeling.

  10. Spatial-Temporal Correlation Properties of the 3GPP Spatial Channel Model and the Kronecker MIMO Channel Model

    Directory of Open Access Journals (Sweden)

    Cheng-Xiang Wang

    2007-02-01

    Full Text Available The performance of multiple-input multiple-output (MIMO) systems is greatly influenced by the spatial-temporal correlation properties of the underlying MIMO channels. This paper investigates the spatial-temporal correlation characteristics of the spatial channel model (SCM) in the Third Generation Partnership Project (3GPP) and the Kronecker-based stochastic model (KBSM) at three levels, namely, the cluster level, link level, and system level. The KBSM has both the spatial separability and spatial-temporal separability at all the three levels. The spatial-temporal separability is observed for the SCM only at the system level, but not at the cluster and link levels. The SCM shows the spatial separability at the link and system levels, but not at the cluster level since its spatial correlation is related to the joint distribution of the angle of arrival (AoA) and angle of departure (AoD). The KBSM with the Gaussian-shaped power azimuth spectrum (PAS) is found to fit best the 3GPP SCM in terms of the spatial correlations. Despite its simplicity and analytical tractability, the KBSM is restricted to model only the average spatial-temporal behavior of MIMO channels. The SCM provides more insights of the variations of different MIMO channel realizations, but the implementation complexity is relatively high.

  11. Stochastic modeling of virus capsid assembly pathways

    Science.gov (United States)

    Schwartz, Russell

    2009-03-01

    Virus capsids have become a key model system for understanding self-assembly due to their high complexity, robust and efficient assembly processes, and experimental tractability. Our ability to directly examine and manipulate capsid assembly kinetics in detail nonetheless remains limited, creating a need for computer models that can infer experimentally inaccessible features of the assembly process and explore the effects of hypothetical manipulations on assembly trajectories. We have developed novel algorithms for stochastic simulation of capsid assembly [1,2] that allow us to model capsid assembly over broad parameter spaces [3]. We apply these methods to study the nature of assembly pathway control in virus capsids as well as their sensitivity to assembly conditions and possible experimental interventions. [4pt] [1] F. Jamalyaria, R. Rohlfs, and R. Schwartz. J Comp Phys 204, 100 (2005). [0pt] [2] N. Misra and R. Schwartz. J Chem Phys 129, in press (2008). [0pt] [3] B. Sweeney, T. Zhang, and R. Schwartz. Biophys J 94, 772 (2008).
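    The flavour of stochastic capsid-assembly simulation can be conveyed by a toy Gillespie scheme in which partial capsids nucleate, grow, or shrink by single subunits; the cited simulators [1,2] track binding geometry and are far more detailed, so every rate constant and rule below is an illustrative assumption:

```python
import numpy as np

def gillespie_assembly(n_mono=300, cap_size=12, k_nuc=1e-4, k_on=1e-3,
                       k_off=0.2, t_max=500.0, seed=3):
    """Toy Gillespie simulation of capsid assembly by sequential subunit addition/loss."""
    rng = np.random.default_rng(seed)
    free, partials, complete, t = n_mono, [], 0, 0.0
    while t < t_max:
        a_nuc = k_nuc * free * (free - 1) / 2          # two monomers form a dimer
        a_grow = k_on * free * len(partials)           # some intermediate gains a subunit
        a_shrink = k_off * len(partials)               # some intermediate loses a subunit
        a0 = a_nuc + a_grow + a_shrink
        if a0 == 0:
            break
        t += rng.exponential(1.0 / a0)                 # time to next event
        r = rng.uniform(0, a0)                         # choose the event
        if r < a_nuc:
            free -= 2
            partials.append(2)
        elif r < a_nuc + a_grow:
            i = rng.integers(len(partials))
            free -= 1
            partials[i] += 1
            if partials[i] == cap_size:                # finished capsid
                partials.pop(i)
                complete += 1
        else:
            i = rng.integers(len(partials))
            free += 1
            partials[i] -= 1
            if partials[i] < 2:                        # dimer fell apart
                free += partials.pop(i)
    return complete, len(partials), free

print(gillespie_assembly())                            # (complete capsids, intermediates, free monomers)
```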

  12. Dynamic state estimation techniques for large-scale electric power systems

    International Nuclear Information System (INIS)

    Rousseaux, P.; Pavella, M.

    1991-01-01

    This paper presents the use of dynamic type state estimators for energy management in electric power systems. Various dynamic type estimators have been developed, but have never been implemented. This is primarily because of dimensionality problems posed by the conjunction of an extended Kalman filter with a large scale power system. This paper precisely focuses on how to circumvent the high dimensionality, especially prohibitive in the filtering step, by using a decomposition-aggregation hierarchical scheme; to appropriately model the power system dynamics, the authors introduce new state variables in the prediction step and rely on a load forecasting method. The combination of these two techniques succeeds in solving the overall dynamic state estimation problem not only in a tractable and realistic way, but also in compliance with real-time computational requirements. Further improvements are also suggested, bound to the specifics of the high voltage electric transmission systems

  13. Mouse Models of Breast Cancer: Platforms for Discovering Precision Imaging Diagnostics and Future Cancer Medicine.

    Science.gov (United States)

    Manning, H Charles; Buck, Jason R; Cook, Rebecca S

    2016-02-01

    Representing an enormous health care and socioeconomic challenge, breast cancer is the second most common cancer in the world and the second most common cause of cancer-related death. Although many of the challenges associated with preventing, treating, and ultimately curing breast cancer are addressable in the laboratory, successful translation of groundbreaking research to clinical populations is still impeded by important barriers. Particularly when compared with research on other types of solid tumors, breast cancer research is hampered by a lack of tractable in vivo model systems that accurately recapitulate the relevant clinical features of the disease. A primary objective of this article was to provide a generalizable overview of the types of in vivo model systems, with an emphasis primarily on murine models, that are widely deployed in preclinical breast cancer research. Major opportunities to advance precision cancer medicine facilitated by molecular imaging of preclinical breast cancer models are discussed. © 2016 by the Society of Nuclear Medicine and Molecular Imaging, Inc.

  14. Estimating Ambiguity Preferences and Perceptions in Multiple Prior Models: Evidence from the Field

    NARCIS (Netherlands)

    S.G. Dimmock (Stephen); R.R.P. Kouwenberg (Roy); O.S. Mitchell (Olivia); K. Peijnenburg (Kim)

    2015-01-01

    markdownabstractWe develop a tractable method to estimate multiple prior models of decision-making under ambiguity. In a representative sample of the U.S. population, we measure ambiguity attitudes in the gain and loss domains. We find that ambiguity aversion is common for uncertain events of

  15. Weather Derivatives and Stochastic Modelling of Temperature

    Directory of Open Access Journals (Sweden)

    Fred Espen Benth

    2011-01-01

    Full Text Available We propose a continuous-time autoregressive model for the temperature dynamics with volatility being the product of a seasonal function and a stochastic process. We use the Barndorff-Nielsen and Shephard model for the stochastic volatility. The proposed temperature dynamics is flexible enough to model temperature data accurately, and at the same time being analytically tractable. Futures prices for commonly traded contracts at the Chicago Mercantile Exchange on indices like cooling- and heating-degree days and cumulative average temperatures are computed, as well as option prices on them.
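    A reduced version of such a temperature model, an Ornstein-Uhlenbeck (CAR(1)) process around a seasonal mean with a deterministic seasonal volatility in place of the full Barndorff-Nielsen and Shephard specification, can be simulated in a few lines; all parameters are illustrative:

```python
import numpy as np

def simulate_temperature(days=730, kappa=0.2, mean_level=10.0, season_amp=12.0,
                         vol_base=2.0, vol_amp=0.8, seed=4):
    """Euler simulation of a seasonal Ornstein-Uhlenbeck (CAR(1)) temperature model:
    T_t = s(t) + X_t,  dX_t = -kappa * X_t dt + sigma(t) dW_t  (daily time step)."""
    rng = np.random.default_rng(seed)
    t = np.arange(days)
    season = mean_level + season_amp * np.sin(2 * np.pi * (t - 100) / 365.25)   # seasonal mean s(t)
    sigma = vol_base * (1 + vol_amp * np.cos(2 * np.pi * t / 365.25))           # seasonal volatility
    x = np.zeros(days)
    for i in range(1, days):
        x[i] = x[i - 1] - kappa * x[i - 1] + sigma[i - 1] * rng.normal()        # mean-reverting residual
    return season + x

temps = simulate_temperature()
print(temps[:5].round(2))
```

    Degree-day indices (and hence futures payoffs) can then be computed by summing simple functions of the simulated daily temperatures.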

  16. Adaptive control using neural networks and approximate models.

    Science.gov (United States)

    Narendra, K S; Mukhopadhyay, S

    1997-01-01

    The NARMA model is an exact representation of the input-output behavior of finite-dimensional nonlinear discrete-time dynamical systems in a neighborhood of the equilibrium state. However, it is not convenient for purposes of adaptive control using neural networks due to its nonlinear dependence on the control input. Hence, quite often, approximate methods are used for realizing the neural controllers to overcome computational complexity. In this paper, we introduce two classes of models which are approximations to the NARMA model, and which are linear in the control input. The latter fact substantially simplifies both the theoretical analysis as well as the practical implementation of the controller. Extensive simulation studies have shown that the neural controllers designed using the proposed approximate models perform very well, and in many cases even better than an approximate controller designed using the exact NARMA model. In view of their mathematical tractability as well as their success in simulation studies, a case is made in this paper that such approximate input-output models warrant a detailed study in their own right.
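    As a sketch of why linearity in the control input helps, approximate models of the kind discussed here (often referred to as NARMA-L2 in later presentations; the exact notation in the paper may differ) take the form:

```latex
% Approximate NARMA model, linear in the current control input u(k)
\hat y(k+d) = \hat f\big[y(k),\dots,y(k-n+1),\,u(k-1),\dots,u(k-n+1)\big]
            + \hat g\big[y(k),\dots,y(k-n+1),\,u(k-1),\dots,u(k-n+1)\big]\, u(k)
% so a tracking control law follows algebraically:
% u(k) = \big( y^{*}(k+d) - \hat f[\,\cdot\,] \big) / \hat g[\,\cdot\,]
```

    Because u(k) enters affinely, the control input can be solved for directly once the two neural networks approximating f and g are trained, avoiding the iterative inversion required by the exact NARMA representation.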

  17. Neurophysiology of Drosophila Models of Parkinson’s Disease

    Directory of Open Access Journals (Sweden)

    Ryan J. H. West

    2015-01-01

    Full Text Available We provide an insight into the role Drosophila has played in elucidating neurophysiological perturbations associated with Parkinson’s disease- (PD-) related genes. Synaptic signalling deficits are observed in motor, central, and sensory systems. Given the neurological impact of disease causing mutations within these same genes in humans the phenotypes observed in fly are of significant interest. As such we observe four unique opportunities provided by fly nervous system models of Parkinson’s disease. Firstly, Drosophila models are instrumental in exploring the mechanisms of neurodegeneration, with several PD-related mutations eliciting related phenotypes including sensitivity to energy supply and vesicular deformities. These are leading to the identification of plausible cellular mechanisms, which may be specific to (dopaminergic) neurons and synapses rather than general cellular phenotypes. Secondly, models show noncell autonomous signalling within the nervous system, offering the opportunity to develop our understanding of the way pathogenic signalling propagates, resembling Braak’s scheme of spreading pathology in PD. Thirdly, the models link physiological deficits to changes in synaptic structure. While the structure-function relationship is complex, the genetic tractability of Drosophila offers the chance to separate fundamental changes from downstream consequences. Finally, the strong neuronal phenotypes permit relevant first in vivo drug testing.

  18. Viability and resilience of complex systems concepts, methods and case studies from ecology and society

    CERN Document Server

    Deffuant, Guillaume

    2011-01-01

    One common characteristic of a complex system is its ability to withstand major disturbances and the capacity to rebuild itself. Understanding how such systems demonstrate resilience by absorbing or recovering from major external perturbations requires both quantitative foundations and a multidisciplinary view of the topic. This book demonstrates how new methods can be used to identify the actions favouring the recovery from perturbations on a variety of examples including the dynamics of bacterial biofilms, grassland savannahs, language competition and Internet social networking sites. The reader is taken through an introduction to the idea of resilience and viability and shown the mathematical basis of the techniques used to analyse systems. The idea of individual or agent-based modelling of complex systems is introduced and related to analytically tractable approximations of such models. A set of case studies illustrates the use of the techniques in real applications, and the final section describes how on...

  19. Projected Dipole Model for Quantum Plasmonics

    DEFF Research Database (Denmark)

    Yan, Wei; Wubs, Martijn; Mortensen, N. Asger

    2015-01-01

    Quantum effects of plasmonic phenomena have been explored through ab initio studies, but only for exceedingly small metallic nanostructures, leaving most experimentally relevant structures too large to handle. We propose instead an effective description with the computationally appealing features of classical electrodynamics, while quantum properties are described accurately through an infinitely thin layer of dipoles oriented normally to the metal surface. The nonlocal polarizability of the dipole layer-the only introduced parameter-is mapped from the free-electron distribution near the metal surface as obtained with 1D quantum calculations, such as time-dependent density-functional theory (TDDFT), and is determined once and for all. The model can be applied in two and three dimensions to any system size that is tractable within classical electrodynamics, while capturing quantum plasmonic aspects...

  20. Stochastic Modeling and Generation of Partially Polarized or Partially Coherent Electromagnetic Waves

    Science.gov (United States)

    Davis, Brynmor; Kim, Edward; Piepmeier, Jeffrey; Hildebrand, Peter H. (Technical Monitor)

    2001-01-01

    Many new Earth remote-sensing instruments are embracing both the advantages and added complexity that result from interferometric or fully polarimetric operation. To increase instrument understanding and functionality, a model of the signals these instruments measure is presented. A stochastic model is used as it recognizes the non-deterministic nature of any real-world measurements while also providing a tractable mathematical framework. A stationary, Gaussian-distributed model structure is proposed. Temporal and spectral correlation measures provide a statistical description of the physical properties of coherence and polarization state. From this relationship the model is mathematically defined. The model is shown to be unique for any set of physical parameters. A method of realizing the model (necessary for applications such as synthetic calibration-signal generation) is given and computer simulation results are presented. The signals are constructed using the output of a multi-input multi-output linear filter system, driven with white noise.
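
    A minimal numerical sketch of the generation idea described above (white noise driving a multi-input multi-output linear filter): the filter taps, mixing matrix, and sample count below are illustrative choices, not parameters taken from the record.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000                      # number of complex samples per channel

# Two independent complex white-noise drivers.
w = (rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))) / np.sqrt(2)

# Temporal shaping: a short FIR low-pass filter sets the coherence time / bandwidth.
h = np.ones(8) / np.sqrt(8)
x = np.vstack([np.convolve(wi, h, mode="same") for wi in w])

# Static 2x2 mixing matrix sets the correlation between the two field components,
# i.e. the polarization state of the synthetic electromagnetic signal.
M = np.array([[1.0, 0.0],
              [0.6, 0.8]])
e = M @ x                        # e[0], e[1]: two partially correlated field components

# Coherency (polarization) matrix and degree of polarization.
J = e @ e.conj().T / n
dop = np.sqrt(1 - 4 * np.linalg.det(J).real / np.trace(J).real ** 2)
print("coherency matrix:\n", np.round(J, 3))
print("degree of polarization:", round(float(dop), 3))
```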

  1. Multimodal sensorimotor system in unicellular zoospores of a fungus.

    Science.gov (United States)

    Swafford, Andrew J M; Oakley, Todd H

    2018-01-19

    Complex sensory systems often underlie critical behaviors, including avoiding predators and locating prey, mates and shelter. Multisensory systems that control motor behavior even appear in unicellular eukaryotes, such as Chlamydomonas, which are important laboratory models for sensory biology. However, we know of no unicellular opisthokonts that control motor behavior using a multimodal sensory system. Therefore, existing single-celled models for multimodal sensorimotor integration are very distantly related to animals. Here, we describe a multisensory system that controls the motor function of unicellular fungal zoospores. We found that zoospores of Allomyces arbusculus exhibit both phototaxis and chemotaxis. Furthermore, we report that closely related Allomyces species respond to either the chemical or the light stimuli presented in this study, not both, and likely do not share this multisensory system. This diversity of sensory systems within Allomyces provides a rare example of a comparative framework that can be used to examine the evolution of sensory systems following the gain/loss of available sensory modalities. The tractability of Allomyces and related fungi as laboratory organisms will facilitate detailed mechanistic investigations into the genetic underpinnings of novel photosensory systems, and how multisensory systems may have functioned in early opisthokonts before multicellularity allowed for the evolution of specialized cell types. © 2018. Published by The Company of Biologists Ltd.

  2. A New Simple Model for Underwater Wireless Optical Channels in the Presence of Air Bubbles

    KAUST Repository

    Zedini, Emna

    2018-01-15

    A novel statistical model is proposed to characterize turbulence-induced fading in underwater wireless optical channels in the presence of air bubbles for fresh and salty waters, based on experimental data. In this model, the channel irradiance fluctuations are characterized by the mixture Exponential-Gamma distribution. We use the expectation maximization (EM) algorithm to obtain the maximum likelihood parameter estimation of the new model. Interestingly, the proposed model is shown to provide a perfect fit with the measured data under all the channel conditions for both types of water. The major advantage of the new model is that it has a simple mathematical form making it attractive from a performance analysis point of view. Indeed, the application of the Exponential-Gamma model leads to closed-form and analytically tractable expressions for key system performance metrics such as the outage probability and the average bit-error rate.
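
    The mixture Exponential-Gamma density used in the record has a simple closed form; the sketch below evaluates the density and draws irradiance samples from it. The mixture weight, exponential scale, and Gamma shape/scale values are invented for illustration, whereas the record fits such parameters to measured data via the EM algorithm.

```python
import numpy as np
from scipy.stats import expon, gamma

# Illustrative mixture parameters (weight, exponential scale, Gamma shape/scale).
w, lam, k, theta = 0.4, 0.8, 3.0, 0.5

def mixture_pdf(x):
    """Mixture Exponential-Gamma density f(x) = w*Exp(lam) + (1-w)*Gamma(k, theta)."""
    return w * expon.pdf(x, scale=lam) + (1 - w) * gamma.pdf(x, a=k, scale=theta)

def mixture_rvs(size, rng=np.random.default_rng(0)):
    """Draw samples by first choosing the mixture component, then sampling it."""
    comp = rng.random(size) < w
    return np.where(comp,
                    rng.exponential(scale=lam, size=size),
                    rng.gamma(shape=k, scale=theta, size=size))

samples = mixture_rvs(100_000)
print("mean irradiance (samples):", samples.mean())
print("density at x = 1:", float(mixture_pdf(np.array([1.0]))[0]))
```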

  4. Building Systems from Scratch: an Exploratory Study of Students Learning About Climate Change

    Science.gov (United States)

    Puttick, Gillian; Tucker-Raymond, Eli

    2018-01-01

    Science and computational practices such as modeling and abstraction are critical to understanding the complex systems that are integral to climate science. Given the demonstrated affordances of game design in supporting such practices, we implemented a free 4-day intensive workshop for middle school girls that focused on using the visual programming environment, Scratch, to design games to teach others about climate change. The experience was carefully constructed so that girls of widely differing levels of experience were able to engage in a cycle of game design. This qualitative study aimed to explore the representational choices the girls made as they took up aspects of climate change systems and modeled them in their games. Evidence points to the ways in which designing games about climate science fostered emergent systems thinking and engagement in modeling practices as learners chose what to represent in their games, grappled with the realism of their respective representations, and modeled interactions among systems components. Given the girls' levels of programming skill, parts of systems were more tractable to create than others. The educational purpose of the games was important to the girls' overall design experience, since it influenced their choice of topic, and challenged their emergent understanding of climate change as a systems problem.

  5. The reduced kinome of Ostreococcus tauri: core eukaryotic signalling components in a tractable model species.

    Science.gov (United States)

    Hindle, Matthew M; Martin, Sarah F; Noordally, Zeenat B; van Ooijen, Gerben; Barrios-Llerena, Martin E; Simpson, T Ian; Le Bihan, Thierry; Millar, Andrew J

    2014-08-02

    The current knowledge of eukaryote signalling originates from phenotypically diverse organisms. There is a pressing need to identify conserved signalling components among eukaryotes, which will lead to the transfer of knowledge across kingdoms. Two useful properties of a eukaryote model for signalling are (1) reduced signalling complexity, and (2) conservation of signalling components. The alga Ostreococcus tauri is described as the smallest free-living eukaryote. With fewer than 8,000 genes, it represents a highly constrained genomic palette. Our survey revealed 133 protein kinases and 34 protein phosphatases (1.7% and 0.4% of the proteome). We conducted phosphoproteomic experiments and constructed domain structures and phylogenies for the catalytic protein kinases. For each of the major kinase families we review the completeness and divergence of O. tauri representatives in comparison to the well-studied kinomes of the laboratory models Arabidopsis thaliana and Saccharomyces cerevisiae, and of Homo sapiens. Many kinase clades in O. tauri were reduced to a single member, in preference to the loss of family diversity, whereas the TKL and ABC1 clades were expanded. We also identified kinases that have been lost in A. thaliana but retained in O. tauri. For three contrasting eukaryotic pathways - TOR, MAPK, and the circadian clock - we established the subset of conserved components and demonstrate conserved sites of substrate phosphorylation and kinase motifs. We conclude that O. tauri satisfies our two central requirements. Several of its kinases are more closely related to H. sapiens orthologs than S. cerevisiae is to H. sapiens. The greatly reduced kinome of O. tauri is therefore a suitable model for signalling in free-living eukaryotes.

  6. Dependent defaults and losses with factor copula models

    Directory of Open Access Journals (Sweden)

    Ackerer Damien

    2017-12-01

    Full Text Available We present a class of flexible and tractable static factor models for the term structure of joint default probabilities, the factor copula models. These high-dimensional models remain parsimonious with pair-copula constructions, and nest many standard models as special cases. The loss distribution of a portfolio of contingent claims can be exactly and efficiently computed when individual losses are discretely supported on a finite grid. Numerical examples study the key features affecting the loss distribution and multi-name credit derivatives prices. An empirical exercise illustrates the flexibility of our approach by fitting credit index tranche prices.
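
    To make the factor-copula idea concrete, here is a deliberately simplified one-factor Gaussian copula simulation of correlated defaults and the resulting portfolio loss distribution; the paper's models are richer (pair-copula constructions, a term structure of default probabilities), and the portfolio size, default probability, correlation, and loss given default below are invented for illustration only.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
n_names, n_sims = 100, 50_000
p, rho, lgd = 0.02, 0.3, 0.6          # default prob., factor correlation, loss given default
threshold = norm.ppf(p)

# One-factor model: X_i = sqrt(rho)*Z + sqrt(1-rho)*eps_i, default if X_i < threshold.
Z = rng.standard_normal((n_sims, 1))              # common (systematic) factor
eps = rng.standard_normal((n_sims, n_names))      # idiosyncratic factors
X = np.sqrt(rho) * Z + np.sqrt(1 - rho) * eps
defaults = X < threshold

loss = lgd * defaults.mean(axis=1)                # fractional portfolio loss per scenario
print("expected loss:", loss.mean())
print("99.9% loss quantile:", np.quantile(loss, 0.999))
```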

  7. Performance Analysis of Multi-Hop Heterodyne FSO Systems over Malaga Turbulent Channels with Pointing Error Using Mixture Gamma Distribution

    KAUST Repository

    Alheadary, Wael Ghazy

    2017-11-16

    This work investigates the end-to-end performance of a free space optical amplify-and-forward relaying system using heterodyne detection over Malaga turbulence channels in the presence of pointing errors. In order to overcome the analytical difficulties of the proposed composite channel model, we employed the mixture Gamma (MG) distribution. The proposed model provides a highly accurate and tractable approximation simply by adjusting some parameters. More specifically, we derived a new closed-form expression for the average bit error rate employing rectangular quadrature amplitude modulation, in terms of the MG distribution and the generalized power series of the Meijer G-function. The closed-form expression has been validated numerically and asymptotically at high signal-to-noise ratio.

  8. Viral persistence, liver disease and host response in Hepatitis C-like virus rat model

    DEFF Research Database (Denmark)

    Trivedi, Sheetal; Murthy, Satyapramod; Sharma, Himanshu

    2018-01-01

    The lack of a relevant, tractable, and immunocompetent animal model for hepatitis C virus (HCV) has severely impeded investigations of viral persistence, immunity and pathogenesis. In the absence of immunocompetent models with robust HCV infection, homolog hepaciviruses in their natural host could potentially provide useful surrogate models. We isolated a rodent hepacivirus (RHV) from wild rats (Rattus norvegicus), RHV-rn1, acquired the complete viral genome sequence and developed an infectious reverse genetics system. RHV-rn1 resembles HCV in genomic features including the pattern of polyprotein cleavage sites and secondary structures in the viral 5' and 3' UTRs. We used site-directed and random mutagenesis to determine that only the first of the two miR-122 seed sites in viral 5'UTR is required for viral replication and persistence in rats. Next, we used the clone derived virus progeny to infect...

  9. Upside-Down but Headed in the Right Direction: Review of the Highly Versatile Cassiopea xamachana System

    Directory of Open Access Journals (Sweden)

    Aki H. Ohdera

    2018-04-01

    Full Text Available The upside-down jellyfish Cassiopea xamachana (Scyphozoa: Rhizostomeae) has been predominantly studied to understand its interaction with the endosymbiotic dinoflagellate algae Symbiodinium. As an easily culturable and tractable cnidarian model, it is an attractive alternative to stony corals for understanding the mechanisms driving establishment and maintenance of symbiosis. Cassiopea is also unique in requiring the symbiont in order to complete its transition to the adult stage, thereby providing an excellent model to understand symbiosis-driven development and evolution. Recently, the Cassiopea research system has gained interest beyond symbiosis in fields related to embryology, climate ecology, behavior, and more. With these developments, resources including genomes, transcriptomes, and laboratory protocols are steadily increasing. This review provides an overview of the broad range of interdisciplinary research that has utilized the Cassiopea model and highlights the advantages of using the model for future research.

  10. Cellular Clocks : Coupled Circadian Dispatch and Cell Division Cycles

    NARCIS (Netherlands)

    Merrow, Martha; Roenneberg, Till

    2004-01-01

    Gating of cell division by the circadian clock is well known, yet its mechanism is little understood. Genetically tractable model systems have led to new hypotheses and questions concerning the coupling of these two cellular cycles.

  11. A Stochastic Geometry Framework for LOS/NLOS Propagation in Dense Small Cell Networks

    DEFF Research Database (Denmark)

    Galiotto, Carlo; Kiilerich Pratas, Nuno; Marchetti, Nicola

    2015-01-01

    The need to carry out analytical studies of wireless systems often motivates the usage of simplified models which, despite their tractability, can easily lead to an overestimation of the achievable performance. In the case of dense small cells networks, the standard single slope path-loss model h...

  12. Thermodynamic laws in isolated systems.

    Science.gov (United States)

    Hilbert, Stefan; Hänggi, Peter; Dunkel, Jörn

    2014-12-01

    The recent experimental realization of exotic matter states in isolated quantum systems and the ensuing controversy about the existence of negative absolute temperatures demand a careful analysis of the conceptual foundations underlying microcanonical thermostatistics. Here we provide a detailed comparison of the most commonly considered microcanonical entropy definitions, focusing specifically on whether they satisfy or violate the zeroth, first, and second laws of thermodynamics. Our analysis shows that, for a broad class of systems that includes all standard classical Hamiltonian systems, only the Gibbs volume entropy fulfills all three laws simultaneously. To avoid ambiguities, the discussion is restricted to exact results and analytically tractable examples.
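
    For readers unfamiliar with the two main microcanonical entropy candidates compared in the record, the standard definitions used in this literature are reproduced below (H is the Hamiltonian, Θ the Heaviside step function, and ε a constant with the dimension of energy); the positivity of the Gibbs temperature is what underlies the "no negative absolute temperatures" conclusion.

```latex
% Integrated and differential densities of states:
\Omega(E) = \operatorname{Tr}\,\Theta(E - H), \qquad
\omega(E) = \frac{\partial \Omega}{\partial E} = \operatorname{Tr}\,\delta(E - H)

% Gibbs (volume) entropy versus Boltzmann (surface) entropy:
S_{\mathrm{G}}(E) = k_{\mathrm{B}} \ln \Omega(E), \qquad
S_{\mathrm{B}}(E) = k_{\mathrm{B}} \ln\!\bigl[\epsilon\,\omega(E)\bigr]

% The Gibbs temperature is always non-negative:
T_{\mathrm{G}}(E) = \left(\frac{\partial S_{\mathrm{G}}}{\partial E}\right)^{-1}
                  = \frac{\Omega(E)}{k_{\mathrm{B}}\,\omega(E)} \;\ge\; 0
```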

  13. The ESX system in Bacillus subtilis mediates protein secretion.

    Directory of Open Access Journals (Sweden)

    Laura A Huppert

    Full Text Available Esat-6 protein secretion systems (ESX or Ess) are required for the virulence of several human pathogens, most notably Mycobacterium tuberculosis and Staphylococcus aureus. These secretion systems are defined by a conserved FtsK/SpoIIIE family ATPase and one or more WXG100 family secreted substrates. Gene clusters coding for ESX systems have been identified amongst many organisms, including the highly tractable model system, Bacillus subtilis. In this study, we demonstrate that the B. subtilis yuk/yue locus codes for a nonessential ESX secretion system. We develop a functional secretion assay to demonstrate that each of the locus gene products is specifically required for secretion of the WXG100 virulence factor homolog, YukE. We then employ an unbiased approach to search for additional secreted substrates. By quantitative profiling of culture supernatants, we find that YukE may be the sole substrate that depends on the FtsK/SpoIIIE family ATPase for secretion. We discuss potential functional implications for secretion of a unique substrate.

  14. Random blebbing motion: A simple model linking cell structural properties to migration characteristics

    Science.gov (United States)

    Woolley, Thomas E.; Gaffney, Eamonn A.; Goriely, Alain

    2017-07-01

    If the plasma membrane of a cell is able to delaminate locally from its actin cortex, a cellular bleb can be produced. Blebs are pressure-driven protrusions, which are noteworthy for their ability to produce cellular motion. Starting from a general continuum mechanics description, we restrict ourselves to considering cell and bleb shapes that maintain approximately spherical forms. From this assumption, we obtain a tractable algebraic system for bleb formation. By including cell-substrate adhesions, we can model blebbing cell motility. Further, by considering mechanically isolated blebbing events, which are randomly distributed over the cell, we can derive equations linking the macroscopic migration characteristics to the microscopic structural parameters of the cell. This multiscale modeling framework is then used to provide parameter estimates, which are in agreement with current experimental data. In summary, the construction of the mathematical model provides testable relationships between the bleb size and cell motility.

  16. Regional Community Climate Simulations with variable resolution meshes in the Community Earth System Model

    Science.gov (United States)

    Zarzycki, C. M.; Gettelman, A.; Callaghan, P.

    2017-12-01

    Accurately predicting weather extremes such as precipitation (floods and droughts) and temperature (heat waves) requires high resolution to resolve mesoscale dynamics and topography at horizontal scales of 10-30 km. Simulating such resolutions globally for climate scales (years to decades) remains computationally impractical. Simulating only a small region of the planet is more tractable at these scales for climate applications. This work describes global simulations using variable-resolution static meshes with multiple dynamical cores that target the continental United States using developmental versions of the Community Earth System Model version 2 (CESM2). CESM2 is tested in idealized, aquaplanet and full-physics configurations to evaluate variable-mesh simulations against uniform high- and uniform low-resolution simulations at resolutions down to 15 km. Different physical parameterization suites are also evaluated to gauge their sensitivity to resolution. Idealized variable-resolution mesh cases compare well to high-resolution tests. More recent versions of the atmospheric physics, including cloud schemes for CESM2, are more stable with respect to changes in horizontal resolution. Most of the sensitivity is due to the timestep and to interactions between deep convection and large-scale condensation, as expected from the closure methods. The resulting full-physics model produces a climate comparable to the global low-resolution mesh and similar high-frequency statistics in the high-resolution region. Some biases are reduced (orographic precipitation in the western United States), but biases do not necessarily go away at high resolution (e.g., summertime (JJA) surface temperature). The simulations are able to reproduce uniform high-resolution results, making them an effective tool for regional climate studies; they are available in CESM2.

  17. Yeast as a Heterologous Model System to Uncover Type III Effector Function.

    Directory of Open Access Journals (Sweden)

    Crina Popa

    2016-02-01

    Full Text Available Type III effectors (T3E are key virulence proteins that are injected by bacterial pathogens inside the cells of their host to subvert cellular processes and contribute to disease. The budding yeast Saccharomyces cerevisiae represents an important heterologous system for the functional characterisation of T3E proteins in a eukaryotic environment. Importantly, yeast contains eukaryotic processes with low redundancy and are devoid of immunity mechanisms that counteract T3Es and mask their function. Expression in yeast of effectors from both plant and animal pathogens that perturb conserved cellular processes often resulted in robust phenotypes that were exploited to elucidate effector functions, biochemical properties, and host targets. The genetic tractability of yeast and its amenability for high-throughput functional studies contributed to the success of this system that, in recent years, has been used to study over 100 effectors. Here, we provide a critical view on this body of work and describe advantages and limitations inherent to the use of yeast in T3E research. "Favourite" targets of T3Es in yeast are cytoskeleton components and small GTPases of the Rho family. We describe how mitogen-activated protein kinase (MAPK signalling, vesicle trafficking, membrane structures, and programmed cell death are also often altered by T3Es in yeast and how this reflects their function in the natural host. We describe how effector structure-function studies and analysis of candidate targeted processes or pathways can be carried out in yeast. We critically analyse technologies that have been used in yeast to assign biochemical functions to T3Es, including transcriptomics and proteomics, as well as suppressor, gain-of-function, or synthetic lethality screens. We also describe how yeast can be used to select for molecules that block T3E function in search of new antibacterial drugs with medical applications. Finally, we provide our opinion on the limitations

  18. A Report on Simulation-Driven Reliability and Failure Analysis of Large-Scale Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Wan, Lipeng [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Wang, Feiyi [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Oral, H. Sarp [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Vazhkudai, Sudharshan S. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Cao, Qing [Univ. of Tennessee, Knoxville, TN (United States)

    2014-11-01

    High-performance computing (HPC) storage systems provide data availability and reliability using various hardware and software fault tolerance techniques. Usually, reliability and availability are calculated at the subsystem or component level using limited metrics such as mean time to failure (MTTF) or mean time to data loss (MTTDL). This often means settling on simple and disconnected failure models (such as exponential failure rate) to achieve tractable and closed-form solutions. However, such models have been shown to be insufficient in assessing end-to-end storage system reliability and availability. We propose a generic simulation framework aimed at analyzing the reliability and availability of storage systems at scale, and investigating what-if scenarios. The framework is designed for an end-to-end storage system, accommodating the various components and subsystems, their interconnections, failure patterns and propagation, and performs dependency analysis to capture a wide range of failure cases. We evaluate the framework against a large-scale storage system that is in production and analyze its failure projections toward and beyond the end of its lifecycle. We also examine the potential operational impact by studying how different types of components affect the overall system reliability and availability, and present the preliminary results.
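
    As a toy illustration of why constant-rate (exponential) failure assumptions can mislead, the sketch below Monte Carlo-simulates the time to the first disk failure in a pool under exponential versus Weibull (wear-out) lifetime assumptions with the same mean; the rates, shape parameter, and pool size are invented, and this is far simpler than the end-to-end simulation framework described in the report.

```python
import numpy as np
from math import gamma

rng = np.random.default_rng(7)
n_disks, n_sims = 1_000, 5_000
mttf = 100_000.0                          # hours, same mean lifetime for both models

# Weibull wear-out lifetimes with the same mean as the exponential model.
shape = 3.0                               # shape > 1 => ageing (increasing hazard rate)
weib_scale = mttf / gamma(1 + 1 / shape)  # match the mean lifetime

t_exp = rng.exponential(mttf, size=(n_sims, n_disks)).min(axis=1)
t_wei = (weib_scale * rng.weibull(shape, size=(n_sims, n_disks))).min(axis=1)

print("time to first failure in a 1000-disk pool (hours)")
print("  mean, exponential:", round(t_exp.mean(), 1))
print("  mean, Weibull    :", round(t_wei.mean(), 1))
print("  5% quantile (early-failure risk), exponential:", round(np.quantile(t_exp, 0.05), 1))
print("  5% quantile (early-failure risk), Weibull    :", round(np.quantile(t_wei, 0.05), 1))
```

    Even with identical component-level MTTF, the pool-level first-failure statistics differ substantially, which is the report's point about simple disconnected metrics being insufficient for end-to-end assessment.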

  19. Dynamic Modeling of Cell-Free Biochemical Networks Using Effective Kinetic Models

    Directory of Open Access Journals (Sweden)

    Joseph A. Wayman

    2015-03-01

    Full Text Available Cell-free systems offer many advantages for the study, manipulation and modeling of metabolism compared to in vivo processes. Many of the challenges confronting genome-scale kinetic modeling can potentially be overcome in a cell-free system. For example, there is no complex transcriptional regulation to consider, transient metabolic measurements are easier to obtain, and we no longer have to consider cell growth. Thus, cell-free operation holds several significant advantages for model development, identification and validation. Theoretically, genome-scale cell-free kinetic models may be possible for industrially important organisms, such as E. coli, if a simple, tractable framework for integrating allosteric regulation with enzyme kinetics can be formulated. Toward this unmet need, we present an effective biochemical network modeling framework for building dynamic cell-free metabolic models. The key innovation of our approach is the integration of simple effective rules encoding complex allosteric regulation with traditional kinetic pathway modeling. We tested our approach by modeling the time evolution of several hypothetical cell-free metabolic networks. We found that simple effective rules, when integrated with traditional enzyme kinetic expressions, captured complex allosteric patterns such as ultrasensitivity or non-competitive inhibition in the absence of mechanistic information. Second, when integrated into network models, these rules captured classic regulatory patterns such as product-induced feedback inhibition. Lastly, we showed, at least for the network architectures considered here, that we could simultaneously estimate kinetic parameters and allosteric connectivity from synthetic data starting from an unbiased collection of possible allosteric structures using particle swarm optimization. However, when starting with an initial population that was heavily enriched with incorrect structures, our particle swarm approach could converge
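
    A minimal sketch of the modeling idea of combining a classical kinetic rate law with a simple "effective" allosteric control rule; the Hill-type transfer function and all parameter values below are illustrative assumptions, not the specific rules or parameters from the paper.

```python
import numpy as np

def michaelis_menten(S, vmax=1.0, km=0.5):
    """Classical saturation kinetics for substrate concentration S."""
    return vmax * S / (km + S)

def allosteric_control(I, k_i=0.2, n=4.0):
    """Effective rule: a Hill-type transfer function in [0, 1] that throttles the
    rate as the allosteric inhibitor concentration I rises (n > 1 gives the
    switch-like, ultrasensitive behaviour mentioned in the abstract)."""
    return 1.0 / (1.0 + (I / k_i) ** n)

def effective_rate(S, I):
    """Kinetic rate law modulated by the effective allosteric rule."""
    return michaelis_menten(S) * allosteric_control(I)

for I in (0.0, 0.1, 0.2, 0.4):
    print(f"inhibitor = {I:.1f}  rate = {effective_rate(S=2.0, I=I):.3f}")
```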

  20. Computational Nonlinear Morphology with Emphasis on Semitic Languages. Studies in Natural Language Processing.

    Science.gov (United States)

    Kiraz, George Anton

    This book presents a tractable computational model that can cope with complex morphological operations, especially in Semitic languages, and less complex morphological systems present in Western languages. It outlines a new generalized regular rewrite rule system that uses multiple finite-state automata to cater to root-and-pattern morphology,…

  1. Organoid Models of Human and Mouse Ductal Pancreatic Cancer

    Science.gov (United States)

    Boj, Sylvia F.; Hwang, Chang-Il; Baker, Lindsey A.; Chio, Iok In Christine; Engle, Dannielle D.; Corbo, Vincenzo; Jager, Myrthe; Ponz-Sarvise, Mariano; Tiriac, Hervé; Spector, Mona S.; Gracanin, Ana; Oni, Tobiloba; Yu, Kenneth H.; van Boxtel, Ruben; Huch, Meritxell; Rivera, Keith D.; Wilson, John P.; Feigin, Michael E.; Öhlund, Daniel; Handly-Santana, Abram; Ardito-Abraham, Christine M.; Ludwig, Michael; Elyada, Ela; Alagesan, Brinda; Biffi, Giulia; Yordanov, Georgi N.; Delcuze, Bethany; Creighton, Brianna; Wright, Kevin; Park, Youngkyu; Morsink, Folkert H.M.; Molenaar, I. Quintus; Borel Rinkes, Inne H.; Cuppen, Edwin; Hao, Yuan; Jin, Ying; Nijman, Isaac J.; Iacobuzio-Donahue, Christine; Leach, Steven D.; Pappin, Darryl J.; Hammell, Molly; Klimstra, David S.; Basturk, Olca; Hruban, Ralph H.; Offerhaus, George Johan; Vries, Robert G.J.; Clevers, Hans; Tuveson, David A.

    2015-01-01

    Pancreatic cancer is one of the most lethal malignancies due to its late diagnosis and limited response to treatment. Tractable methods to identify and interrogate pathways involved in pancreatic tumorigenesis are urgently needed. We established organoid models from normal and neoplastic murine and human pancreas tissues. Pancreatic organoids can be rapidly generated from resected tumors and biopsies, survive cryopreservation and exhibit ductal- and disease stage-specific characteristics. Orthotopically transplanted neoplastic organoids recapitulate the full spectrum of tumor development by forming early-grade neoplasms that progress to locally invasive and metastatic carcinomas. Due to their ability to be genetically manipulated, organoids are a platform to probe genetic cooperation. Comprehensive transcriptional and proteomic analyses of murine pancreatic organoids revealed genes and pathways altered during disease progression. The confirmation of many of these protein changes in human tissues demonstrates that organoids are a facile model system to discover characteristics of this deadly malignancy. PMID:25557080

  2. Synergistic effects in threshold models on networks

    Science.gov (United States)

    Juul, Jonas S.; Porter, Mason A.

    2018-01-01

    Network structure can have a significant impact on the propagation of diseases, memes, and information on social networks. Different types of spreading processes (and other dynamical processes) are affected by network architecture in different ways, and it is important to develop tractable models of spreading processes on networks to explore such issues. In this paper, we incorporate the idea of synergy into a two-state ("active" or "passive") threshold model of social influence on networks. Our model's update rule is deterministic, and the influence of each meme-carrying (i.e., active) neighbor can—depending on a parameter—either be enhanced or inhibited by an amount that depends on the number of active neighbors of a node. Such a synergistic system models social behavior in which the willingness to adopt either accelerates or saturates in a way that depends on the number of neighbors who have adopted that behavior. We illustrate that our model's synergy parameter has a crucial effect on system dynamics, as it determines whether degree-k nodes are possible or impossible to activate. We simulate synergistic meme spreading on both random-graph models and networks constructed from empirical data. Using a heterogeneous mean-field approximation, which we derive under the assumption that a network is locally tree-like, we are able to determine which synergy-parameter values allow degree-k nodes to be activated for many networks and for a broad family of synergistic models.
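
    The sketch below is one plausible instantiation of the deterministic two-state threshold dynamics with synergy described above, in which the joint influence of the m active neighbours of a node is enhanced (beta > 0) or inhibited (beta < 0) by an amount that grows with m. The exact functional form used in the paper may differ, and the toy graph, threshold, and synergy value are invented for illustration.

```python
# Deterministic two-state ("passive"/"active") threshold dynamics with synergy.
adjacency = {0: [1, 2], 1: [0, 2, 3], 2: [0, 1, 3], 3: [1, 2, 4], 4: [3]}
threshold = 0.5          # fraction-of-neighbourhood activation threshold
beta = 0.25              # synergy parameter (>0 enhances, <0 inhibits joint influence)

def step(active):
    """Synchronous update; active nodes stay active (monotone spreading)."""
    new_active = set(active)
    for node, neigh in adjacency.items():
        if node in active:
            continue
        m, k = sum(n in active for n in neigh), len(neigh)
        # Each of the m active neighbours contributes 1 + beta*(m - 1) units of influence.
        influence = m * (1 + beta * max(m - 1, 0)) / k
        if influence >= threshold:
            new_active.add(node)
    return new_active

state = {0}                       # seed meme carrier
for t in range(5):
    print(f"t={t}: active={sorted(state)}")
    state = step(state)
```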

  3. Residence-time framework for modeling multicomponent reactive transport in stream hyporheic zones

    Science.gov (United States)

    Painter, S. L.; Coon, E. T.; Brooks, S. C.

    2017-12-01

    Process-based models for transport and transformation of nutrients and contaminants in streams require tractable representations of solute exchange between the stream channel and biogeochemically active hyporheic zones. Residence-time based formulations provide an alternative to detailed three-dimensional simulations and have had good success in representing hyporheic exchange of non-reacting solutes. We extend the residence-time formulation for hyporheic transport to accommodate general multicomponent reactive transport. To that end, the integro-differential form of previous residence time models is replaced by an equivalent formulation based on a one-dimensional advection dispersion equation along the channel coupled at each channel location to a one-dimensional transport model in Lagrangian travel-time form. With the channel discretized for numerical solution, the associated Lagrangian model becomes a subgrid model representing an ensemble of streamlines that are diverted into the hyporheic zone before returning to the channel. In contrast to the previous integro-differential forms of the residence-time based models, the hyporheic flowpaths have semi-explicit spatial representation (parameterized by travel time), thus allowing coupling to general biogeochemical models. The approach has been implemented as a stream-corridor subgrid model in the open-source integrated surface/subsurface modeling software ATS. We use bedform-driven flow coupled to a biogeochemical model with explicit microbial biomass dynamics as an example to show that the subgrid representation is able to represent redox zonation in sediments and resulting effects on metal biogeochemical dynamics in a tractable manner that can be scaled to reach scales.

  4. On fault propagation in deterioration of multi-component systems

    International Nuclear Information System (INIS)

    Liang, Zhenglin; Parlikad, Ajith Kumar; Srinivasan, Rengarajan; Rasmekomen, Nipat

    2017-01-01

    In extant literature, deterioration dependence among components can be modelled as inherent dependence and induced dependence. We find that the two types of dependence may co-exist and interact with each other in one multi-component system. We refer to this phenomenon as fault propagation. In practice, a fault induced by the malfunction of a non-critical component may further propagate through the dependence amongst critical components. Such a fault propagation scenario happens in industrial assets or systems (bridge decks and heat exchanging systems). In this paper, a multi-layered vector-valued continuous-time Markov chain is developed to capture the characteristics of fault propagation. To obtain mathematical tractability, we derive a partitioning rule to aggregate states with the same characteristics while keeping the overall aging behaviour of the multi-component system. Although the detailed information of components is masked by aggregated states, lumpability is attainable with the partitioning rule. This means that the aggregated process is stochastically equivalent to the original one and retains the Markov property. We apply this model to a heat exchanging system in an oil refinery company. The results show that fault propagation has a more significant impact on the system's lifetime compared with inherent dependence and induced dependence. - Highlights: • We develop a vector-valued continuous-time Markov chain to model the meta-dependent characteristic of fault propagation. • A partitioning rule is derived to reduce the state space and attain lumpability. • The model is applied to analysing the impact of fault propagation in a heat exchanging system.
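
    A small sketch of the state-aggregation step described above: given a generator matrix Q and a partition of the state space, the code checks ordinary lumpability (every state in a block has the same total rate into each other block) and builds the aggregated generator. The 4-state generator and the partition are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Toy CTMC generator (rows sum to zero) and a partition of its states.
Q = np.array([[-3.0,  1.0,  1.0,  1.0],
              [ 2.0, -4.0,  1.0,  1.0],
              [ 0.0,  0.0, -2.0,  2.0],
              [ 0.0,  0.0,  3.0, -3.0]])
partition = [[0, 1], [2, 3]]      # states grouped by shared ageing characteristics

def lump(Q, partition, tol=1e-9):
    """Check ordinary lumpability and return the aggregated generator."""
    m = len(partition)
    Q_agg = np.zeros((m, m))
    for a, block_a in enumerate(partition):
        for b, block_b in enumerate(partition):
            if a == b:
                continue
            # Total rate of each state in block A into block B; must be identical.
            rates = Q[np.ix_(block_a, block_b)].sum(axis=1)
            if np.ptp(rates) > tol:
                raise ValueError(f"partition not lumpable between blocks {a} and {b}")
            Q_agg[a, b] = rates[0]
        Q_agg[a, a] = -Q_agg[a].sum()     # diagonal keeps rows summing to zero
    return Q_agg

print(lump(Q, partition))
```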

  5. Software reliability growth models with normal failure time distributions

    International Nuclear Information System (INIS)

    Okamura, Hiroyuki; Dohi, Tadashi; Osaki, Shunji

    2013-01-01

    This paper proposes software reliability growth models (SRGM) in which the software failure time follows a normal distribution. The proposed model is mathematically tractable and fits software failure data sufficiently well. In particular, we consider the parameter estimation algorithm for the SRGM with normal distribution. The developed algorithm is based on an EM (expectation-maximization) algorithm and is quite simple to implement as a software application. A numerical experiment investigates the fitting ability of the SRGMs with normal distribution using 16 sets of failure time data collected in real software projects.

  6. Modeling intelligent adversaries for terrorism risk assessment: some necessary conditions for adversary models.

    Science.gov (United States)

    Guikema, Seth

    2012-07-01

    Intelligent adversary modeling has become increasingly important for risk analysis, and a number of different approaches have been proposed for incorporating intelligent adversaries in risk analysis models. However, these approaches are based on a range of often-implicit assumptions about the desirable properties of intelligent adversary models. This "Perspective" paper aims to further risk analysis for situations involving intelligent adversaries by fostering a discussion of the desirable properties for these models. A set of four basic necessary conditions for intelligent adversary models is proposed and discussed. These are: (1) behavioral accuracy to the degree possible, (2) computational tractability to support decision making, (3) explicit consideration of uncertainty, and (4) ability to gain confidence in the model. It is hoped that these suggested necessary conditions foster discussion about the goals and assumptions underlying intelligent adversary modeling in risk analysis. © 2011 Society for Risk Analysis.

  7. Effector diversification within compartments of the Leptosphaeria maculans genome affected by repeat induced point mutations

    NARCIS (Netherlands)

    Rouxel, T.; Grandaubert, J.; Hane, J.K.; Hoede, C.; Wouw, A.; Couloux, A.; Dominguez, V.; Anthouard, V.; Bally, P.; Bourras, S.; Cozijnsen, A.J.; Ciuffetti, L.M.; Degrave, A.; Dilmaghani, A.; Duret, L.; Fudal, L.; Goodwin, S.B.; Gout, L.; Glaser, N.; Linglin, J.; Kema, G.H.J.; Lapalu, N.; Lawrence, C.B.; May, K.; Meyer, M.; Ollivier, B.; Poulain, J.; Schoch, C.L.; Simon, A.; Spatafora, J.W.; Stachowiak, A.; Turgeon, B.G.; Tyler, B.M.; Vincent, D.; Weissenbach, J.; Amselem, J.; Quesneville, H.; Oliver, R.P.; Wincker, P.; Balesdent, M.H.; Howlett, B.J.

    2011-01-01

    Fungi are of primary ecological, biotechnological and economic importance. Many fundamental biological processes that are shared by animals and fungi are studied in fungi due to their experimental tractability. Many fungi are pathogens or mutualists and are model systems to analyse effector genes

  8. Computation of a Reference Model for Robust Fault Detection and Isolation Residual Generation

    Directory of Open Access Journals (Sweden)

    Emmanuel Mazars

    2008-01-01

    Full Text Available This paper considers matrix inequality procedures to address the robust fault detection and isolation (FDI) problem for linear time-invariant systems subject to disturbances, faults, and polytopic or norm-bounded uncertainties. We propose a design procedure for an FDI filter that aims to minimize a weighted combination of the sensitivity of the residual signal to disturbances and modeling errors, and the deviation of the fault-to-residual dynamics from a fault-to-residual reference model, using the ℋ∞-norm as a measure. A key step in our procedure is the design of an optimal fault reference model. We show that the optimal design requires the solution of a quadratic matrix inequality (QMI) optimization problem. Since the solution of the optimal problem is intractable, we propose a linearization technique to derive a numerically tractable suboptimal design procedure that requires the solution of a linear matrix inequality (LMI) optimization. A jet engine example is employed to demonstrate the effectiveness of the proposed approach.

  9. Evolution and Function of Thioester-Containing Proteins and the Complement System in the Innate Immune Response

    Directory of Open Access Journals (Sweden)

    Upasana Shokal

    2017-06-01

    Full Text Available The innate immune response is evolutionary conserved among organisms. The complement system forms an important and efficient immune defense mechanism. It consists of plasma proteins that participate in microbial detection, which ultimately results in the production of various molecules with antimicrobial activity. Thioester-containing proteins (TEPs are a superfamily of secreted effector proteins. In vertebrates, certain TEPs act in the innate immune response by promoting recruitment of immune cells, phagocytosis, and direct lysis of microbial invaders. Insects are excellent models for dissecting the molecular basis of innate immune recognition and response to a wide range of microbial infections. Impressive progress in recent years has generated crucial information on the role of TEPs in the antibacterial and antiparasite response of the tractable model insect Drosophila melanogaster and the mosquito malaria vector Anopheles gambiae. This knowledge is critical for better understanding the evolution of TEPs and their involvement in the regulation of the host innate immune system.

  10. Simplified Eigen-structure decomposition solver for the simulation of two-phase flow systems

    International Nuclear Information System (INIS)

    Kumbaro, Anela

    2012-01-01

    This paper discusses the development of a new solver for a system of first-order non-linear differential equations that model the dynamics of compressible two-phase flow. The solver presents a lower-complexity alternative to Roe-type solvers because it only makes use of partial Eigen-structure information while maintaining its accuracy: the outcome is hence a good complexity-tractability trade-off to consider as relevant in a large number of situations in the scope of two-phase flow numerical simulation. A number of numerical and physical benchmarks are presented to assess the solver. Comparison between the computational results from the simplified Eigen-structure decomposition solver and the conventional Roe-type solver gives insight into the issues of accuracy, robustness and efficiency. (authors)

  12. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    2013-01-01

    to be priced consistently, while allowing for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for volatility derivatives, such as VIX options, as well as efficient numerical methods for pricing of European options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options on S&P 500 across...

  13. A General Nonlinear Fluid Model for Reacting Plasma-Neutral Mixtures

    Energy Technology Data Exchange (ETDEWEB)

    Meier, E T; Shumlak, U

    2012-04-06

    A generalized, computationally tractable fluid model for capturing the effects of neutral particles in plasmas is derived. The model derivation begins with Boltzmann equations for singly charged ions, electrons, and a single neutral species. Electron-impact ionization, radiative recombination, and resonant charge exchange reactions are included. Moments of the reaction collision terms are detailed. Moments of the Boltzmann equations for electron, ion, and neutral species are combined to yield a two-component plasma-neutral fluid model. Separate density, momentum, and energy equations, each including reaction transfer terms, are produced for the plasma and neutral equations. The required closures for the plasma-neutral model are discussed.
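
    For orientation, the continuity-equation source terms produced by the listed reactions take the familiar generic form shown below (n_e, n_i, n_n are electron, ion, and neutral densities, u the species bulk velocities, and the rate coefficients are written as reaction-averaged cross sections). This is the standard bookkeeping for such models, not a transcription of the paper's full moment hierarchy.

```latex
\frac{\partial n_n}{\partial t} + \nabla\cdot\left(n_n \mathbf{u}_n\right)
  = n_e n_i \langle \sigma v \rangle_{\mathrm{rec}}
  - n_e n_n \langle \sigma v \rangle_{\mathrm{ion}},
\qquad
\frac{\partial n_i}{\partial t} + \nabla\cdot\left(n_i \mathbf{u}_i\right)
  = n_e n_n \langle \sigma v \rangle_{\mathrm{ion}}
  - n_e n_i \langle \sigma v \rangle_{\mathrm{rec}}
```

    Resonant charge exchange conserves the particle number of each species and therefore contributes only to the momentum and energy transfer terms, not to the continuity equations.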

  14. Fitting and interpreting continuous-time latent Markov models for panel data.

    Science.gov (United States)

    Lange, Jane M; Minin, Vladimir N

    2013-11-20

    Multistate models characterize disease processes within an individual. Clinical studies often observe the disease status of individuals at discrete time points, making exact times of transitions between disease states unknown. Such panel data pose considerable modeling challenges. Assuming the disease process progresses accordingly, a standard continuous-time Markov chain (CTMC) yields tractable likelihoods, but the assumption of exponential sojourn time distributions is typically unrealistic. More flexible semi-Markov models permit generic sojourn distributions yet yield intractable likelihoods for panel data in the presence of reversible transitions. One attractive alternative is to assume that the disease process is characterized by an underlying latent CTMC, with multiple latent states mapping to each disease state. These models retain analytic tractability due to the CTMC framework but allow for flexible, duration-dependent disease state sojourn distributions. We have developed a robust and efficient expectation-maximization algorithm in this context. Our complete data state space consists of the observed data and the underlying latent trajectory, yielding computationally efficient expectation and maximization steps. Our algorithm outperforms alternative methods measured in terms of time to convergence and robustness. We also examine the frequentist performance of latent CTMC point and interval estimates of disease process functionals based on simulated data. The performance of estimates depends on time, functional, and data-generating scenario. Finally, we illustrate the interpretive power of latent CTMC models for describing disease processes on a dataset of lung transplant patients. We hope our work will encourage wider use of these models in the biomedical setting. Copyright © 2013 John Wiley & Sons, Ltd.

  15. Impulsive response of an automatic transmission system with multiple clearances: Formulation, simulation and experiment

    Science.gov (United States)

    Crowther, Ashley R.; Singh, Rajendra; Zhang, Nong; Chapman, Chris

    2007-10-01

    Impulsive responses in geared systems with multiple clearances are studied when the mean torque excitation and system load change abruptly, with application to a vehicle driveline with an automatic transmission. First, torsional lumped-mass models of the planetary and differential gear sets are formulated using matrix elements. The model is then reduced to address tractable nonlinear problems while successfully retaining the main modes of interest. Second, numerical simulations of the nonlinear model are performed for transient conditions, and a typical driving situation that induces impulsive behaviour is simulated. However, initial conditions as well as excitation and load profiles have to be carefully defined before the model can be numerically solved. It is shown that the impacts within the planetary or differential gears may occur under combinations of engine, braking and vehicle load transients. Our analysis shows that the shaping of the engine transient by the torque converter before reaching the clearance locations is more critical. Third, a free vibration experiment is developed for an analogous driveline with multiple clearances, and three experiments that excite different response regimes have been carried out. Good correlations validate the proposed methodology.

  16. Why the dimension matters in ecological models? ¿Por qué importa la dimensión en modelos ecológicos?

    Directory of Open Access Journals (Sweden)

    Rodrigo Ramos-Jiliberto

    2004-12-01

    Full Text Available In this work we discuss the ecological and mathematical significance of the system's dimension in continuous-time population dynamics models. We show how the system's dimension reflects the ecological assumptions and affects both the spectrum of dynamic output and the mathematical tractability of the models. We stress that the model dimension is not always the same as the number of state variables, and we also present conditions under which the system's dimension is altered.

  17. Substitution models for overlapping technologies - an application to fast reactor deployment

    International Nuclear Information System (INIS)

    Lehtinen, R.; Silvennoinen, P.; Vira, J.

    1982-01-01

    In this paper market penetration models are discussed in the context of interacting technologies. An increased confidence credit is proposed for a technology that can draw on other overlapping technologies. The model is also reduced to a numerically tractable form. As an application, scenarios of fast reactor deployment are derived under different assumptions on the uranium and fast reactor investment costs and by varying model parameters for the penetration of fusion and solar technologies. The market share of fast reactors in electricity generation is expected to lie between zero and 40 per cent in 2050 depending on the market parameters. (orig.) [de

  18. Protein engineering of the chemokine CCL20 prevents psoriasiform dermatitis in an IL-23-dependent murine model

    DEFF Research Database (Denmark)

    Getschman, A E; Imai, Y; Larsen, O

    2017-01-01

    signaling. When given in an IL-23-dependent mouse model for psoriasis, CCL20 S64C prevented psoriatic inflammation and the up-regulation of IL-17A and IL-22. Our results validate CCR6 as a tractable therapeutic target for psoriasis and demonstrate the value of CCL20 S64C as a lead compound....

  19. Annual Rainfall Forecasting by Using Mamdani Fuzzy Inference System

    Science.gov (United States)

    Fallah-Ghalhary, G.-A.; Habibi Nokhandan, M.; Mousavi Baygi, M.

    2009-04-01

    Long-term rainfall prediction is very important to countries thriving on an agro-based economy. In general, climate and rainfall are highly non-linear phenomena in nature, giving rise to what is known as the "butterfly effect". The number of parameters required to predict rainfall is enormous, even for a short period. Soft computing is an innovative approach to constructing computationally intelligent systems that are supposed to possess human-like expertise within a specific domain, adapt themselves and learn to do better in changing environments, and explain how they make decisions. Unlike conventional artificial intelligence techniques, the guiding principle of soft computing is to exploit tolerance for imprecision, uncertainty, robustness, and partial truth to achieve tractability and better rapport with reality. In this paper, 33 years of rainfall data from Khorasan state, the northeastern part of Iran situated at latitude-longitude pairs (31°-38°N, 74°-80°E), are analyzed. This research attempted to train Fuzzy Inference System (FIS) based prediction models with 33 years of rainfall data. For performance evaluation, the model-predicted outputs were compared with the actual rainfall data. Simulation results reveal that soft computing techniques are promising and efficient. The test results of the FIS model showed an RMSE of 52 millimetres.
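
    To make the Mamdani inference mechanism concrete, here is a self-contained miniature FIS written from scratch: triangular membership functions, min implication, max aggregation, and centroid defuzzification. The single input, the membership functions, the three rules, and the rainfall ranges are invented for illustration and bear no relation to the record's actual rule base fitted on the Khorasan data.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-12),
                                 (c - x) / (c - b + 1e-12)), 0.0)

rain_axis = np.linspace(0.0, 600.0, 601)          # output universe (mm of annual rainfall)

# Output fuzzy sets.
rain_low  = tri(rain_axis,   0.0, 100.0, 250.0)
rain_med  = tri(rain_axis, 150.0, 300.0, 450.0)
rain_high = tri(rain_axis, 350.0, 500.0, 600.0)

def predict_rainfall(x):
    # Fuzzification of a normalized climate predictor x in [0, 1].
    mu_low  = tri(x, -0.01, 0.0, 0.5)
    mu_med  = tri(x,  0.0,  0.5, 1.0)
    mu_high = tri(x,  0.5,  1.0, 1.01)

    # Mamdani rules (min implication, max aggregation):
    #   R1: IF predictor is low  THEN rainfall is low
    #   R2: IF predictor is med  THEN rainfall is medium
    #   R3: IF predictor is high THEN rainfall is high
    aggregated = np.maximum.reduce([np.minimum(mu_low,  rain_low),
                                    np.minimum(mu_med,  rain_med),
                                    np.minimum(mu_high, rain_high)])

    # Centroid defuzzification.
    return float((rain_axis * aggregated).sum() / (aggregated.sum() + 1e-12))

for x in (0.1, 0.5, 0.9):
    print(f"predictor = {x:.1f} -> forecast rainfall ~ {predict_rainfall(x):.0f} mm")
```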

  20. Quantum simulation of transverse Ising models with Rydberg atoms

    Science.gov (United States)

    Schauss, Peter

    2018-04-01

    Quantum Ising models are canonical models for the study of quantum phase transitions (Sachdev 1999 Quantum Phase Transitions (Cambridge: Cambridge University Press)) and are the underlying concept for many analogue quantum computing and quantum annealing ideas (Tanaka et al Quantum Spin Glasses, Annealing and Computation (Cambridge: Cambridge University Press)). Here we focus on the implementation of finite-range interacting Ising spin models, which are barely tractable numerically. Recent experiments with cold atoms have reached the interaction-dominated regime in quantum Ising magnets via optical coupling of trapped neutral atoms to Rydberg states. This approach allows for the tunability of all relevant terms in an Ising spin Hamiltonian with 1/r^6 interactions in transverse and longitudinal fields. This review summarizes the recent progress of these implementations in Rydberg lattices with site-resolved detection. Strong correlations in quantum Ising models have been observed in several experiments, starting from a single excitation in the superatom regime up to the point of crystallization. The rapid progress in this field makes spin systems based on Rydberg atoms a promising platform for quantum simulation because of the unmatched flexibility and strength of interactions combined with high control and good isolation from the environment.
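
    The Hamiltonian realized in these Rydberg-lattice experiments is usually written in the standard form below (Ω is the Rabi frequency of the optical coupling, Δ the detuning, and n_i the Rydberg projector on site i), which maps onto an Ising model in transverse and longitudinal fields with 1/r^6 interactions; this is the generic textbook form rather than notation taken from the review itself.

```latex
H = \frac{\hbar\Omega}{2} \sum_i \hat\sigma^x_i
  - \hbar\Delta \sum_i \hat n_i
  + \sum_{i<j} \frac{C_6}{\lvert \mathbf{r}_i - \mathbf{r}_j \rvert^{6}}\, \hat n_i \hat n_j,
\qquad
\hat n_i = \tfrac{1}{2}\bigl(\hat\sigma^z_i + 1\bigr)
```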

  1. Yield curve event tree construction for multi stage stochastic programming models

    DEFF Research Database (Denmark)

    Rasmussen, Kourosh Marjani; Poulsen, Rolf

    Dynamic stochastic programming (DSP) provides an intuitive framework for modelling of financial portfolio choice problems where market frictions are present and dynamic re-balancing has a significant effect on initial decisions. The application of these models in practice, however, is limited... Indeed defining a universal and tractable framework for fully "appropriate" event trees is in our opinion an impossible task. A problem specific approach to designing such event trees is the way ahead. In this paper we propose a number of desirable properties which should be present in an event tree...

  2. Econophysics of agent-based models

    CERN Document Server

    Aoyama, Hideaki; Chakrabarti, Bikas; Chakraborti, Anirban; Ghosh, Asim

    2014-01-01

    The primary goal of this book is to present the research findings and conclusions of physicists, economists, mathematicians and financial engineers working in the field of "Econophysics" who have undertaken agent-based modelling, comparison with empirical studies and related investigations. Most standard economic models assume the existence of the representative agent, who is “perfectly rational” and applies the utility maximization principle when taking action. One reason for this is the desire to keep models mathematically tractable: no tools are available to economists for solving non-linear models of heterogeneous adaptive agents without explicit optimization. In contrast, multi-agent models, which originated from statistical physics considerations, allow us to go beyond the prototype theories of traditional economics involving the representative agent. This book is based on the Econophys-Kolkata VII Workshop, at which many such modelling efforts were presented. In the book, leading researchers in the...

  3. The Validation of a Beta-Binomial Model for Overdispersed Binomial Data.

    Science.gov (United States)

    Kim, Jongphil; Lee, Ji-Hyun

    2017-01-01

    The beta-binomial model has been widely used as an analytically tractable alternative that captures the overdispersion of an intra-correlated binomial random variable, X. However, model validation for X has rarely been investigated. As a beta-binomial mass function takes on a few different shapes, the model validation is examined for each of the classified shapes in this paper. Further, the mean square error (MSE) is illustrated for each shape, both for the maximum likelihood estimator (MLE) based on the beta-binomial model and for the method of moments estimator (MME), in order to gauge when and how much the MLE is biased.
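
    A minimal Python sketch of the two estimators being compared, under simplifying assumptions of my own (all observations share the same cluster size n; the method of moments matches the mean n*pi and the variance n*pi*(1-pi)*[1+(n-1)*rho], with intraclass correlation rho = 1/(alpha+beta+1)); it is not the authors' code.

```python
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def betabinom_logpmf(k, n, a, b):
    # log P(X=k) for a beta-binomial(n, a, b) variable
    return (gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
            + betaln(k + a, n - k + b) - betaln(a, b))

def mme(x, n):
    # Method of moments: mean = n*pi, var = n*pi*(1-pi)*(1+(n-1)*rho),
    # with rho = 1/(a+b+1) the intraclass correlation.
    pi = x.mean() / n
    rho = (x.var(ddof=1) / (n * pi * (1 - pi)) - 1) / (n - 1)
    s = 1.0 / rho - 1.0          # s = a + b
    return pi * s, (1 - pi) * s  # (a, b)

def mle(x, n):
    # Numerical MLE; optimize over log(a), log(b) to keep them positive.
    nll = lambda t: -betabinom_logpmf(x, n, np.exp(t[0]), np.exp(t[1])).sum()
    res = minimize(nll, x0=np.log(mme(x, n)), method="Nelder-Mead")
    return tuple(np.exp(res.x))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, a, b = 20, 2.0, 5.0
    p = rng.beta(a, b, size=500)   # cluster-level success probabilities
    x = rng.binomial(n, p)         # overdispersed binomial counts
    print("MME:", mme(x, n), "MLE:", mle(x, n))
```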

  4. Overcoming the sign problem at finite temperature: Quantum tensor network for the orbital eg model on an infinite square lattice

    Science.gov (United States)

    Czarnik, Piotr; Dziarmaga, Jacek; Oleś, Andrzej M.

    2017-07-01

    The variational tensor network renormalization approach to two-dimensional (2D) quantum systems at finite temperature is applied to a model suffering the notorious quantum Monte Carlo sign problem—the orbital eg model with spatially highly anisotropic orbital interactions. Coarse graining of the tensor network along the inverse temperature β yields a numerically tractable 2D tensor network representing the Gibbs state. Its bond dimension D —limiting the amount of entanglement—is a natural refinement parameter. Increasing D we obtain a converged order parameter and its linear susceptibility close to the critical point. They confirm the existence of finite order parameter below the critical temperature Tc, provide a numerically exact estimate of Tc, and give the critical exponents within 1 % of the 2D Ising universality class.

  5. Different Parameters Support Generalization and Discrimination Learning in "Drosophila" at the Flight Simulator

    Science.gov (United States)

    Brembs, Bjorn; de Ibarra, Natalie Hempel

    2006-01-01

    We have used a genetically tractable model system, the fruit fly "Drosophila melanogaster" to study the interdependence between sensory processing and associative processing on learning performance. We investigated the influence of variations in the physical and predictive properties of color stimuli in several different operant-conditioning…

  6. Expectation Consistent Approximate Inference

    DEFF Research Database (Denmark)

    Opper, Manfred; Winther, Ole

    2005-01-01

    We propose a novel framework for approximations to intractable probabilistic models which is based on a free energy formulation. The approximation can be understood from replacing an average over the original intractable distribution with a tractable one. It requires two tractable probability dis...

  7. Analytical model spectrum for electrostatic turbulence in tokamaks

    International Nuclear Information System (INIS)

    Fiedler-Ferrari, N.; Misguich, J.H.

    1990-04-01

    In this work we present an analytical model spectrum, for three-dimensional electrostatic turbulence (homogeneous, stationary and locally isotropic in the plane perpendicular to the magnetic field), constructed by using experimental results from TFR and TEXT Tokamaks, and satisfying basic symmetry and parity conditions. The proposed spectrum seems to be tractable for explicit analytical calculations of transport processes, and consistent with experimental data. Additional experimental measurements in the bulk plasma remain however necessary in order to determine some unknown spectral properties of parallel propagation

  8. Systemic resilience model

    International Nuclear Information System (INIS)

    Lundberg, Jonas; Johansson, Björn JE

    2015-01-01

    It has been realized that resilience as a concept involves several contradictory definitions, for instance resilience as agile adjustment and resilience as robust resistance to adverse situations. Our analysis of resilience concepts and models suggests that, beyond simplistic definitions, it is possible to draw up a systemic resilience model (SyRes) that maintains these opposing characteristics without contradiction. We outline six functions in a systemic model, drawing primarily on resilience engineering and disaster response: anticipation, monitoring, response, recovery, learning, and self-monitoring. The model consists of four areas: Event-based constraints, Functional Dependencies, Adaptive Capacity and Strategy. The paper describes dependencies between constraints, functions and strategies. We argue that models such as SyRes should be useful both for envisioning new resilience methods and metrics and for engineering and evaluating resilient systems. - Highlights: • The SyRes model resolves contradictions between previous resilience definitions. • SyRes is a core model for envisioning and evaluating resilience metrics and models. • SyRes describes six functions in a systemic model. • They are anticipation, monitoring, response, recovery, learning, self-monitoring. • The model describes dependencies between constraints, functions and strategies

  9. Cardiac Electromechanical Models: From Cell to Organ

    Directory of Open Access Journals (Sweden)

    Natalia A Trayanova

    2011-08-01

    Full Text Available The heart is a multiphysics and multiscale system that has driven the development of the most sophisticated mathematical models at the frontiers of computational physiology and medicine. This review focuses on electromechanical (EM) models of the heart from the molecular level of myofilaments to anatomical models of the organ. Because of the coupling in terms of function and emergent behaviors at each level of biological hierarchy, separation of behaviors at a given scale is difficult. Here, a separation is drawn at the cell level so that the first half addresses subcellular/single cell models and the second half addresses organ models. At the subcellular level, myofilament models represent actin-myosin interaction and Ca-based activation. Myofilament models and their refinements represent an overview of the development in the field. The discussion of specific models emphasizes the roles of cooperative mechanisms and sarcomere length dependence of contraction force, considered the cellular basis of the Frank-Starling law. A model of electrophysiology and Ca handling can be coupled to a myofilament model to produce an EM cell model, and representative examples are summarized to provide an overview of the progression of the field. The second half of the review covers organ-level models that require solution of the electrical component as a reaction-diffusion system and the mechanical component, in which active tension generated by the myocytes produces deformation of the organ as described by the equations of continuum mechanics. As outlined in the review, different organ-level models have chosen to use different ionic and myofilament models depending on the specific application; this choice has been largely dictated by compromises between model complexity and computational tractability. The review also addresses application areas of EM models such as cardiac resynchronization therapy and the role of mechano-electric coupling in arrhythmias and ...

  10. On Modelling Long Term Stock Returns with Ergodic Diffusion Processes: Arbitrage and Arbitrage-Free Specifications

    Directory of Open Access Journals (Sweden)

    Bernard Wong

    2009-01-01

    martingale component is based on an ergodic diffusion with a specified stationary distribution. These models are particularly useful for long horizon asset-liability management as they allow the modelling of long term stock returns with heavy tail ergodic diffusions, with tractable, time homogeneous dynamics, and which moreover admit a complete financial market, leading to unique pricing and hedging strategies. Unfortunately the standard specifications of these models in literature admit arbitrage opportunities. We investigate in detail the features of the existing model specifications which create these arbitrage opportunities and consequently construct a modification that is arbitrage free.

  11. Analytical modeling of the structureborne noise path on a small twin-engine aircraft

    Science.gov (United States)

    Cole, J. E., III; Stokes, A. Westagard; Garrelick, J. M.; Martini, K. F.

    1988-01-01

    The structureborne noise path of a six passenger twin-engine aircraft is analyzed. Models of the wing and fuselage structures as well as the interior acoustic space of the cabin are developed and used to evaluate sensitivity to structural and acoustic parameters. Different modeling approaches are used to examine aspects of the structureborne path. These approaches are guided by a number of considerations including the geometry of the structures, the frequency range of interest, and the tractability of the computations. Results of these approaches are compared with experimental data.

  12. RSMASS system model development

    International Nuclear Information System (INIS)

    Marshall, A.C.; Gallup, D.R.

    1998-01-01

    RSMASS system mass models have been used for more than a decade to make rapid estimates of space reactor power system masses. This paper reviews the evolution of the RSMASS models and summarizes present capabilities. RSMASS has evolved from a simple model used to make rough estimates of space reactor and shield masses to a versatile space reactor power system model. RSMASS uses unique reactor and shield models that permit rapid mass optimization calculations for a variety of space reactor power and propulsion systems. The RSMASS-D upgrade of the original model includes algorithms for the balance of the power system, a number of reactor and shield modeling improvements, and an automatic mass optimization scheme. The RSMASS-D suite of codes cover a very broad range of reactor and power conversion system options as well as propulsion and bimodal reactor systems. Reactor choices include in-core and ex-core thermionic reactors, liquid metal cooled reactors, particle bed reactors, and prismatic configuration reactors. Power conversion options include thermoelectric, thermionic, Stirling, Brayton, and Rankine approaches. Program output includes all major component masses and dimensions, efficiencies, and a description of the design parameters for a mass optimized system. In the past, RSMASS has been used as an aid to identify and select promising concepts for space power applications. The RSMASS modeling approach has been demonstrated to be a valuable tool for guiding optimization of the power system design; consequently, the model is useful during system design and development as well as during the selection process. An improved in-core thermionic reactor system model RSMASS-T is now under development. The current development of the RSMASS-T code represents the next evolutionary stage of the RSMASS models. RSMASS-T includes many modeling improvements and is planned to be more user-friendly. RSMASS-T will be released as a fully documented, certified code at the end of

  13. Capturing the complex behavior of hydraulic fracture stimulation through multi-physics modeling, field-based constraints, and model reduction

    Science.gov (United States)

    Johnson, S.; Chiaramonte, L.; Cruz, L.; Izadi, G.

    2016-12-01

    Advances in the accuracy and fidelity of numerical methods have significantly improved our understanding of coupled processes in unconventional reservoirs. However, such multi-physics models are typically characterized by many parameters and require exceptional computational resources to evaluate systems of practical importance, making these models difficult to use for field analyses or uncertainty quantification. One approach to remove these limitations is through targeted complexity reduction and field data constrained parameterization. For the latter, a variety of field data streams may be available to engineers and asset teams, including micro-seismicity from proximate sites, well logs, and 3D surveys, which can constrain possible states of the reservoir as well as the distributions of parameters. We describe one such workflow, using the Argos multi-physics code and requisite geomechanical analysis to parameterize the underlying models. We illustrate with a field study involving a constraint analysis of various field data and details of the numerical optimizations and model reduction to demonstrate how complex models can be applied to operation design in hydraulic fracturing operations, including selection of controllable completion and fluid injection design properties. The implication of this work is that numerical methods are mature and computationally tractable enough to enable complex engineering analysis and deterministic field estimates and to advance research into stochastic analyses for uncertainty quantification and value of information applications.

  14. Discourse and tractable morality

    NARCIS (Netherlands)

    de Graaf, G.; Lütge, C.

    2013-01-01

    When managerial decisions are examined, somehow the business context must be included in the analysis. In this chapter, causalities that transcend individuals are promoted as unit of analysis in empirical moral research, namely, discourse. Studying managerial decisions in their discursive context is

  15. Population genetics models of local ancestry.

    Science.gov (United States)

    Gravel, Simon

    2012-06-01

    Migrations have played an important role in shaping the genetic diversity of human populations. Understanding genomic data thus requires careful modeling of historical gene flow. Here we consider the effect of relatively recent population structure and gene flow and interpret genomes of individuals that have ancestry from multiple source populations as mosaics of segments originating from each population. This article describes general and tractable models for local ancestry patterns with a focus on the length distribution of continuous ancestry tracts and the variance in total ancestry proportions among individuals. The models offer improved agreement with Wright-Fisher simulation data when compared to the state-of-the art and can be used to infer time-dependent migration rates from multiple populations. Considering HapMap African-American (ASW) data, we find that a model with two distinct phases of "European" gene flow significantly improves the modeling of both tract lengths and ancestry variances.
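
    As a toy illustration (not the article's model), the Python sketch below uses a common single-pulse approximation: recombination breakpoints accumulate along the chromosome roughly as a Poisson process with rate T per Morgan (T generations since admixture), and each segment carries migrant ancestry with probability m, so migrant tract lengths come out approximately exponential with mean 1/(T(1-m)) Morgans. Parameter values are arbitrary.

```python
import numpy as np

def simulate_tracts(T=8, m=0.2, length_morgans=35.0, rng=np.random.default_rng(1)):
    """Crude single-pulse approximation: breakpoints ~ Poisson(rate=T) per Morgan,
    each inter-breakpoint segment is 'migrant' ancestry with probability m."""
    n_breaks = rng.poisson(T * length_morgans)
    breaks = np.sort(rng.uniform(0, length_morgans, n_breaks))
    edges = np.concatenate(([0.0], breaks, [length_morgans]))
    seg_len = np.diff(edges)
    is_migrant = rng.random(seg_len.size) < m
    # merge adjacent segments of the same ancestry into tracts
    tracts, cur_len, cur_state = [], seg_len[0], is_migrant[0]
    for L, s in zip(seg_len[1:], is_migrant[1:]):
        if s == cur_state:
            cur_len += L
        else:
            tracts.append((cur_state, cur_len))
            cur_len, cur_state = L, s
    tracts.append((cur_state, cur_len))
    return [L for s, L in tracts if s]  # migrant-ancestry tract lengths (Morgans)

lengths = np.concatenate([simulate_tracts() for _ in range(200)])
print(f"mean migrant tract length ~ {lengths.mean():.3f} Morgans "
      f"(naive expectation ~ 1/(T*(1-m)) = {1/(8*0.8):.3f})")
```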

  16. Modelling the smart farm

    Directory of Open Access Journals (Sweden)

    Michael J. O'Grady

    2017-09-01

    Full Text Available Smart farming envisages the harnessing of Information and Communication Technologies as an enabler of more efficient, productive, and profitable farming enterprises. Such technologies do not suffice on their own; rather they must be judiciously combined to deliver meaningful information in near real-time. Decision-support tools incorporating models of disparate farming activities, either on their own or in combination with other models, offer one popular approach; exemplars include GPFARM, APSIM, GRAZPLAN amongst many others. Such models tend to be generic in nature and their adoption by individual farmers is minimal. Smart technologies offer an opportunity to remedy this situation; farm-specific models that can reflect near real-time events become tractable using such technologies. Research on the development, and application of farm-specific models is at a very early stage. This paper thus presents an overview of models within the farming enterprise; it then reviews the state-of the art in smart technologies that promise to enable a new generation of enterprise-specific models that will underpin future smart farming enterprises.

  17. The algebraic collective model

    International Nuclear Information System (INIS)

    Rowe, D.J.; Turner, P.S.

    2005-01-01

    A recently proposed computationally tractable version of the Bohr collective model is developed to the extent that we are now justified in describing it as an algebraic collective model. The model has an SU(1,1)xSO(5) algebraic structure and a continuous set of exactly solvable limits. Moreover, it provides bases for mixed symmetry collective model calculations. However, unlike the standard realization of SU(1,1), used for computing beta wave functions and their matrix elements in a spherical basis, the algebraic collective model makes use of an SU(1,1) algebra that generates wave functions appropriate for deformed nuclei with intrinsic quadrupole moments ranging from zero to any large value. A previous paper focused on the SO(5) wave functions, as SO(5) (hyper-)spherical harmonics, and computation of their matrix elements. This paper gives analytical expressions for the beta matrix elements needed in applications of the model and illustrative results to show the remarkable gain in efficiency that is achieved by using such a basis in collective model calculations for deformed nuclei

  18. The Earth System Model

    Science.gov (United States)

    Schoeberl, Mark; Rood, Richard B.; Hildebrand, Peter; Raymond, Carol

    2003-01-01

    The Earth System Model is the natural evolution of current climate models and will be the ultimate embodiment of our geophysical understanding of the planet. These models are constructed from component models - atmosphere, ocean, ice, land, chemistry, solid earth, etc. - merged together through a coupling program which is responsible for the exchange of data among the components. Climate models and future earth system models will have standardized modules, and these standards are now being developed by the ESMF project funded by NASA. The Earth System Model will have a variety of uses beyond climate prediction. The model can be used to build climate data records, making it the core of an assimilation system, and it can be used in OSSE experiments to evaluate proposed observing systems. The computing and storage requirements for the ESM appear to be daunting. However, the Japanese ES theoretical computing capability is already within 20% of the minimum requirements needed for some 2010 climate model applications. Thus it seems very possible that a focused effort to build an Earth System Model will achieve success.

  19. A three-dimensional relaxation model for calculation of atomic mixing and topography changes induced by ion beams

    International Nuclear Information System (INIS)

    Collins, R.; Perez-Martin, A.M.C.; Dominguez-Vazquez, J.; Jimenez-Rodriguez, J.J.

    1994-01-01

    A simple model for three-dimensional material relaxation associated with atomic mixing is presented. The relaxation of the solid to accommodate the extra effective displacement volume Ω of an implanted or relocated atom is modelled by treating the surrounding solid as an incompressible medium. This leads to a tractable general formalism which can be used to predict implant distribution and changes in surface topography induced by ion beams, both in monatomic and multicomponent targets. The two-component case is discussed in detail. (orig.)

  20. How Action Understanding can be Rational, Bayesian and Tractable

    NARCIS (Netherlands)

    Blokpoel, M.; Kwisthout, J.H.P.; Weide, Th.P. van der; Rooij, I.J.E.I. van; Ohlsson, S.; Catrambone, R.

    2010-01-01

    An important aspect of human sociality is our ability to understand the actions of others as being goal-directed. Recently, the now classic rational approach to explaining this ability has been given a formal incarnation in the Bayesian Inverse Planning (BIP) model of Baker, Saxe, and Tenenbaum

  1. Linear filtering of systems with memory and application to finance

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available We study the linear filtering problem for systems driven by continuous Gaussian processes V^(1) and V^(2) with memory described by two parameters. The processes V^(j) have the virtue that they possess stationary increments and simple semimartingale representations simultaneously. They allow for straightforward parameter estimations. After giving the semimartingale representations of V^(j) by innovation theory, we derive Kalman-Bucy-type filtering equations for the systems. We apply the result to the optimal portfolio problem for an investor with partial observations. We illustrate the tractability of the filtering algorithm by numerical implementations.
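
    The paper's filtering equations for Gaussian drivers with memory are not reproduced here; as a generic illustration of the recursion such filters reduce to, below is a standard discrete-time Kalman filter sketch in Python with placeholder state-space matrices.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Standard discrete-time Kalman filter:
    x_{k+1} = A x_k + w_k,  w_k ~ N(0, Q)
    y_k     = C x_k + v_k,  v_k ~ N(0, R)
    Returns the filtered state means."""
    x, P, out = x0.copy(), P0.copy(), []
    for yk in y:
        # predict
        x, P = A @ x, A @ P @ A.T + Q
        # update
        S = C @ P @ C.T + R
        K = P @ C.T @ np.linalg.inv(S)
        x = x + K @ (yk - C @ x)
        P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())
    return np.array(out)

# toy usage: noisy observations of a slowly mean-reverting scalar state
rng = np.random.default_rng(0)
A = np.array([[0.95]]); C = np.array([[1.0]])
Q = np.array([[0.01]]); R = np.array([[0.5]])
x_true, ys = 0.0, []
for _ in range(100):
    x_true = 0.95 * x_true + rng.normal(scale=0.1)
    ys.append(np.array([x_true + rng.normal(scale=np.sqrt(0.5))]))
xhat = kalman_filter(ys, A, C, Q, R, x0=np.zeros(1), P0=np.eye(1))
print("last filtered estimate:", xhat[-1])
```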

  2. A Neuronal Culture System to Detect Prion Synaptotoxicity.

    Directory of Open Access Journals (Sweden)

    Cheng Fang

    2016-05-01

    Full Text Available Synaptic pathology is an early feature of prion as well as other neurodegenerative diseases. Although the self-templating process by which prions propagate is well established, the mechanisms by which prions cause synaptotoxicity are poorly understood, due largely to the absence of experimentally tractable cell culture models. Here, we report that exposure of cultured hippocampal neurons to PrPSc, the infectious isoform of the prion protein, results in rapid retraction of dendritic spines. This effect is entirely dependent on expression of the cellular prion protein, PrPC, by target neurons, and on the presence of a nine-amino acid, polybasic region at the N-terminus of the PrPC molecule. Both protease-resistant and protease-sensitive forms of PrPSc cause dendritic loss. This system provides new insights into the mechanisms responsible for prion neurotoxicity, and it provides a platform for characterizing different pathogenic forms of PrPSc and testing potential therapeutic agents.

  3. Modelling Stochastic Route Choice Behaviours with a Closed-Form Mixed Logit Model

    Directory of Open Access Journals (Sweden)

    Xinjun Lai

    2015-01-01

    Full Text Available A closed-form mixed Logit approach is proposed to model stochastic route choice behaviours. It combines the advantages of Probit and Logit: a flexible form for correlation between alternatives together with a tractable closed-form expression; in addition, heterogeneity in alternative variance can be addressed. Paths are compared in pairs, where the strengths of the binary Probit can be fully exploited. The Probit-based aggregation is also used within a nested Logit structure. Case studies on both numerical and empirical examples demonstrate that the new method is valid and practical. This paper thus provides an operational solution for incorporating the normal distribution in route choice with an analytical expression.
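
    For orientation only, here is a minimal Python sketch of plain multinomial-logit route-choice probabilities, i.e., the baseline that the paper's Probit-based pairwise aggregation improves upon; it does not implement the proposed closed-form mixed Logit.

```python
import numpy as np

def mnl_route_probabilities(costs, theta=1.0):
    """Plain multinomial logit over alternative routes.
    costs: array of generalized path costs; theta: cost sensitivity."""
    v = -theta * np.asarray(costs, dtype=float)   # systematic utilities
    v -= v.max()                                  # numerical stabilization
    expv = np.exp(v)
    return expv / expv.sum()

# three alternative paths with generalized costs (e.g., minutes)
print(mnl_route_probabilities([12.0, 13.5, 15.0], theta=0.4))
```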

  4. Novel polyglutamine model uncouples proteotoxicity from aging.

    Science.gov (United States)

    Christie, Nakeirah T M; Lee, Amy L; Fay, Hannah G; Gray, Amelia A; Kikis, Elise A

    2014-01-01

    Polyglutamine expansions in certain proteins are the genetic determinants for nine distinct progressive neurodegenerative disorders and resultant age-related dementia. In these cases, neurodegeneration is due to the aggregation propensity and resultant toxic properties of the polyglutamine-containing proteins. We are interested in elucidating the underlying mechanisms of toxicity of the protein ataxin-3, in which a polyglutamine expansion is the genetic determinant for Machado-Joseph Disease (MJD), also referred to as spinocerebellar ataxia 3 (SCA3). To this end, we have developed a novel model for ataxin-3 protein aggregation, by expressing a disease-related polyglutamine-containing fragment of ataxin-3 in the genetically tractable body wall muscle cells of the model system C. elegans. Here, we demonstrate that this ataxin-3 fragment aggregates in a polyQ length-dependent manner in C. elegans muscle cells and that this aggregation is associated with cellular dysfunction. However, surprisingly, this aggregation and resultant toxicity was not influenced by aging. This is in contrast to polyglutamine peptides alone whose aggregation/toxicity is highly dependent on age. Thus, the data presented here not only describe a new polyglutamine model, but also suggest that protein context likely influences the cellular interactions of the polyglutamine-containing protein and thereby modulates its toxic properties.

  6. Five challenges in modelling interacting strain dynamics

    Directory of Open Access Journals (Sweden)

    Paul S. Wikramaratna

    2015-03-01

    Full Text Available Population epidemiological models where hosts can be infected sequentially by different strains have the potential to help us understand many important diseases. Researchers have in recent years started to develop and use such models, but the extra layer of complexity from multiple strains brings with it many technical challenges. It is therefore hard to build models which have realistic assumptions yet are tractable. Here we outline some of the main challenges in this area. First we begin with the fundamental question of how to translate from complex small-scale dynamics within a host to useful population models. Next we consider the nature of so-called “strain space”. We describe two key types of host heterogeneities, and explain how models could help generate a better understanding of their effects. Finally, for diseases with many strains, we consider the challenge of modelling how immunity accumulates over multiple exposures.

  7. Transcriptional delay stabilizes bistable gene networks.

    Science.gov (United States)

    Gupta, Chinmaya; López, José Manuel; Ott, William; Josić, Krešimir; Bennett, Matthew R

    2013-08-02

    Transcriptional delay can significantly impact the dynamics of gene networks. Here we examine how such delay affects bistable systems. We investigate several stochastic models of bistable gene networks and find that increasing delay dramatically increases the mean residence times near stable states. To explain this, we introduce a non-Markovian, analytically tractable reduced model. The model shows that stabilization is the consequence of an increased number of failed transitions between stable states. Each of the bistable systems that we simulate behaves in this manner.
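
    An illustrative Python sketch (not the authors' models, parameters arbitrary): a self-activating gene with delayed, Hill-type production is simulated by Poisson tau-leaping, and the mean residence time on either side of a threshold is compared for zero versus nonzero delay.

```python
import numpy as np

def simulate(tau=0.0, t_end=2000.0, dt=0.05, seed=0):
    """Birth-death model with delayed positive feedback:
    production rate a0 + a*X(t-tau)^h/(K^h + X(t-tau)^h), degradation rate d*X."""
    rng = np.random.default_rng(seed)
    a0, a, K, h, d = 4.0, 40.0, 20.0, 4.0, 1.0
    lag = max(1, int(round(tau / dt)))
    hist = [10.0] * lag                 # buffer holding X(t - tau)
    x, traj = 10.0, []
    for _ in range(int(t_end / dt)):
        xd = hist[0] if tau > 0 else x
        birth = a0 + a * xd**h / (K**h + xd**h)
        x += rng.poisson(birth * dt) - rng.poisson(d * x * dt)
        x = max(x, 0.0)
        hist = hist[1:] + [x]
        traj.append(x)
    return np.array(traj)

def mean_residence(traj, threshold=20.0):
    """Mean length (in steps) of runs spent on one side of the threshold."""
    side = traj > threshold
    runs = np.diff(np.flatnonzero(np.diff(side.astype(int))))
    return runs.mean() if runs.size else len(traj)

for tau in (0.0, 1.0):
    traj = simulate(tau=tau)
    print(f"tau={tau}: mean residence ~ {mean_residence(traj):.0f} steps")
```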

  8. Programming strategy for efficient modeling of dynamics in a population of heterogeneous cells.

    Science.gov (United States)

    Hald, Bjørn Olav; Garkier Hendriksen, Morten; Sørensen, Preben Graae

    2013-05-15

    Heterogeneity is a ubiquitous property of biological systems. Even in a genetically identical population of a single cell type, cell-to-cell differences are observed. Although the functional behavior of a given population is generally robust, the consequences of heterogeneity are fairly unpredictable. In heterogeneous populations, synchronization of events becomes a cardinal problem, particularly for phase coherence in oscillating systems. This article presents a novel strategy for construction of large-scale simulation programs of heterogeneous biological entities. The strategy is designed to be tractable, to handle heterogeneity and to handle computational cost issues simultaneously, primarily by writing a generator of the 'model to be simulated'; a toy version of this idea is sketched after this abstract. We apply the strategy to model glycolytic oscillations among thousands of yeast cells coupled through the extracellular medium. The usefulness is illustrated through (i) benchmarking, showing an almost linear relationship between model size and run time, and (ii) analysis of the resulting simulations, showing that, contrary to the experimental situation, synchronous oscillations are surprisingly hard to achieve, underpinning the need for tools to study heterogeneity. Thus, we present an efficient strategy to model biological heterogeneity, neglected by ordinary mean-field models. This tool is well suited to facilitate the elucidation of the physiologically vital problem of synchronization. The complete Python code is available as Supplementary Information. Contact: bjornhald@gmail.com or pgs@kiku.dk. Supplementary data are available at Bioinformatics online.
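
    As a toy illustration of the 'write a generator of the model to be simulated' idea (not the authors' glycolysis code), the Python sketch below programmatically builds a vectorized right-hand side for N heterogeneous cells coupled through a single extracellular pool and integrates it.

```python
import numpy as np
from scipy.integrate import solve_ivp

def build_model(n_cells, coupling=0.5, seed=0):
    """Generate the ODE right-hand side for n_cells heterogeneous cells
    (random decay rates k_i) coupled through one extracellular pool."""
    k = np.random.default_rng(seed).uniform(0.5, 1.5, n_cells)  # heterogeneity

    def rhs(t, y):
        x, ext = y[:-1], y[-1]                      # per-cell states + medium
        dx = 1.0 - k * x + coupling * (ext - x)     # production, decay, exchange
        dext = coupling * np.mean(x - ext)
        return np.append(dx, dext)

    return rhs

n = 1000
rhs = build_model(n)
sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(n + 1), rtol=1e-6)
print("spread of cell states at t=20:", sol.y[:-1, -1].std())
```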

  9. Catastrophe Insurance Modeled by Shot-Noise Processes

    Directory of Open Access Journals (Sweden)

    Thorsten Schmidt

    2014-02-01

    Full Text Available Shot-noise processes generalize compound Poisson processes in the following way: a jump (the shot) is followed by a decline (noise). This constitutes a useful model for insurance claims in many circumstances; claims due to natural disasters or self-exciting processes exhibit similar features. We give a general account of shot-noise processes with time-inhomogeneous drivers inspired by recent results in credit risk. Moreover, we derive a number of useful results for modeling and pricing with shot-noise processes. Besides this, we obtain some highly tractable examples that constitute a useful modeling tool for dynamic claims processes. The results can in particular be used for pricing Catastrophe Bonds (CAT bonds), a traded risk-linked security. Additionally, current results regarding the estimation of shot-noise processes are reviewed.
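
    A small Python sketch of the classical (time-homogeneous) shot-noise process S(t) = sum over T_i <= t of Y_i * exp(-delta*(t - T_i)) with Poisson arrivals; it illustrates the object being generalized, not the paper's time-inhomogeneous drivers.

```python
import numpy as np

def shot_noise(t_grid, lam=2.0, delta=1.5, jump_mean=1.0, seed=0):
    """S(t) = sum_{T_i <= t} Y_i * exp(-delta*(t - T_i)), homogeneous Poisson arrivals."""
    rng = np.random.default_rng(seed)
    horizon = t_grid[-1]
    n = rng.poisson(lam * horizon)
    arrivals = np.sort(rng.uniform(0.0, horizon, n))
    jumps = rng.exponential(jump_mean, n)
    decay = np.exp(-delta * np.clip(t_grid[:, None] - arrivals[None, :], 0.0, None))
    active = t_grid[:, None] >= arrivals[None, :]
    return (jumps * decay * active).sum(axis=1)

t = np.linspace(0.0, 10.0, 501)
s = shot_noise(t)
# by Campbell's theorem the stationary mean is lam*E[Y]/delta
print("long-run mean ~", 2.0 * 1.0 / 1.5, "; simulated mean:", s[200:].mean())
```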

  10. Numerical Simulation of a Tumor Growth Dynamics Model Using Particle Swarm Optimization.

    Science.gov (United States)

    Wang, Zhijun; Wang, Qing

    Tumor cell growth models involve high-dimensional parameter spaces that require computationally tractable methods to solve. To address a proposed tumor growth dynamics mathematical model, an instance of the particle swarm optimization method was implemented to speed up the search process in the multi-dimensional parameter space to find optimal parameter values that fit experimental data from mouse cancer cells. The fitness function, which measures the difference between calculated results and experimental data, was minimized in the numerical simulation process. The results and search efficiency of the particle swarm optimization method were compared to those from other evolutionary methods such as genetic algorithms.
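
    A compact Python sketch of the general approach, assuming a simple logistic growth law and synthetic data of my own; the paper's actual tumor model, data, and PSO settings are not reproduced.

```python
import numpy as np

def logistic(t, r, K, v0):
    # simple logistic growth used as a stand-in tumor-volume model
    return K / (1.0 + (K / v0 - 1.0) * np.exp(-r * t))

def pso(loss, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    x = rng.uniform(lo, hi, (n_particles, len(lo)))
    v = np.zeros_like(x)
    pbest, pbest_f = x.copy(), np.array([loss(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([loss(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# synthetic "measurements" from known parameters plus noise
rng = np.random.default_rng(1)
t_obs = np.linspace(0, 30, 16)
data = logistic(t_obs, r=0.3, K=100.0, v0=2.0) + rng.normal(0, 2.0, t_obs.size)
loss = lambda p: np.sum((logistic(t_obs, *p) - data) ** 2)
bounds = np.array([[0.01, 1.0], [10.0, 300.0], [0.1, 10.0]])   # r, K, v0
best, sse = pso(loss, bounds)
print("recovered (r, K, v0):", np.round(best, 3), "SSE:", round(sse, 1))
```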

  11. Human performance modeling for system of systems analytics.

    Energy Technology Data Exchange (ETDEWEB)

    Dixon, Kevin R.; Lawton, Craig R.; Basilico, Justin Derrick; Longsine, Dennis E. (INTERA, Inc., Austin, TX); Forsythe, James Chris; Gauthier, John Henry; Le, Hai D.

    2008-10-01

    A Laboratory-Directed Research and Development project was initiated in 2005 to investigate Human Performance Modeling in a System of Systems analytic environment. SAND2006-6569 and SAND2006-7911 document interim results from this effort; this report documents the final results. The problem is difficult because of the number of humans involved in a System of Systems environment and the generally poorly defined nature of the tasks that each human must perform. A two-pronged strategy was followed: one prong was to develop human models using a probability-based method similar to that first developed for relatively well-understood probability based performance modeling; another prong was to investigate more state-of-art human cognition models. The probability-based modeling resulted in a comprehensive addition of human-modeling capability to the existing SoSAT computer program. The cognitive modeling resulted in an increased understanding of what is necessary to incorporate cognition-based models to a System of Systems analytic environment.

  12. Entropy Evolution and Uncertainty Estimation with Dynamical Systems

    Directory of Open Access Journals (Sweden)

    X. San Liang

    2014-06-01

    Full Text Available This paper presents a comprehensive introduction and systematic derivation of the evolutionary equations for absolute entropy H and relative entropy D, some of which exist sporadically in the literature in different forms under different subjects, within the framework of dynamical systems. In general, both H and D are dissipated, and the dissipation bears a form reminiscent of the Fisher information; in the absence of stochasticity, dH/dt is connected to the rate of phase space expansion, and D stays invariant, i.e., the separation of two probability density functions is always conserved. These formulas are validated with linear systems, and put to application with the Lorenz system and a large-dimensional stochastic quasi-geostrophic flow problem. In the Lorenz case, H falls at a constant rate with time, implying that H will eventually become negative, a situation beyond the capability of the commonly used computational technique like coarse-graining and bin counting. For the stochastic flow problem, it is first reduced to a computationally tractable low-dimensional system, using a reduced model approach, and then handled through ensemble prediction. Both the Lorenz system and the stochastic flow system are examples of self-organization in the light of uncertainty reduction. The latter particularly shows that, sometimes stochasticity may actually enhance the self-organization process.
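
    To make the linear-system validation concrete, here is a small Python sketch (my own illustration with arbitrary parameters) for a scalar Ornstein-Uhlenbeck process dX = -aX dt + sigma dW: the variance obeys dv/dt = -2av + sigma^2, the absolute entropy of the Gaussian density is H = (1/2) ln(2*pi*e*v), and its rate splits into the phase-space contraction -a plus a Fisher-information-like term sigma^2/(2v).

```python
import numpy as np

a, sigma = 1.0, 0.5          # OU drift dX = -a X dt + sigma dW (toy parameters)
dt, steps = 1e-3, 5000
v = 2.0                      # initial variance of a Gaussian state density

for k in range(steps):
    v_new = v + dt * (-2.0 * a * v + sigma**2)
    if k % 1000 == 0:
        H = 0.5 * np.log(2.0 * np.pi * np.e * v)
        dHdt_numeric = 0.5 * np.log(v_new / v) / dt
        dHdt_formula = -a + sigma**2 / (2.0 * v)   # contraction + Fisher-like term
        print(f"t={k*dt:5.2f}  H={H:6.3f}  dH/dt numeric={dHdt_numeric:7.4f}  formula={dHdt_formula:7.4f}")
    v = v_new
```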

  13. Accelerated maximum likelihood parameter estimation for stochastic biochemical systems

    Directory of Open Access Journals (Sweden)

    Daigle Bernie J

    2012-05-01

    Full Text Available Abstract Background A prerequisite for the mechanistic simulation of a biochemical system is detailed knowledge of its kinetic parameters. Despite recent experimental advances, the estimation of unknown parameter values from observed data is still a bottleneck for obtaining accurate simulation results. Many methods exist for parameter estimation in deterministic biochemical systems; methods for discrete stochastic systems are less well developed. Given the probabilistic nature of stochastic biochemical models, a natural approach is to choose parameter values that maximize the probability of the observed data with respect to the unknown parameters, a.k.a. the maximum likelihood parameter estimates (MLEs. MLE computation for all but the simplest models requires the simulation of many system trajectories that are consistent with experimental data. For models with unknown parameters, this presents a computational challenge, as the generation of consistent trajectories can be an extremely rare occurrence. Results We have developed Monte Carlo Expectation-Maximization with Modified Cross-Entropy Method (MCEM2: an accelerated method for calculating MLEs that combines advances in rare event simulation with a computationally efficient version of the Monte Carlo expectation-maximization (MCEM algorithm. Our method requires no prior knowledge regarding parameter values, and it automatically provides a multivariate parameter uncertainty estimate. We applied the method to five stochastic systems of increasing complexity, progressing from an analytically tractable pure-birth model to a computationally demanding model of yeast-polarization. Our results demonstrate that MCEM2 substantially accelerates MLE computation on all tested models when compared to a stand-alone version of MCEM. Additionally, we show how our method identifies parameter values for certain classes of models more accurately than two recently proposed computationally efficient methods
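
    For the analytically tractable end of that spectrum, a short Python sketch (my own illustration): for a fully observed linear pure-birth (Yule) trajectory with per-individual rate lambda, the MLE is the number of birth events divided by the accumulated exposure, i.e. the time integral of the population size.

```python
import numpy as np

def simulate_yule(lam=0.4, n0=5, t_end=5.0, seed=0):
    """Gillespie simulation of a linear pure-birth process with total rate lam*N."""
    rng = np.random.default_rng(seed)
    t, n, exposure, births = 0.0, n0, 0.0, 0
    while True:
        wait = rng.exponential(1.0 / (lam * n))
        if t + wait > t_end:
            exposure += (t_end - t) * n
            return births, exposure
        t += wait
        exposure += wait * n
        n += 1
        births += 1

births, exposure = simulate_yule()
print("MLE of birth rate:", births / exposure, "(true value 0.4)")
```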

  14. Optimization of large-scale heterogeneous system-of-systems models.

    Energy Technology Data Exchange (ETDEWEB)

    Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Lee, Herbert K. H. (University of California, Santa Cruz, Santa Cruz, CA); Hart, William Eugene; Gray, Genetha Anne (Sandia National Laboratories, Livermore, CA); Woodruff, David L. (University of California, Davis, Davis, CA)

    2012-01-01

    Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.

  15. Robust Management of Combined Heat and Power Systems via Linear Decision Rules

    DEFF Research Database (Denmark)

    Zugno, Marco; Morales González, Juan Miguel; Madsen, Henrik

    2014-01-01

    The heat and power outputs of Combined Heat and Power (CHP) units are jointly constrained. Hence, the optimal management of systems including CHP units is a multicommodity optimization problem. Problems of this type are stochastic, owing to the uncertainty inherent both in the demand for heat and...... linear decision rules to guarantee both tractability and a correct representation of the dynamic aspects of the problem. Numerical results from an illustrative example confirm the value of the proposed approach....

  16. Efficient Work Team Scheduling: Using Psychological Models of Knowledge Retention to Improve Code Writing Efficiency

    Directory of Open Access Journals (Sweden)

    Michael J. Pelosi

    2014-12-01

    Full Text Available Development teams and programmers must retain critical information about their work during work intervals and gaps in order to improve future performance when work resumes. Despite time lapses, project managers want to maximize coding efficiency and effectiveness. By developing a mathematically justified, practically useful, and computationally tractable quantitative and cognitive model of learning and memory retention, this study establishes calculations designed to maximize scheduling payoff and optimize developer efficiency and effectiveness.

  17. Selected System Models

    Science.gov (United States)

    Schmidt-Eisenlohr, F.; Puñal, O.; Klagges, K.; Kirsche, M.

    Apart from the general issue of modeling the channel, the PHY and the MAC of wireless networks, there are specific modeling assumptions that are considered for different systems. In this chapter we consider three specific wireless standards and highlight modeling options for them. These are IEEE 802.11 (as example for wireless local area networks), IEEE 802.16 (as example for wireless metropolitan networks) and IEEE 802.15 (as example for body area networks). Each section on these three systems discusses also at the end a set of model implementations that are available today.

  18. Detection of Subtle Context-Dependent Model Inaccuracies in High-Dimensional Robot Domains.

    Science.gov (United States)

    Mendoza, Juan Pablo; Simmons, Reid; Veloso, Manuela

    2016-12-01

    Autonomous robots often rely on models of their sensing and actions for intelligent decision making. However, when operating in unconstrained environments, the complexity of the world makes it infeasible to create models that are accurate in every situation. This article addresses the problem of using potentially large and high-dimensional sets of robot execution data to detect situations in which a robot model is inaccurate-that is, detecting context-dependent model inaccuracies in a high-dimensional context space. To find inaccuracies tractably, the robot conducts an informed search through low-dimensional projections of execution data to find parametric Regions of Inaccurate Modeling (RIMs). Empirical evidence from two robot domains shows that this approach significantly enhances the detection power of existing RIM-detection algorithms in high-dimensional spaces.

  19. Vacation model for Markov machine repair problem with two heterogeneous unreliable servers and threshold recovery

    Science.gov (United States)

    Jain, Madhu; Meena, Rakesh Kumar

    2018-03-01

    A Markov model of a multi-component machining system comprising two unreliable heterogeneous servers and mixed standby support has been studied. The repair of broken-down machines is carried out according to a bi-level threshold policy for the activation of the servers. A server returns to render repair when the pre-specified workload of failed machines has built up. The first (second) repairman turns on only when a workload of N1 (N2) failed machines has accumulated in the system. Both servers may go on vacation when all the machines are in good condition and there are no pending repair jobs for the repairmen. The Runge-Kutta method is implemented to solve the set of governing equations used to formulate the Markov model. Various system metrics, including the mean queue length, machine availability, and throughput, are derived to determine the performance of the machining system. To demonstrate the computational tractability of the present investigation, a numerical illustration is provided. A cost function is also constructed to determine the optimal repair rate of the server by minimizing the expected cost incurred by the system. A hybrid soft computing method is used to develop an adaptive neuro-fuzzy inference system (ANFIS). The validation of the numerical results obtained by the Runge-Kutta approach is also facilitated by computational results generated by ANFIS.
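
    A simplified Python sketch in the same spirit (not the authors' full vacation model): the forward Kolmogorov equations dp/dt = p*Q for a finite machine-repair chain whose two heterogeneous servers switch on at thresholds N1 and N2 are integrated with a Runge-Kutta solver, and the mean number of failed machines is read off. Parameter values are illustrative.

```python
import numpy as np
from scipy.integrate import solve_ivp

M, lam, mu1, mu2, N1, N2 = 10, 0.2, 1.0, 0.7, 1, 3   # illustrative parameters

def repair_rate(k):
    # heterogeneous servers switched on at thresholds N1 and N2
    return mu1 * (k >= N1) + mu2 * (k >= N2)

# generator matrix Q over states k = number of failed machines (0..M)
Q = np.zeros((M + 1, M + 1))
for k in range(M + 1):
    if k < M:
        Q[k, k + 1] = (M - k) * lam        # a working machine fails
    if k > 0:
        Q[k, k - 1] = repair_rate(k)       # a failed machine is repaired
    Q[k, k] = -Q[k].sum()

p0 = np.zeros(M + 1); p0[0] = 1.0          # start with all machines up
sol = solve_ivp(lambda t, p: p @ Q, (0.0, 50.0), p0, method="RK45", rtol=1e-8)
p_final = sol.y[:, -1]
print("mean number of failed machines ~", np.dot(np.arange(M + 1), p_final))
```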

  20. Methods of orbit correction system optimization

    International Nuclear Information System (INIS)

    Chao, Yu-Chiu.

    1997-01-01

    Extracting optimal performance out of an orbit correction system is an important component of accelerator design and evaluation. The question of effectiveness vs. economy, however, is not always easily tractable. This is especially true in cases where betatron function magnitude and phase advance do not have smooth or periodic dependencies on the physical distance. In this report a program is presented using linear algebraic techniques to address this problem. A systematic recipe is given, supported with quantitative criteria, for arriving at an orbit correction system design with the optimal balance between performance and economy. The orbit referred to in this context can be generalized to include angle, path length, orbit effects on the optical transfer matrix, and simultaneous effects on multiple pass orbits
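
    The linear-algebra core of such an analysis can be sketched in a few lines of Python (a generic illustration, not the report's program): given a measured orbit x at the monitors and a response matrix R of orbit shifts per unit corrector kick, the corrector strengths minimizing ||x + R*theta|| follow from a (possibly truncated) SVD pseudoinverse, with the truncation trading correction performance against corrector economy.

```python
import numpy as np

def correct_orbit(x, R, n_sv=None):
    """Least-squares corrector kicks theta minimizing ||x + R theta||,
    keeping only the n_sv largest singular values (truncation regularizes
    against poorly determined corrector patterns)."""
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    if n_sv is not None:
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    theta = -Vt.T @ ((U.T @ x) / s)
    return theta, x + R @ theta            # kicks and residual orbit

rng = np.random.default_rng(0)
R = rng.normal(size=(40, 12))              # 40 monitors, 12 correctors (toy response matrix)
x = rng.normal(size=40)                    # measured orbit distortion
theta, resid = correct_orbit(x, R, n_sv=10)
print("rms orbit before/after:", x.std(), resid.std())
```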

  1. Ensemble Sampling

    OpenAIRE

    Lu, Xiuyuan; Van Roy, Benjamin

    2017-01-01

    Thompson sampling has emerged as an effective heuristic for a broad range of online decision problems. In its basic form, the algorithm requires computing and sampling from a posterior distribution over models, which is tractable only for simple special cases. This paper develops ensemble sampling, which aims to approximate Thompson sampling while maintaining tractability even in the face of complex models such as neural networks. Ensemble sampling dramatically expands on the range of applica...
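
    A simplified Python sketch of the idea for a linear bandit (a toy variant of my own, not the paper's exact algorithm): each ensemble member keeps a ridge-regression estimate fit to its own noise-perturbed copy of the observed rewards, and each round one member is drawn at random and its greedy action is played.

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_arms, n_members, T, noise = 5, 20, 10, 2000, 0.5
arms = rng.normal(size=(n_arms, d))
theta_true = rng.normal(size=d)

# per-member sufficient statistics of ridge regression on perturbed rewards
A = np.stack([np.eye(d) for _ in range(n_members)])   # X^T X + I
b = np.zeros((n_members, d))                          # X^T (y + perturbation)

regret, best_reward = 0.0, (arms @ theta_true).max()
for t in range(T):
    m = rng.integers(n_members)                       # sample an ensemble member
    theta_m = np.linalg.solve(A[m], b[m])
    a = int(np.argmax(arms @ theta_m))                # act greedily for that member
    y = arms[a] @ theta_true + rng.normal(scale=noise)
    regret += best_reward - arms[a] @ theta_true
    for j in range(n_members):                        # update every member with its
        y_j = y + rng.normal(scale=noise)             # own perturbed observation
        A[j] += np.outer(arms[a], arms[a])
        b[j] += y_j * arms[a]
print("average per-round regret:", regret / T)
```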

  2. Integrated Assessment of Energy Policies: A Decomposition of Top-Down and Bottom-Up

    Energy Technology Data Exchange (ETDEWEB)

    Boehringer, Christoph (Univ. of Oldenburg (Germany)); Rutherford, Thomas F. (ETH Zuerich (Switzerland))

    2008-01-15

    The formulation of market equilibrium problems as mixed complementarity problems (MCP) permits integration of bottom-up programming models of the energy system into top-down general equilibrium models of the overall economy. Yet, in practice the MCP approach loses analytical tractability of income effects when the energy system includes upper and lower bounds on many decision variables. We therefore advocate the use of complementarity methods to solve only the top-down economic equilibrium model and employ quadratic programming to solve the underlying bottom-up energy supply model. A simple iterative procedure reconciles the equilibrium prices and quantities between both models.

  3. Modeling and Analysis of Remote, Off-grid Microgrids

    Science.gov (United States)

    Madathil, Sreenath Chalil

    Over the past century the electric power industry has evolved to support the delivery of power over long distances with highly interconnected transmission systems. Despite this evolution, some remote communities are not connected to these systems. These communities rely on small, disconnected distribution systems, i.e., microgrids, to deliver power. Power distribution in most of these remote communities often depend on a type of microgrid called "off-grid microgrids". However, as microgrids often are not held to the same reliability standards as transmission grids, remote communities can be at risk to experience extended blackouts. Recent trends have also shown an increased use of renewable energy resources in power systems for remote communities. The increased penetration of renewable resources in power generation will require complex decision making when designing a resilient power system. This is mainly due to the stochastic nature of renewable resources that can lead to loss of load or line overload during their operations. In the first part of this thesis, we develop an optimization model and accompanying solution algorithm for capacity planning and operating microgrids that include N-1 security and other practical modeling features (e.g., AC power flow physics, component efficiencies and thermal limits). We demonstrate the effectiveness of our model and solution approach on two test systems: a modified version of the IEEE 13 node test feeder and a model of a distribution system in a remote Alaskan community. Once a tractable algorithm was identified to solve the above problem, we develop a mathematical model that includes topology design of microgrids. The topology design includes building new lines, making redundant lines, and analyzing N-1 contingencies on generators and lines. We develop a rolling horizon algorithm to efficiently analyze the model and demonstrate the strength of our algorithm in the same network. Finally, we develop a stochastic model that

  4. Modeller af komplicerede systemer

    DEFF Research Database (Denmark)

    Mortensen, J.

    This thesis, "Modeller af komplicerede systemer" (models of complicated systems), represents part of the requirements for the Danish Ph.D. degree. Assisting professor John Nørgaard-Nielsen, M.Sc.E.E., Ph.D., has been principal supervisor and professor Morten Lind, M.Sc.E.E., Ph.D., has been assisting supervisor. The thesis is concerned with conceptual modeling in relation to process control. Its purpose is to present, classify and exemplify the use of a set of qualitative model types. Such model types are useful in the early phase of modeling, where no structured methods are at hand. Although the models are general in character, this thesis emphasizes their use in relation to technical systems. All the presented models, with the exception of the types presented in chapter 2, are non-theoretical, non-formal conceptual network models. Two new model types are presented: 1) the System-Environment model, which describes the environment's interaction ...

  5. Modeling Novo Nordisk Production Systems

    DEFF Research Database (Denmark)

    Miller, Thomas Dedenroth

    1997-01-01

    This report describes attributes of models and systems, and how models can be used for the description of production systems. Special attention is given to the 'Theory of Domains'.

  6. A two-dimensional kinetic model of the scrape-off layer

    International Nuclear Information System (INIS)

    Catto, P.J.; Hazeltine, R.D.

    1993-09-01

    A two-dimensional (radius and poloidal angle), analytically tractable kinetic model of the ion (or energetic electron) behavior in the scrape-off layer of a limiter or divertor plasma in a tokamak is presented. The model determines the boundary conditions on the core ion density and ion temperature gradients, the power load on the limiter or divertor plates, the energy carried per particle to the walls, and the effective flux limit. The self-consistent electrostatic potential in the quasi-neutral scrape-off layer is determined by using the ion kinetic model of the layer along with a Maxwell-Boltzmann electron response that occurs because most electrons are reflected by the Debye sheaths (assumed to be infinitely thin) at the limiter or divertor plates

  7. Modeling Complex Systems

    International Nuclear Information System (INIS)

    Schreckenberg, M

    2004-01-01

    This book by Nino Boccara presents a compilation of model systems commonly termed as 'complex'. It starts with a definition of the systems under consideration and how to build up a model to describe the complex dynamics. The subsequent chapters are devoted to various categories of mean-field type models (differential and recurrence equations, chaos) and of agent-based models (cellular automata, networks and power-law distributions). Each chapter is supplemented by a number of exercises and their solutions. The table of contents looks a little arbitrary but the author took the most prominent model systems investigated over the years (and up until now there has been no unified theory covering the various aspects of complex dynamics). The model systems are explained by looking at a number of applications in various fields. The book is written as a textbook for interested students as well as serving as a comprehensive reference for experts. It is an ideal source for topics to be presented in a lecture on dynamics of complex systems. This is the first book on this 'wide' topic and I have long awaited such a book (in fact I planned to write it myself but this is much better than I could ever have written it!). Only section 6 on cellular automata is a little too limited to the author's point of view and one would have expected more about the famous Domany-Kinzel model (and more accurate citation!). In my opinion this is one of the best textbooks published during the last decade and even experts can learn a lot from it. Hopefully there will be an actualization after, say, five years since this field is growing so quickly. The price is too high for students but this, unfortunately, is the normal case today. Nevertheless I think it will be a great success! (book review)

  8. Gradient models in molecular biophysics: progress, challenges, opportunities

    Science.gov (United States)

    Bardhan, Jaydeep P.

    2013-12-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g., molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding nonlocal dielectric response. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain, and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost 40 years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The review concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.
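
    As one concrete point of reference (a standard form from the nonlocal-dielectric literature, not necessarily the specific model advocated in the review), the Lorentzian wavevector-dependent permittivity interpolates between the short-wavelength response and the static bulk permittivity over a solvent correlation length:

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Lorentzian nonlocal dielectric response in Fourier (wavevector) space:
%   \varepsilon_\infty : short-wavelength (optical/molecular) permittivity
%   \varepsilon_s      : static bulk permittivity
%   \lambda            : solvent correlation length
\begin{equation}
  \varepsilon(k) \;=\; \varepsilon_\infty
      \;+\; \frac{\varepsilon_s - \varepsilon_\infty}{1 + \lambda^{2} k^{2}} .
\end{equation}
\end{document}
```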

  9. Dipole saturated absorption modeling in gas phase: Dealing with a Gaussian beam

    Science.gov (United States)

    Dupré, Patrick

    2018-01-01

    With the advent of new accurate and sensitive spectrometers, cf. combining optical cavities (for absorption enhancement), the requirement for reliable molecular transition modeling is becoming more pressing. Unfortunately, there is no trivial approach which can provide a definitive formalism allowing us to solve the coupled systems of equations associated with nonlinear absorption. Here, we propose a general approach to deal with any spectral shape of the electromagnetic field interacting with a molecular species under saturation conditions. The development is specifically applied to Gaussian-shaped beams. To make the analytical expressions tractable, approximations are proposed. Finally, two or three numerical integrations are required for describing the Lamb-dip profile. The implemented model allows us to describe the saturated absorption under low pressure conditions where the broadening by the transit-time may dominate the collision rates. The model is applied to two specific overtone transitions of the molecular acetylene. The simulated line shapes are discussed versus the collision and the transit-time rates. The specific collisional and collision-free regimes are illustrated, while the Rabi frequency controls the intermediate regime. We illustrate how to recover the input parameters by fitting the simulated profiles.

  10. Málaga statistical distribution: the new universal analytical propagation model for atmospheric optical communications

    DEFF Research Database (Denmark)

    Jurado-Navas, Antonio

    2015-01-01

    ... in homogeneous, isotropic turbulence. The Málaga distribution was demonstrated to have the advantage of unifying most of the statistical models proposed so far in the scientific literature in a closed-form and mathematically tractable expression. Furthermore, it unifies most of the proposed statistical models for the irradiance fluctuations derived in the bibliography and provides, in addition, an excellent agreement with published plane-wave and spherical-wave simulation data over a wide range of turbulence conditions (weak to strong). In this communication, its different features are reviewed ... scintillation in atmospheric optical communication links under any turbulence conditions.

  11. Three-dimensional electromagnetic model of the pulsed-power Z-pinch accelerator

    Directory of Open Access Journals (Sweden)

    D. V. Rose

    2010-01-01

    Full Text Available A three-dimensional, fully electromagnetic model of the principal pulsed-power components of the 26-MA ZR accelerator [D. H. McDaniel et al., in Proceedings of the 5th International Conference on Dense Z-Pinches (AIP, New York, 2002, p. 23] has been developed. This large-scale simulation model tracks the evolution of electromagnetic waves through the accelerator’s intermediate-storage capacitors, laser-triggered gas switches, pulse-forming lines, water switches, triplate transmission lines, and water convolute to the vacuum insulator stack. The insulator-stack electrodes are coupled to a transmission-line circuit model of the four-level magnetically insulated vacuum-transmission-line section and double-post-hole convolute. The vacuum-section circuit model is terminated by a one-dimensional self-consistent dynamic model of an imploding z-pinch load. The simulation results are compared with electrical measurements made throughout the ZR accelerator, and are in good agreement with the data, especially for times until peak load power. This modeling effort demonstrates that 3D electromagnetic models of large-scale, multiple-module, pulsed-power accelerators are now computationally tractable. This, in turn, presents new opportunities for simulating the operation of existing pulsed-power systems used in a variety of high-energy-density-physics and radiographic applications, as well as even higher-power next-generation accelerators before they are constructed.

  12. An online model composition tool for system biology models.

    Science.gov (United States)

    Coskun, Sarp A; Cicek, A Ercument; Lai, Nicola; Dash, Ranjan K; Ozsoyoglu, Z Meral; Ozsoyoglu, Gultekin

    2013-09-05

    There are multiple representation formats for Systems Biology computational models, and the Systems Biology Markup Language (SBML) is one of the most widely used. SBML is used to capture, store, and distribute computational models by Systems Biology data sources (e.g., the BioModels Database) and researchers. Therefore, there is a need for all-in-one web-based solutions that support advanced SBML functionalities such as uploading, editing, composing, visualizing, simulating, querying, and browsing computational models. We present the design and implementation of the Model Composition Tool (Interface) within the PathCase-SB (PathCase Systems Biology) web portal. The tool helps users compose systems biology models to facilitate the complex process of merging systems biology models. We also present three tools that support the model composition tool, namely, (1) the Model Simulation Interface, which generates a visual plot of the simulation according to the user's input; (2) the iModel Tool, a platform for users to upload their own models to compose; and (3) the SimCom Tool, which provides a side-by-side comparison of models being composed in the same pathway. Finally, we provide a web site that hosts BioModels Database models and a separate web site that hosts SBML Test Suite models. The model composition tool (and the other three tools) can be used with little or no knowledge of the SBML document structure. For this reason, students or anyone who wants to learn about systems biology will benefit from the described functionalities. The SBML Test Suite models are a good starting point for beginners, and, for more advanced purposes, users will be able to access and employ models from the BioModels Database.
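
    As a small illustration of the kind of pre-processing that precedes model composition (and not the PathCase-SB tool itself), the sketch below uses the python-libsbml package, assumed to be installed, to list the species shared by two SBML files; the file names are hypothetical placeholders.

```python
# Minimal sketch of inspecting two SBML models before composing them,
# assuming the python-libsbml package is installed. File names are
# hypothetical; this is not the PathCase-SB composition tool.
import libsbml

def species_ids(path):
    # read an SBML file and return the set of species identifiers
    doc = libsbml.readSBMLFromFile(path)
    model = doc.getModel()
    if model is None:
        raise ValueError("could not parse %s" % path)
    return {model.getSpecies(i).getId() for i in range(model.getNumSpecies())}

a = species_ids("glycolysis.xml")      # hypothetical model files
b = species_ids("tca_cycle.xml")

print("shared species:", sorted(a & b))   # candidate merge points
print("only in model A:", sorted(a - b))
print("only in model B:", sorted(b - a))
```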

  13. Modeling strategic investment decisions in spatial markets

    International Nuclear Information System (INIS)

    Lorenczik, Stefan; Malischek, Raimund

    2014-01-01

    Markets for natural resources and commodities are often oligopolistic. In these markets, production capacities are key to strategic interaction between the oligopolists. We analyze how different market structures influence oligopolistic capacity investments and thereby affect supply, prices and rents in spatial natural resource markets, using mathematical programming models. The models comprise an investment period and a supply period in which players compete in quantities. We compare three models, one with perfect competition and two Cournot models, in which the product is either traded through long-term contracts or on spot markets in the supply period. Tractability and practicality of the approach are demonstrated in an application to the international metallurgical coal market. Results may vary substantially between the different models. The metallurgical coal market has recently made progress in moving away from long-term contracts and towards spot market-based trade. Based on our results, we conclude that this regime switch is likely to raise consumer rents but lower producer rents. Total welfare differs only negligibly.

  14. Modeling strategic investment decisions in spatial markets

    Energy Technology Data Exchange (ETDEWEB)

    Lorenczik, Stefan; Malischek, Raimund [Koeln Univ. (Germany). Energiewirtschaftliches Inst.]; Trueby, Johannes [International Energy Agency, 75 - Paris (France)]

    2014-04-15

    Markets for natural resources and commodities are often oligopolistic. In these markets, production capacities are key to strategic interaction between the oligopolists. We analyze how different market structures influence oligopolistic capacity investments and thereby affect supply, prices and rents in spatial natural resource markets, using mathematical programming models. The models comprise an investment period and a supply period in which players compete in quantities. We compare three models, one with perfect competition and two Cournot models, in which the product is either traded through long-term contracts or on spot markets in the supply period. Tractability and practicality of the approach are demonstrated in an application to the international metallurgical coal market. Results may vary substantially between the different models. The metallurgical coal market has recently made progress in moving away from long-term contracts and towards spot market-based trade. Based on our results, we conclude that this regime switch is likely to raise consumer rents but lower producer rents. Total welfare differs only negligibly.

  15. Introducing Model-Based System Engineering Transforming System Engineering through Model-Based Systems Engineering

    Science.gov (United States)

    2014-03-31

    Only front-matter fragments of this report are available in the record: a figure list (including entries for "Web Presentation Software" and "Published Web Page from Data Collection") and introductory text noting that the literature uses the terms Model-Based Engineering (MBE), Model-Driven Engineering (MDE), and Model-Based Systems Engineering.

  16. Quantum dynamics in open quantum-classical systems.

    Science.gov (United States)

    Kapral, Raymond

    2015-02-25

    Often quantum systems are not isolated and interactions with their environments must be taken into account. In such open quantum systems these environmental interactions can lead to decoherence and dissipation, which have a marked influence on the properties of the quantum system. In many instances the environment is well-approximated by classical mechanics, so that one is led to consider the dynamics of open quantum-classical systems. Since a full quantum dynamical description of large many-body systems is not currently feasible, mixed quantum-classical methods can provide accurate and computationally tractable ways to follow the dynamics of both the system and its environment. This review focuses on quantum-classical Liouville dynamics, one of several quantum-classical descriptions, and discusses the problems that arise when one attempts to combine quantum and classical mechanics; coherence and decoherence in quantum-classical systems; nonadiabatic dynamics; surface-hopping and mean-field theories and their relation to quantum-classical Liouville dynamics; and methods for simulating the dynamics.
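
    To make the idea of mixed quantum-classical propagation concrete, here is a minimal Ehrenfest (mean-field) sketch for a two-level system coupled to one classical harmonic coordinate. It is only a generic illustration of the class of methods the review discusses, not the quantum-classical Liouville method, and all parameters are arbitrary (hbar = M = 1).

```python
# Ehrenfest (mean-field) toy: two-level quantum system coupled to a single
# classical harmonic coordinate R. Arbitrary units with hbar = M = 1.
import numpy as np

delta, c, omega = 1.0, 0.5, 0.3     # coupling constants (illustrative)
dt, nsteps = 0.01, 2000

R, P = 1.0, 0.0                      # classical position and momentum
psi = np.array([1.0, 0.0], dtype=complex)   # quantum amplitudes

def h_q(R):
    # diabatic two-level Hamiltonian depending on the classical coordinate
    return np.array([[ c * R, delta],
                     [ delta, -c * R]])

for _ in range(nsteps):
    # quantum step: exact 2x2 unitary exp(-i H dt) via eigendecomposition
    H = h_q(R)
    w, V = np.linalg.eigh(H)
    psi = V @ (np.exp(-1j * w * dt) * (V.conj().T @ psi))
    # classical step: velocity Verlet with the mean-field force
    pop_diff = abs(psi[0])**2 - abs(psi[1])**2
    F = -omega**2 * R - c * pop_diff          # -dV/dR - <dH_q/dR>
    P += 0.5 * dt * F
    R += dt * P
    F_new = -omega**2 * R - c * (abs(psi[0])**2 - abs(psi[1])**2)
    P += 0.5 * dt * F_new

print("final populations:", abs(psi)**2, " R =", round(R, 3))
```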

  17. Realized GARCH: A Complete Model of Returns and Realized Measures of Volatility

    DEFF Research Database (Denmark)

    Hansen, Peter Reinhard; Huang, Zhuo (Albert); Shek, Howard Howan

    GARCH models have been successful in modeling financial returns. Still, much is to be gained by incorporating a realized measure of volatility in these models. In this paper we introduce a new framework for the joint modeling of returns and realized measures of volatility. The Realized GARCH...... framework nests most GARCH models as special cases and is, in many ways, a natural extension of standard GARCH models. We pay special attention to linear and log-linear Realized GARCH specifications. This class of models has several attractive features. It retains the simplicity and tractability...... to latent volatility. This equation facilitates a simple modeling of the dependence between returns and future volatility that is commonly referred to as the leverage effect. An empirical application with DJIA stocks and an exchange traded index fund shows that a simple Realized GARCH structure leads...
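
    A minimal simulation sketch of a log-linear Realized GARCH(1,1) is given below, assuming the standard three-equation structure (return equation, GARCH equation, and a measurement equation tying the realized measure to latent volatility). The parameter values are invented for illustration and are not estimates from the paper.

```python
# Simulation sketch of a log-linear Realized GARCH(1,1) with a leverage
# function tau(z) = tau1*z + tau2*(z**2 - 1). Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)
T = 1000
omega, beta, gamma = -0.2, 0.6, 0.35                       # GARCH equation
xi, phi, tau1, tau2, sigma_u = -0.4, 1.0, -0.07, 0.07, 0.4  # measurement eq.

log_h = np.empty(T)
r = np.empty(T)
log_x = np.empty(T)
log_h[0] = omega / (1 - beta - gamma * phi)   # rough unconditional level

for t in range(T):
    z = rng.standard_normal()
    r[t] = np.exp(0.5 * log_h[t]) * z                       # return equation
    # measurement equation: realized measure loads on latent volatility
    log_x[t] = xi + phi * log_h[t] + tau1 * z + tau2 * (z**2 - 1) \
               + sigma_u * rng.standard_normal()
    if t + 1 < T:
        log_h[t + 1] = omega + beta * log_h[t] + gamma * log_x[t]

print("sample std of simulated returns:", r.std().round(3))
```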

  18. Entanglement, decoherence and thermal relaxation in exactly solvable models

    International Nuclear Information System (INIS)

    Lychkovskiy, Oleg

    2011-01-01

    Exactly solvable models provide an opportunity to study different aspects of reduced quantum dynamics in detail. We consider the reduced dynamics of a single spin in finite XX and XY spin-1/2 chains. First we introduce a general expression describing the evolution of the reduced density matrix. This expression proves to be tractable when the combined closed system (i.e. open system plus environment) is integrable. Then we focus on comparing decoherence and thermalization timescales in the XX chain. We find that for a single spin these timescales are comparable, in contrast to what should be expected for a macroscopic body. This indicates that the process of quantum relaxation of a system with few accessible states cannot be separated into two distinct stages, decoherence and thermalization. Finally, we turn to finite-size effects in the time evolution of a single spin in the XY chain. We observe three consecutive stages of the evolution: regular evolution, partial revivals, and irregular (apparently chaotic) evolution. The duration of the regular stage is proportional to the number of spins in the chain. We observe a 'quiet and cold period' at the end of the regular stage, which breaks up abruptly at some threshold time.

  19. Zebrafish neurotransmitter systems as potential pharmacological and toxicological targets.

    Science.gov (United States)

    Rico, E P; Rosemberg, D B; Seibt, K J; Capiotti, K M; Da Silva, R S; Bonan, C D

    2011-01-01

    Recent advances in neurobiology have emphasized the study of brain structure and function and its association with numerous pathological and toxicological events. Neurotransmitters are substances that relay, amplify, and modulate electrical signals between neurons and other cells. Neurotransmitter signaling mediates rapid intercellular communication by interacting with cell surface receptors, activating second messenger systems and regulating the activity of ion channels. Changes in the functional balance of neurotransmitters have been implicated in the failure of central nervous system function. In addition, abnormalities in neurotransmitter production or functioning can be induced by several toxicological compounds, many of which are found in the environment. The zebrafish has been increasingly used as an animal model for biomedical research, primarily due to its genetic tractability and ease of maintenance. These features make this species a versatile tool for pre-clinical drug discovery and toxicological investigations. Here, we present a review regarding the role of different excitatory and inhibitory neurotransmitter systems in zebrafish, such as dopaminergic, serotoninergic, cholinergic, purinergic, histaminergic, nitrergic, glutamatergic, glycinergic, and GABAergic systems, and emphasizing their features as pharmacological and toxicological targets. The increase in the global knowledge of neurotransmitter systems in zebrafish and the elucidation of their pharmacological and toxicological aspects may lead to new strategies and appropriate research priorities to offer insights for biomedical and environmental research. Copyright © 2011 Elsevier Inc. All rights reserved.

  20. Particle Tracking Model (PTM) with Coastal Modeling System (CMS)

    Science.gov (United States)

    2015-11-04

    Coastal Inlets Research Program Particle Tracking Model (PTM) with Coastal Modeling System (CMS). The Particle Tracking Model (PTM) is a Lagrangian... currents and waves. The Coastal Inlets Research Program (CIRP) supports the PTM with the Coastal Modeling System (CMS), which provides coupled wave... and current forcing for PTM simulations. CMS-PTM is implemented in the Surface-water Modeling System, a GUI environment for input development...
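
    As a toy illustration of what a Lagrangian particle-tracking computation does (independent of the actual PTM/CMS software), the sketch below advects a few particles through a prescribed rotational velocity field with a midpoint time step; a real application would take its currents and waves from a coupled hydrodynamic model.

```python
# Toy Lagrangian particle tracking in a steady, prescribed 2D velocity field.
import numpy as np

def velocity(x, y):
    # solid-body rotation about the origin (arbitrary test field)
    return -y, x

def advect(x, y, dt, nsteps):
    for _ in range(nsteps):
        # midpoint (RK2) step
        u1, v1 = velocity(x, y)
        u2, v2 = velocity(x + 0.5 * dt * u1, y + 0.5 * dt * v1)
        x, y = x + dt * u2, y + dt * v2
    return x, y

# release a few particles and track them for one revolution (2*pi time units)
x0 = np.array([1.0, 0.5, 0.25])
y0 = np.zeros(3)
xf, yf = advect(x0, y0, dt=0.01, nsteps=int(2 * np.pi / 0.01))
print(np.round(xf, 3), np.round(yf, 3))   # particles end up near their start
```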

  1. Distinct Bacterial Communities Associated with the Coral Model Aiptasia in Aposymbiotic and Symbiotic States with Symbiodinium

    KAUST Repository

    Röthig, Till

    2016-11-18

    Coral reefs are in decline. The basic functional unit of coral reefs is the coral metaorganism or holobiont consisting of the cnidarian host animal, symbiotic algae of the genus Symbiodinium, and a specific consortium of bacteria (among others), but research is slow due to the difficulty of working with corals. Aiptasia has proven to be a tractable model system to elucidate the intricacies of cnidarian-dinoflagellate symbioses, but characterization of the associated bacterial microbiome is required to provide a complete and integrated understanding of holobiont function. In this work, we characterize and analyze the microbiome of aposymbiotic and symbiotic Aiptasia and show that bacterial associates are distinct in both conditions. We further show that key microbial associates can be cultured without their cnidarian host. Our results suggest that bacteria play an important role in the symbiosis of Aiptasia with Symbiodinium, a finding that underlines the power of the Aiptasia model system where cnidarian hosts can be analyzed in aposymbiotic and symbiotic states. The characterization of the native microbiome and the ability to retrieve culturable isolates contributes to the resources available for the Aiptasia model system. This provides an opportunity to comparatively analyze cnidarian metaorganisms as collective functional holobionts and as separated member species. We hope that this will accelerate research into understanding the intricacies of coral biology, which is urgently needed to develop strategies to mitigate the effects of environmental change.

  2. Using the Model Coupling Toolkit to couple earth system models

    Science.gov (United States)

    Warner, J.C.; Perlin, N.; Skyllingstad, E.D.

    2008-01-01

    Continued advances in computational resources are providing the opportunity to operate more sophisticated numerical models. Additionally, there is an increasing demand for multidisciplinary studies that include interactions between different physical processes. Therefore there is a strong desire to develop coupled modeling systems that utilize existing models and allow efficient data exchange and model control. The basic system would entail model "1" running on "M" processors and model "2" running on "N" processors, with efficient exchange of model fields at predetermined synchronization intervals. Here we demonstrate two coupled systems: the coupling of the ocean circulation model Regional Ocean Modeling System (ROMS) to the surface wave model Simulating WAves Nearshore (SWAN), and the coupling of ROMS to the atmospheric model Coupled Ocean Atmosphere Prediction System (COAMPS). Both coupled systems use the Model Coupling Toolkit (MCT) as a mechanism for operation control and inter-model distributed memory transfer of model variables. In this paper we describe requirements and other options for model coupling, explain the MCT library, ROMS, SWAN and COAMPS models, methods for grid decomposition and sparse matrix interpolation, and provide an example from each coupled system. Methods presented in this paper are clearly applicable for coupling of other types of models. © 2008 Elsevier Ltd. All rights reserved.
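
    The control flow described above, two models stepping independently and exchanging fields at fixed synchronization intervals, can be illustrated with the toy sketch below. The "models" are trivial stand-ins and no MPI or MCT calls are made; this only shows the coupling pattern.

```python
# Two toy models stepping independently and exchanging fields at fixed
# synchronization intervals (the pattern used in MCT-based couplings).
class OceanToy:
    def __init__(self):
        self.sst = 20.0          # field it exports (sea-surface temperature)
        self.wind = 0.0          # field it imports
    def step(self, dt):
        self.sst += 0.01 * dt * self.wind          # toy response to wind

class AtmosphereToy:
    def __init__(self):
        self.wind = 5.0          # field it exports
        self.sst = 0.0           # field it imports
    def step(self, dt):
        self.wind += 0.001 * dt * (self.sst - 15.0)  # toy response to SST

ocean, atmos = OceanToy(), AtmosphereToy()
dt, t_end, sync_interval = 60.0, 3600.0, 600.0   # seconds
t, next_sync = 0.0, sync_interval
while t < t_end:
    ocean.step(dt)
    atmos.step(dt)
    t += dt
    if t >= next_sync:          # exchange fields at the synchronization time
        ocean.wind, atmos.sst = atmos.wind, ocean.sst
        next_sync += sync_interval

print("final SST %.3f, final wind %.3f" % (ocean.sst, atmos.wind))
```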

  3. Modelling Creativity: Identifying Key Components through a Corpus-Based Approach.

    Science.gov (United States)

    Jordanous, Anna; Keller, Bill

    2016-01-01

    Creativity is a complex, multi-faceted concept encompassing a variety of related aspects, abilities, properties and behaviours. If we wish to study creativity scientifically, then a tractable and well-articulated model of creativity is required. Such a model would be of great value to researchers investigating the nature of creativity and in particular, those concerned with the evaluation of creative practice. This paper describes a unique approach to developing a suitable model of how creative behaviour emerges that is based on the words people use to describe the concept. Using techniques from the field of statistical natural language processing, we identify a collection of fourteen key components of creativity through an analysis of a corpus of academic papers on the topic. Words are identified which appear significantly often in connection with discussions of the concept. Using a measure of lexical similarity to help cluster these words, a number of distinct themes emerge, which collectively contribute to a comprehensive and multi-perspective model of creativity. The components provide an ontology of creativity: a set of building blocks which can be used to model creative practice in a variety of domains. The components have been employed in two case studies to evaluate the creativity of computational systems and have proven useful in articulating achievements of this work and directions for further research.

  4. Turn-based evolution in a simplified model of artistic creative process

    DEFF Research Database (Denmark)

    Dahlstedt, Palle

    2015-01-01

    Evolutionary computation has often been presented as a possible model for creativity in computers. In this paper, evolution is discussed in the light of a theoretical model of human artistic process, recently presented by the author. Some crucial differences between human artistic creativity......, and the results of initial experiments are presented and discussed. Artistic creativity is here modeled as an iterated turn-based process, alternating between a conceptual representation and a material representation of the work-to-be. Evolutionary computation is proposed as a heuristic solution to the principal...... and natural evolution are observed and discussed, also in the light of other creative processes occurring in nature. As a tractable way to overcome these limitations, a new kind of evolutionary implementation of creativity is proposed, based on a simplified version of the previously presented model...

  5. A stream-based mathematical model for distributed information processing systems - SysLab system model

    OpenAIRE

    Klein, Cornel; Rumpe, Bernhard; Broy, Manfred

    2014-01-01

    In the SysLab project we develop a software engineering method based on a mathematical foundation. The SysLab system model serves as an abstract mathematical model for information systems and their components. It is used to formalize the semantics of all description techniques used, such as object diagrams, state automata, sequence charts, and data-flow diagrams. Based on the requirements for such a reference model, we define the system model, including its different views and their relationships.

  6. A computational model predicting disruption of blood vessel development.

    Directory of Open Access Journals (Sweden)

    Nicole Kleinstreuer

    2013-04-01

    Full Text Available Vascular development is a complex process regulated by dynamic biological networks that vary in topology and state across different tissues and developmental stages. Signals regulating de novo blood vessel formation (vasculogenesis) and remodeling (angiogenesis) come from a variety of biological pathways linked to endothelial cell (EC) behavior, extracellular matrix (ECM) remodeling and the local generation of chemokines and growth factors. Simulating these interactions at a systems level requires sufficient biological detail about the relevant molecular pathways and associated cellular behaviors, and tractable computational models that offset mathematical and biological complexity. Here, we describe a novel multicellular agent-based model of vasculogenesis using the CompuCell3D (http://www.compucell3d.org/) modeling environment supplemented with semi-automatic knowledgebase creation. The model incorporates vascular endothelial growth factor signals, pro- and anti-angiogenic inflammatory chemokine signals, and the plasminogen activating system of enzymes and proteases linked to ECM interactions, to simulate nascent EC organization, growth and remodeling. The model was shown to recapitulate stereotypical capillary plexus formation and structural emergence of non-coded cellular behaviors, such as a heterologous bridging phenomenon linking endothelial tip cells together during formation of polygonal endothelial cords. Molecular targets in the computational model were mapped to signatures of vascular disruption derived from in vitro chemical profiling using the EPA's ToxCast high-throughput screening (HTS) dataset. Simulating the HTS data with the cell-agent based model of vascular development predicted adverse effects of a reference anti-angiogenic thalidomide analog, 5HPP-33, on in vitro angiogenesis with respect to both concentration-response and morphological consequences. These findings support the utility of cell agent-based models for simulating a...

  7. Modeling aluminum-air battery systems

    Science.gov (United States)

    Savinell, R. F.; Willis, M. S.

    The performance of a complete aluminum-air battery system was studied with a flowsheet model built from unit models of each battery system component. A plug flow model for heat transfer was used to estimate the amount of heat transferred from the electrolyte to the air stream. The effect of shunt currents on battery performance was found to be insignificant. Using the flowsheet simulator to analyze a 100-cell battery system now under development demonstrated that load current, aluminate concentration, and electrolyte temperature are the dominant variables controlling system performance. System efficiency was found to decrease as both load current and aluminate concentration increase. The flowsheet model illustrates the interdependence of the separate units in overall system performance.

  8. Modeling Complex Systems

    CERN Document Server

    Boccara, Nino

    2010-01-01

    Modeling Complex Systems, 2nd Edition, explores the process of modeling complex systems, providing examples from such diverse fields as ecology, epidemiology, sociology, seismology, and economics. It illustrates how models of complex systems are built and provides indispensable mathematical tools for studying their dynamics. This vital introductory text is useful for advanced undergraduate students in various scientific disciplines, and serves as an important reference book for graduate students and young researchers. This enhanced second edition includes recent research results and bibliographic references; extra footnotes which provide biographical information on cited scientists who have made significant contributions to the field; new and improved worked-out examples to aid a student's comprehension of the content; and exercises to challenge the reader and complement the material. Nino Boccara is also the author of Essentials of Mathematica: With Applications to Mathematics and Physics (Springer, 2007).

  9. Task-based data-acquisition optimization for sparse image reconstruction systems

    Science.gov (United States)

    Chen, Yujia; Lou, Yang; Kupinski, Matthew A.; Anastasio, Mark A.

    2017-03-01

    Conventional wisdom dictates that imaging hardware should be optimized by use of an ideal observer (IO) that exploits full statistical knowledge of the class of objects to be imaged, without consideration of the reconstruction method to be employed. However, accurate and tractable models of the complete object statistics are often difficult to determine in practice. Moreover, in imaging systems that employ compressive sensing concepts, imaging hardware and (sparse) image reconstruction are innately coupled technologies. We have previously proposed a sparsity-driven ideal observer (SDIO) that can be employed to optimize hardware by use of a stochastic object model that describes object sparsity. The SDIO and sparse reconstruction method can therefore be "matched" in the sense that they both utilize the same statistical information regarding the class of objects to be imaged. To efficiently compute SDIO performance, the posterior distribution is estimated by use of computational tools developed recently for variational Bayesian inference. Subsequently, the SDIO test statistic can be computed semi-analytically. The advantages of employing the SDIO instead of a Hotelling observer are systematically demonstrated in case studies in which magnetic resonance imaging (MRI) data acquisition schemes are optimized for signal detection tasks.

  10. Multi-stage IT project evaluation: The flexibility value obtained by implementing and resolving Berk, Green and Naik (2004) model

    Science.gov (United States)

    Abid, Fathi; Guermazi, Dorra

    2009-11-01

    In this paper, we evaluate a multi-stage information technology investment project, by implementing and resolving Berk, Green and Naik's (2004) model, which takes into account specific features of IT projects and considers the real option to suspend investment at each stage. We present a particular case of the model where the project value is the solution of an optimal control problem with a single state variable. In this case, the model is more intuitive and tractable. The case study confirms the practical potential of the model and highlights the importance of the real-option approach compared to classical discounted cash flow techniques in the valuation of IT projects.
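
    For intuition only, the sketch below values a staged project on a binomial lattice with the option to abandon before each investment, which is exactly the flexibility a static discounted-cash-flow calculation ignores. It is a generic compound-option illustration with made-up numbers, not an implementation of the Berk, Green and Naik (2004) model.

```python
# Staged investment with abandonment flexibility on a binomial lattice.
# All numbers are invented; this is not the Berk-Green-Naik model itself.
import numpy as np

V0, I, N = 100.0, 28.0, 4        # initial value, per-stage cost, stages
u, d, r = 1.5, 1.0 / 1.5, 0.05   # binomial up/down factors, risk-free rate
p = ((1 + r) - d) / (u - d)       # risk-neutral up probability
disc = 1.0 / (1 + r)

# values of the completed project at the end of the N stages
V = V0 * u ** np.arange(N, -1, -1) * d ** np.arange(0, N + 1)
value = V.copy()

# backward induction: at each earlier stage, invest I or abandon (value 0)
for _ in range(N):
    cont = disc * (p * value[:-1] + (1 - p) * value[1:])
    value = np.maximum(cont - I, 0.0)

static_npv = V0 - sum(I * disc ** k for k in range(N))
print("value of the staged project with abandonment option: %.2f" % value[0])
print("static NPV without flexibility:                      %.2f" % static_npv)
# with these numbers the static NPV is negative while the option value is
# positive: the flexibility to suspend/abandon makes the project worth starting
```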

  11. TP-model transformation-based-control design frameworks

    CERN Document Server

    Baranyi, Péter

    2016-01-01

    This book covers new aspects and frameworks of control, design, and optimization based on the TP model transformation and its various extensions. The author outlines the three main steps of polytopic and LMI-based control design: (1) development of the qLPV state-space model, (2) generation of the polytopic model, and (3) application of LMIs to derive the controller and observer. He goes on to describe why the literature has extensively studied LMI design but has not focused much on the second step, in part because the generation and manipulation of the polytopic form was not tractable in many cases. The author then shows how the TP model transformation facilitates this second step and hence reveals new directions, leading to powerful design procedures and the formulation of new questions. The chapters of this book, and the complex dynamical control tasks which they cover, are organized so as to present and analyze the beneficial aspects of the family of approaches (control, design, and optimization). Additionally, the b...

  12. Watershed System Model: The Essentials to Model Complex Human-Nature System at the River Basin Scale

    Science.gov (United States)

    Li, Xin; Cheng, Guodong; Lin, Hui; Cai, Ximing; Fang, Miao; Ge, Yingchun; Hu, Xiaoli; Chen, Min; Li, Weiyue

    2018-03-01

    Watershed system models are urgently needed to understand complex watershed systems and to support integrated river basin management. Early watershed modeling efforts focused on the representation of hydrologic processes, while the next-generation watershed models should represent the coevolution of the water-land-air-plant-human nexus in a watershed and provide capability of decision-making support. We propose a new modeling framework and discuss the know-how approach to incorporate emerging knowledge into integrated models through data exchange interfaces. We argue that the modeling environment is a useful tool to enable effective model integration, as well as create domain-specific models of river basin systems. The grand challenges in developing next-generation watershed system models include but are not limited to providing an overarching framework for linking natural and social sciences, building a scientifically based decision support system, quantifying and controlling uncertainties, and taking advantage of new technologies and new findings in the various disciplines of watershed science. The eventual goal is to build transdisciplinary, scientifically sound, and scale-explicit watershed system models that are to be codesigned by multidisciplinary communities.

  13. A Consistent Pricing Model for Index Options and Volatility Derivatives

    DEFF Research Database (Denmark)

    Cont, Rama; Kokholm, Thomas

    observed properties of variance swap dynamics and allows for jumps in volatility and returns. An affine specification using Lévy processes as building blocks leads to analytically tractable pricing formulas for options on variance swaps as well as efficient numerical methods for pricing of European......We propose and study a flexible modeling framework for the joint dynamics of an index and a set of forward variance swap rates written on this index, allowing options on forward variance swaps and options on the underlying index to be priced consistently. Our model reproduces various empirically...... options on the underlying asset. The model has the convenient feature of decoupling the vanilla skews from spot/volatility correlations and allowing for different conditional correlations in large and small spot/volatility moves. We show that our model can simultaneously fit prices of European options...

  14. Modeling the earth system

    Energy Technology Data Exchange (ETDEWEB)

    Ojima, D. [ed.]

    1992-12-31

    The 1990 Global Change Institute (GCI) on Earth System Modeling is the third of a series organized by the Office for Interdisciplinary Earth Studies to look in depth at particular issues critical to developing a better understanding of the earth system. The 1990 GCI on Earth System Modeling was organized around three themes: defining critical gaps in the knowledge of the earth system, developing simplified working models, and validating comprehensive system models. This book is divided into three sections that reflect these themes. Each section begins with a set of background papers offering a brief tutorial on the subject, followed by working group reports developed during the institute. These reports summarize the joint ideas and recommendations of the participants and bring to bear the interdisciplinary perspective that imbued the institute. Since the conclusion of the 1990 Global Change Institute, research programs, nationally and internationally, have moved forward to implement a number of the recommendations made at the institute, and many of the participants have maintained collegial interactions to develop research projects addressing the needs identified during the two weeks in Snowmass.

  15. Modeling cellular systems

    CERN Document Server

    Matthäus, Franziska; Pahle, Jürgen

    2017-01-01

    This contributed volume comprises research articles and reviews on topics connected to the mathematical modeling of cellular systems. These contributions cover signaling pathways, stochastic effects, cell motility and mechanics, pattern formation processes, as well as multi-scale approaches. All authors attended the workshop on "Modeling Cellular Systems" which took place in Heidelberg in October 2014. The target audience primarily comprises researchers and experts in the field, but the book may also be beneficial for graduate students.

  16. Probabilistic short-term forecasting of eruption rate at Kīlauea Volcano using a physics-based model

    Science.gov (United States)

    Anderson, K. R.

    2016-12-01

    Deterministic models of volcanic eruptions yield predictions of future activity conditioned on uncertainty in the current state of the system. Physics-based eruption models are well-suited for deterministic forecasting as they can relate magma physics with a wide range of observations. Yet, physics-based eruption forecasting is strongly limited by an inadequate understanding of volcanic systems, and the need for eruption models to be computationally tractable. At Kīlauea Volcano, Hawaii, episodic depressurization-pressurization cycles of the magma system generate correlated, quasi-exponential variations in ground deformation and surface height of the active summit lava lake. Deflations are associated with reductions in eruption rate, or even brief eruptive pauses, and thus partly control lava flow advance rates and associated hazard. Because of the relatively well-understood nature of Kīlauea's shallow magma plumbing system, and because more than 600 of these events have been recorded to date, they offer a unique opportunity to refine a physics-based effusive eruption forecasting approach and apply it to lava eruption rates over short (hours to days) time periods. A simple physical model of the volcano ascribes observed data to temporary reductions in magma supply to an elastic reservoir filled with compressible magma. This model can be used to predict the evolution of an ongoing event, but because the mechanism that triggers events is unknown, event durations are modeled stochastically from previous observations. A Bayesian approach incorporates diverse data sets and prior information to simultaneously estimate uncertain model parameters and future states of the system. Forecasts take the form of probability distributions for eruption rate or cumulative erupted volume at some future time. Results demonstrate the significant uncertainties that still remain even for short-term eruption forecasting at a well-monitored volcano - but also the value of a physics
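
    A toy version of such a probabilistic forecast is sketched below: the eruption rate is reduced while a deflation event is in progress and recovers exponentially once it ends; the event duration is drawn from a lognormal distribution standing in for the empirical catalogue of past events, and Monte Carlo sampling yields a distribution of cumulative erupted volume. Every number is invented for illustration.

```python
# Toy probabilistic forecast of cumulative erupted volume during and after a
# deflation event of uncertain duration. All parameter values are invented.
import numpy as np

rng = np.random.default_rng(1)
q0 = 2.0            # background eruption rate, m^3/s (made up)
drop = 0.7          # fractional reduction in rate during the event
tau = 3.0 * 3600    # recovery time scale after the event ends, s
horizon = 24 * 3600 # forecast horizon, s
dt = 60.0
t = np.arange(0.0, horizon, dt)

volumes = []
for _ in range(2000):
    t_end = rng.lognormal(mean=np.log(8 * 3600), sigma=0.5)   # event duration
    rate = np.where(t < t_end,
                    q0 * (1 - drop),
                    q0 * (1 - drop * np.exp(-(t - t_end) / tau)))
    volumes.append(np.trapz(rate, t))

volumes = np.array(volumes)
print("median erupted volume over 24 h: %.0f m^3" % np.median(volumes))
print("5th-95th percentile: %.0f - %.0f m^3"
      % tuple(np.percentile(volumes, [5, 95])))
```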

  17. Validation of Embedded System Verification Models

    NARCIS (Netherlands)

    Marincic, J.; Mader, Angelika H.; Wieringa, Roelf J.

    The result of a model-based requirements verification shows that the model of a system satisfies (or not) formalised system requirements. The verification result is correct only if the model represents the system adequately. No matter what modelling technique we use, what precedes the model

  18. Spike solutions in Gierer–Meinhardt model with a time dependent anomaly exponent

    Science.gov (United States)

    Nec, Yana

    2018-01-01

    Experimental evidence of complex dispersion regimes in natural systems, where the growth of the mean square displacement in time cannot be characterised by a single power, has been accruing for the past two decades. In such processes the exponent γ(t) in ⟨r²⟩ ∼ t^γ(t) at times might be approximated by a piecewise constant function, or it can be a continuous function. Variable-order differential equations are an emerging mathematical tool with a strong potential to model these systems. However, variable-order differential equations are not tractable by classical differential equation theory. This contribution illustrates how a classical method can be adapted to gain insight into a system of this type. Herein a variable-order Gierer–Meinhardt model is posed, a generic reaction-diffusion system of chemical origin. With a fixed order this system possesses a solution in the form of a constellation of arbitrarily situated localised pulses, when the components' diffusivity ratio is asymptotically small. The pattern was shown to exist subject to multiple step-like transitions between normal diffusion and sub-diffusion, as well as between distinct sub-diffusive regimes. The analytical approximation obtained permits qualitative analysis of the impact thereof. Numerical solution for typical cross-over scenarios revealed such features as earlier equilibration and non-monotonic excursions before attainment of equilibrium. The method is general and allows for an approximate numerical solution with any reasonably behaved γ(t).
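
    For orientation, the classical fixed-order Gierer–Meinhardt reaction-diffusion system, the starting point that the paper generalizes to a variable-order setting, can be simulated with a simple explicit finite-difference scheme. The sketch below uses illustrative parameters only (small activator diffusivity, a much more mobile inhibitor) and does not implement the variable-order model; it is meant to show the localized activator pulses referred to in the abstract.

```python
# Explicit finite-difference sketch of the classical (fixed-order)
# Gierer-Meinhardt system
#     a_t = eps2 * a_xx - a + a^2 / h,
#     tau * h_t = D * h_xx - h + a^2,
# with zero-flux boundaries. With eps2 << D the activator tends to form
# localized pulses. Parameter values are illustrative only.
import numpy as np

nx, L = 80, 1.0
dx = L / (nx - 1)
eps2, D, tau = 2e-3, 1.0, 0.5
dt, nsteps = 2e-5, 400000          # dt respects the explicit stability limit

rng = np.random.default_rng(0)
a = 1.0 + 0.1 * rng.standard_normal(nx)   # perturbed homogeneous state a = h = 1
h = np.ones(nx)

def lap(u):
    # 1D Laplacian with zero-flux (Neumann) boundaries
    out = np.empty_like(u)
    out[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    out[0] = 2.0 * (u[1] - u[0]) / dx**2
    out[-1] = 2.0 * (u[-2] - u[-1]) / dx**2
    return out

for _ in range(nsteps):
    a_new = a + dt * (eps2 * lap(a) - a + a**2 / h)
    h += (dt / tau) * (D * lap(h) - h + a**2)
    a = a_new

print("t = %.1f: activator min %.2f, max %.2f (homogeneous state is a = 1)"
      % (nsteps * dt, a.min(), a.max()))
```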

  19. Intestinal Stem Cell Niche Insights Gathered from Both In Vivo and Novel In Vitro Models

    Directory of Open Access Journals (Sweden)

    Nikolce Gjorevski

    2017-01-01

    Full Text Available Intestinal stem cells are located at the base of the crypts and are surrounded by a complex structure called the niche. This environment is composed mainly of epithelial cells and stroma, which provides signals that govern cell maintenance, proliferation, and differentiation. Understanding how the niche regulates stem cell fate by controlling developmental signaling pathways will help us to define how stem cells choose between self-renewal and differentiation and how they maintain their undifferentiated state. Tractable in vitro assay systems, which reflect the complexity of the in vivo situation but provide a higher level of control, would likely be crucial in identifying new players and mechanisms controlling stem cell function. Knowledge of the intestinal stem cell niche gathered from both in vivo and novel in vitro models may help us improve therapies for tumorigenesis and intestinal damage and make autologous intestinal transplants a feasible clinical practice.

  20. Advancing coupled human-earth system models: The integrated Earth System Model Project

    Science.gov (United States)

    Thomson, A. M.; Edmonds, J. A.; Collins, W.; Thornton, P. E.; Hurtt, G. C.; Janetos, A. C.; Jones, A.; Mao, J.; Chini, L. P.; Calvin, K. V.; Bond-Lamberty, B. P.; Shi, X.

    2012-12-01

    As human and biogeophysical models develop, opportunities for connections between them evolve and can be used to advance our understanding of human-earth systems interaction in the context of a changing climate. One such integration is taking place with the Community Earth System Model (CESM) and the Global Change Assessment Model (GCAM). A multi-disciplinary, multi-institution team has succeeded in integrating the GCAM integrated assessment model of human activity into CESM to dynamically represent the feedbacks between changing climate and human decision making, in the context of greenhouse gas mitigation policies. The first applications of this capability have focused on the feedbacks between climate change impacts on terrestrial ecosystem productivity and human decisions affecting future land use change, which are in turn connected to human decisions about energy systems and bioenergy production. These experiments have been conducted in the context of the RCP4.5 scenario, one of four pathways of future radiative forcing being used in CMIP5, which constrains future human-induced greenhouse gas emissions from energy and land activities to stabilize radiative forcing at 4.5 W/m2 (~650 ppm CO2 -eq) by 2100. When this pathway is run in GCAM with the climate feedback on terrestrial productivity from CESM, there are implications for both the land use and energy system changes required for stabilization. Early findings indicate that traditional definitions of radiative forcing used in scenario development are missing a critical component of the biogeophysical consequences of land use change and their contribution to effective radiative forcing. Initial full coupling of the two global models has important implications for how climate impacts on terrestrial ecosystems changes the dynamics of future land use change for agriculture and forestry, particularly in the context of a climate mitigation policy designed to reduce emissions from land use as well as energy systems

  1. A central role for TOR signalling in a yeast model for juvenile CLN3 disease

    Directory of Open Access Journals (Sweden)

    Michael E. Bond

    2015-11-01

    Full Text Available Yeasts provide an excellent genetically tractable eukaryotic system for investigating the function of genes in their biological context, and are especially relevant for those conserved genes that cause disease. We study the role of btn1, the orthologue of a human gene that underlies an early-onset neurodegenerative disease (juvenile CLN3 disease, a neuronal ceroid lipofuscinosis (NCL, also known as Batten disease), in the fission yeast Schizosaccharomyces pombe. A global screen for genetic interactions with btn1 highlighted a conserved key signalling hub in which multiple components functionally relate to this conserved disease gene. This signalling hub includes two major mitogen-activated protein kinase (MAPK) cascades, and centers on the Tor kinase complexes TORC1 and TORC2. We confirmed that yeast cells modelling CLN3 disease exhibit features consistent with dysfunction in the TORC pathways, and showed that modulating TORC function leads to a comprehensive rescue of defects in this yeast disease model. The same pathways may be novel targets in the development of therapies for the NCLs and related diseases.

  2. Artificial activation of toxin-antitoxin systems as an antibacterial strategy.

    Science.gov (United States)

    Williams, Julia J; Hergenrother, Paul J

    2012-06-01

    Toxin-antitoxin (TA) systems are unique modules that effect plasmid stabilization via post-segregational killing of the bacterial host. The genes encoding TA systems also exist on bacterial chromosomes, and it has been speculated that these are involved in a variety of cellular processes. Interest in TA systems has increased dramatically over the past 5 years as the ubiquitous nature of TA genes on bacterial genomes has been revealed. The exploitation of TA systems as an antibacterial strategy via artificial activation of the toxin has been proposed and has considerable potential; however, efforts in this area remain in the early stages and several major questions remain. This review investigates the tractability of targeting TA systems to kill bacteria, including fundamental requirements for success, recent advances, and challenges associated with artificial toxin activation. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities.

    Science.gov (United States)

    Bardhan, Jaydeep P

    2013-12-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics.
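
    For background, the single-mode (Lorentzian) nonlocal dielectric model commonly used in this literature replaces the constant solvent permittivity by a wavevector-dependent response; the form below is the generic textbook expression, not necessarily the exact model advocated in the paper.

```latex
% Lorentzian (single-mode) nonlocal dielectric response in Fourier space:
% the solvent permittivity interpolates between its short-wavelength value
% \varepsilon_\infty and its bulk static value \varepsilon_s over a
% correlation length \lambda.
\varepsilon(k) \;=\; \varepsilon_\infty
  \;+\; \frac{\varepsilon_s - \varepsilon_\infty}{1 + \lambda^2 k^2},
\qquad
\varepsilon(k \to 0) = \varepsilon_s, \quad
\varepsilon(k \to \infty) = \varepsilon_\infty .
```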

  4. Gradient Models in Molecular Biophysics: Progress, Challenges, Opportunities

    Science.gov (United States)

    Bardhan, Jaydeep P.

    2014-01-01

    In the interest of developing a bridge between researchers modeling materials and those modeling biological molecules, we survey recent progress in developing nonlocal-dielectric continuum models for studying the behavior of proteins and nucleic acids. As in other areas of science, continuum models are essential tools when atomistic simulations (e.g. molecular dynamics) are too expensive. Because biological molecules are essentially all nanoscale systems, the standard continuum model, involving local dielectric response, has basically always been dubious at best. The advanced continuum theories discussed here aim to remedy these shortcomings by adding features such as nonlocal dielectric response, and nonlinearities resulting from dielectric saturation. We begin by describing the central role of electrostatic interactions in biology at the molecular scale, and motivate the development of computationally tractable continuum models using applications in science and engineering. For context, we highlight some of the most important challenges that remain and survey the diverse theoretical formalisms for their treatment, highlighting the rigorous statistical mechanics that support the use and improvement of continuum models. We then address the development and implementation of nonlocal dielectric models, an approach pioneered by Dogonadze, Kornyshev, and their collaborators almost forty years ago. The simplest of these models is just a scalar form of gradient elasticity, and here we use ideas from gradient-based modeling to extend the electrostatic model to include additional length scales. The paper concludes with a discussion of open questions for model development, highlighting the many opportunities for the materials community to leverage its physical, mathematical, and computational expertise to help solve one of the most challenging questions in molecular biology and biophysics. PMID:25505358

  5. Pembangunan Model Restaurant Management System

    OpenAIRE

    Fredy Jingga; Natalia Limantara

    2014-01-01

    Model design for a Restaurant Management System aims to support the restaurant business process, in which the Restaurant Management System (RMS) helps the waitress and the chef interact with each other without the limitations of paper. This Restaurant Management System model was developed using an Agile methodology and is based on the PHP programming language. The database management system is MySQL. This web-based application model will enable the waitress and the chef to interact in real time, from the time they a...

  6. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1991-06-01

    The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. 3 refs., 1 fig
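
    A minimal numerical sketch of this kind of Markov calculation is given below, assuming two independent contingencies with constant failure and recovery rates; the rates are arbitrary examples, not values from the report. The mean time to simultaneous collapse is obtained by solving the linear system for the mean absorption times of the transient states.

```python
# Mean time until two independent contingencies are collapsed simultaneously,
# modelled as a 4-state continuous-time Markov chain with the "both collapsed"
# state absorbing. Rates are arbitrary illustrative values.
import numpy as np

lam1, lam2 = 1 / 50.0, 1 / 80.0    # failure rates (per unit time)
mu1, mu2 = 1 / 0.5, 1 / 2.0        # recovery rates

# transient states: 0 = both intact, 1 = only contingency 1 collapsed,
# 2 = only contingency 2 collapsed; transitions to the absorbing state
# appear only through the diagonal of this restricted generator.
Q = np.array([
    [-(lam1 + lam2),      lam1,            lam2         ],
    [ mu1,              -(mu1 + lam2),     0.0          ],
    [ mu2,                0.0,           -(mu2 + lam1)  ],
])

# mean time to absorption from each transient state: solve (-Q) t = 1
t_mean = np.linalg.solve(-Q, np.ones(3))
print("mean time to simultaneous collapse, starting from both intact: %.1f"
      % t_mean[0])
```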

  7. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1991-01-01

    The Double Contingency Principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative, intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive to search for a quantitative, probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies as a function of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess effectiveness of the DCP. (Author)

  8. A probabilistic model for estimating the waiting time until the simultaneous collapse of two contingencies

    International Nuclear Information System (INIS)

    Barnett, C.S.

    1992-01-01

    The double contingency principle (DCP) is widely applied to criticality safety practice in the United States. Most practitioners base their application of the principle on qualitative and intuitive assessments. The recent trend toward probabilistic safety assessments provides a motive for a search for a quantitative and probabilistic foundation for the DCP. A Markov model is tractable and leads to relatively simple results. The model yields estimates of mean time to simultaneous collapse of two contingencies, as functions of estimates of mean failure times and mean recovery times of two independent contingencies. The model is a tool that can be used to supplement the qualitative methods now used to assess the effectiveness of the DCP. (Author)

  9. On Modelling an Immune System

    OpenAIRE

    Monroy, Raúl; Saab, Rosa; Godínez, Fernando

    2004-01-01

    Immune systems of living organisms have been an abundant source of inspiration to contemporary computer scientists. Problem-solving strategies stemming from known immune system phenomena have been successfully applied to challenging problems of modern computing. However, research in artificial immune systems has overlooked establishing a coherent model of known immune system behaviour. This paper reports on a preliminary computer model of an immune system, where each immune system component...

  10. Multiple system modelling of waste management

    International Nuclear Information System (INIS)

    Eriksson, Ola; Bisaillon, Mattias

    2011-01-01

    Highlights: (1) Linking of models will provide a more complete, correct and credible picture of the systems. (2) The linking procedure is easy to perform and also leads to activation of project partners. (3) The simulation procedure is a bit more complicated and calls for the ability to run both models. Abstract: Due to increased environmental awareness, the planning and performance of waste management has become more and more complex. Waste management has therefore long been subject to different types of modelling. Another field with long experience of modelling and a systems perspective is energy systems. The two modelling traditions have developed side by side, but so far there have been very few attempts to combine them. Waste management systems can be linked to energy systems through incineration plants. The waste management system can be modelled at a quite detailed level, whereas the surrounding systems are modelled in a more simplistic way. This is a problem, as previous studies have shown that assumptions about the surrounding system often tend to be important for the conclusions. In this paper it is shown how two models, one for the district heating system (MARTES) and another for the waste management system (ORWARE), can be linked together. The strengths and weaknesses of model linking are discussed in comparison with simplistic assumptions about effects in the energy and waste management systems. It is concluded that the linking of models will provide a more complete, correct and credible picture of the consequences of different simultaneous changes in the systems. The linking procedure is easy to perform and also leads to activation of project partners. However, the simulation procedure is a bit more complicated and calls for the ability to run both models.

  11. Transforming Graphical System Models to Graphical Attack Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, Rene Rydhof

    2016-01-01

    Manually identifying possible attacks on an organisation is a complex undertaking; many different factors must be considered, and the resulting attack scenarios can be complex and hard to maintain as the organisation changes. System models provide a systematic representation of organisations...... approach to transforming graphical system models to graphical attack models in the form of attack trees. Based on an asset in the model, our transformations result in an attack tree that represents attacks by all possible actors in the model, after which the actor in question has obtained the asset....

  12. Study on Developments in Accident Investigation Methods: A Survey of the 'State-of-the-Art'

    Energy Technology Data Exchange (ETDEWEB)

    Hollnagel, Erik; Speziali, Josephine (Ecole des Mines de Paris, F-06904 Sophia Antipolis (France))

    2008-01-15

    The objective of this project was to survey the main accident investigation methods that have been developed since the early or mid-1990s. The motivation was the increasing frequency of accidents that defy explanations in simple terms, for instance cause-effect chains or 'human error'. Whereas the complexity of socio-technical systems is steadily growing across all industrial domains, including nuclear power production, accident investigation methods are only updated when their inability to account for novel types of accidents and incidents becomes inescapable. Accident investigation methods therefore typically lag behind the socio-technological developments by 20 years or more. The project first compiled a set of methods from the recognised scientific literature and in major research and development programs, excluding methods limited to risk assessment, technological malfunctions, human reliability, and safety management methods. An initial set of 21 methods was further reduced to seven by retaining only prima facie accident investigation methods and avoiding overlapping or highly similar methods. The second step was to develop a set of criteria used to characterise the methods. The starting point was Perrow's description of normal accidents in socio-technical systems, which used the dimensions of coupling, going from loose to tight, and interactions, going from linear to complex. For practical reasons, the second dimension was changed to that of tractability or how easy it is to describe the system, where the sub-criteria are the level of detail, the availability of an articulated model, and the system dynamics. On this basis the seven selected methods were characterised in terms of the systems - or conditions - they could account for, leading to the following four groups: methods suitable for systems that are loosely coupled and tractable, methods suitable for systems that are tightly coupled and tractable, methods suitable for systems that

  13. Study on Developments in Accident Investigation Methods: A Survey of the 'State-of-the-Art'

    International Nuclear Information System (INIS)

    Hollnagel, Erik; Speziali, Josephine

    2008-01-01

    The objective of this project was to survey the main accident investigation methods that have been developed since the early or mid-1990s. The motivation was the increasing frequency of accidents that defy explanations in simple terms, for instance cause-effect chains or 'human error'. Whereas the complexity of socio-technical systems is steadily growing across all industrial domains, including nuclear power production, accident investigation methods are only updated when their inability to account for novel types of accidents and incidents becomes inescapable. Accident investigation methods therefore typically lag behind the socio-technological developments by 20 years or more. The project first compiled a set of methods from the recognised scientific literature and in major research and development programs, excluding methods limited to risk assessment, technological malfunctions, human reliability, and safety management methods. An initial set of 21 methods was further reduced to seven by retaining only prima facie accident investigation methods and avoiding overlapping or highly similar methods. The second step was to develop a set of criteria used to characterise the methods. The starting point was Perrow's description of normal accidents in socio-technical systems, which used the dimensions of coupling, going from loose to tight, and interactions, going from linear to complex. For practical reasons, the second dimension was changed to that of tractability or how easy it is to describe the system, where the sub-criteria are the level of detail, the availability of an articulated model, and the system dynamics. On this basis the seven selected methods were characterised in terms of the systems - or conditions - they could account for, leading to the following four groups: methods suitable for systems that are loosely coupled and tractable, methods suitable for systems that are tightly coupled and tractable, methods suitable for systems that are loosely

  14. A Two-Stage Rural Household Demand Analysis: Microdata Evidence from Jiangsu Province, China

    OpenAIRE

    X.M. Gao; Eric J. Wailes; Gail L. Cramer

    1996-01-01

    In this paper we evaluate economic and demographic effects on China's rural household demand for nine food commodities: vegetables, pork, beef and lamb, poultry, eggs, fish, sugar, fruit, and grain; and five nonfood commodity groups: clothing, fuel, stimulants, housing, and durables. A two-stage budgeting allocation procedure is used to obtain an empirically tractable amalgamative demand system for food commodities, which combines an upper-level AIDS model and a lower-level GLES as a modeling f...
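
    For reference, the upper level of such a two-stage system is typically the standard AIDS budget-share equation shown below (textbook form with the Stone price index and the usual adding-up, homogeneity and symmetry restrictions); the paper's exact specification may differ.

```latex
% Standard AIDS budget-share equation (textbook form), with the Stone index
% approximation for the price level and the usual parameter restrictions.
w_i \;=\; \alpha_i + \sum_j \gamma_{ij} \ln p_j
      + \beta_i \ln\!\Big(\frac{m}{P}\Big),
\qquad \ln P \approx \sum_k w_k \ln p_k,
\\[4pt]
\sum_i \alpha_i = 1, \quad \sum_i \beta_i = 0, \quad
\sum_i \gamma_{ij} = 0, \quad \sum_j \gamma_{ij} = 0, \quad
\gamma_{ij} = \gamma_{ji}.
```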

  15. A Model-Based Systems Engineering Methodology for Employing Architecture In System Analysis: Developing Simulation Models Using Systems Modeling Language Products to Link Architecture and Analysis

    Science.gov (United States)

    2016-06-01

    The spiral model of system development, first introduced by Boehm, assumes that available technologies will change over the course of a project, whereas incorporating new system capabilities into the waterfall model would prove quite difficult.

  16. Multiple-Active Multiple-Passive Antenna Systems and Applications

    DEFF Research Database (Denmark)

    Tsakalaki, Elpiniki

    2013-01-01

    The chapter first introduces multiple-active multiple-passive (MAMP) antenna topologies, as explained in Sect. 8.1. Then, Sect. 8.2 proposes MAMP antenna structures with application to reconfigurable MIMO transmission in the presence of antenna mutual coupling under poor scattering channel conditions. For this purpose, the section presents an adaptive MAMP antenna...... system capable of changing its transmission parameters via passive radiators attached to tunable loads, according to the structure of the RF propagation channel. The hybrid MAMP array structure can be tractably analyzed using the active element response vector (instead of the classical steering vector...... adaptive MAMP system can be limited to practical dimensions whereas the passive antennas require no extra RF hardware, thus meeting the cost, space, and power constraints of the users’ mobile terminals. The simulation results show that the adaptive MAMP system, thanks to its “adaptivity”, is able to achieve...

  17. The filamentous fungus Sordaria macrospora as a genetic model to study fruiting body development.

    Science.gov (United States)

    Teichert, Ines; Nowrousian, Minou; Pöggeler, Stefanie; Kück, Ulrich

    2014-01-01

    Filamentous fungi are excellent experimental systems due to their short life cycles as well as easy and safe manipulation in the laboratory. They form three-dimensional structures with numerous different cell types and have a long tradition as genetic model organisms used to unravel basic mechanisms underlying eukaryotic cell differentiation. The filamentous ascomycete Sordaria macrospora is a model system for sexual fruiting body (perithecia) formation. S. macrospora is homothallic, i.e., self-fertile, easily genetically tractable, and well suited for large-scale genomics, transcriptomics, and proteomics studies. Specific features of its life cycle and the availability of a developmental mutant library make it an excellent system for studying cellular differentiation at the molecular level. In this review, we focus on recent developments in identifying gene and protein regulatory networks governing perithecia formation. A number of tools have been developed to genetically analyze developmental mutants and dissect transcriptional profiles at different developmental stages. Protein interaction studies allowed us to identify a highly conserved eukaryotic multisubunit protein complex, the striatin-interacting phosphatase and kinase complex and its role in sexual development. We have further identified a number of proteins involved in chromatin remodeling and transcriptional regulation of fruiting body development. Furthermore, we review the involvement of metabolic processes from both primary and secondary metabolism, and the role of nutrient recycling by autophagy in perithecia formation. Our research has uncovered numerous players regulating multicellular development in S. macrospora. Future research will focus on mechanistically understanding how these players are orchestrated in this fungal model system. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Bacterial Communities of Diverse Drosophila Species: Ecological Context of a Host–Microbe Model System

    Science.gov (United States)

    Bhatnagar, Srijak; Eisen, Jonathan A.; Kopp, Artyom

    2011-01-01

    Drosophila melanogaster is emerging as an important model of non-pathogenic host–microbe interactions. The genetic and experimental tractability of Drosophila has led to significant gains in our understanding of animal–microbial symbiosis. However, the full implications of these results cannot be appreciated without the knowledge of the microbial communities associated with natural Drosophila populations. In particular, it is not clear whether laboratory cultures can serve as an accurate model of host–microbe interactions that occur in the wild, or those that have occurred over evolutionary time. To fill this gap, we characterized natural bacterial communities associated with 14 species of Drosophila and related genera collected from distant geographic locations. To represent the ecological diversity of Drosophilids, examined species included fruit-, flower-, mushroom-, and cactus-feeders. In parallel, wild host populations were compared to laboratory strains, and controlled experiments were performed to assess the importance of host species and diet in shaping bacterial microbiome composition. We find that Drosophilid flies have taxonomically restricted bacterial communities, with 85% of the natural bacterial microbiome composed of only four bacterial families. The dominant bacterial taxa are widespread and found in many different host species despite the taxonomic, ecological, and geographic diversity of their hosts. Both natural surveys and laboratory experiments indicate that host diet plays a major role in shaping the Drosophila bacterial microbiome. Despite this, the internal bacterial microbiome represents only a highly reduced subset of the external bacterial communities, suggesting that the host exercises some level of control over the bacteria that inhabit its digestive tract. Finally, we show that laboratory strains provide only a limited model of natural host–microbe interactions. Bacterial taxa used in experimental studies are rare or absent in

  19. Modeling Multi-Level Systems

    CERN Document Server

    Iordache, Octavian

    2011-01-01

    This book is devoted to modeling of multi-level complex systems, a challenging domain for engineers, researchers and entrepreneurs, confronted with the transition from learning and adaptability to evolvability and autonomy for technologies, devices and problem solving methods. Chapter 1 introduces the multi-scale and multi-level systems and highlights their presence in different domains of science and technology. Methodologies such as random systems, non-Archimedean analysis and category theory, and specific techniques such as model categorification and integrative closure, are presented in chapter 2. Chapters 3 and 4 describe polystochastic models, PSM, and their developments. Categorical formulation of integrative closure offers the general PSM framework which serves as a flexible guideline for a large variety of multi-level modeling problems. Focusing on chemical engineering, pharmaceutical and environmental case studies, chapters 5 to 8 analyze mixing, turbulent dispersion and entropy production for multi-scale sy...

  20. Computing the non-Markovian coarse-grained interactions derived from the Mori-Zwanzig formalism in molecular systems: Application to polymer melts

    Science.gov (United States)

    Li, Zhen; Lee, Hee Sun; Darve, Eric; Karniadakis, George Em

    2017-01-01

    Memory effects are often introduced during coarse-graining of a complex dynamical system. In particular, a generalized Langevin equation (GLE) for the coarse-grained (CG) system arises in the context of Mori-Zwanzig formalism. Upon a pairwise decomposition, GLE can be reformulated into its pairwise version, i.e., non-Markovian dissipative particle dynamics (DPD). GLE models the dynamics of a single coarse particle, while DPD considers the dynamics of many interacting CG particles, with both CG systems governed by non-Markovian interactions. We compare two different methods for the practical implementation of the non-Markovian interactions in GLE and DPD systems. More specifically, a direct evaluation of the non-Markovian (NM) terms is performed in LE-NM and DPD-NM models, which requires the storage of historical information that significantly increases computational complexity. Alternatively, we use a few auxiliary variables in LE-AUX and DPD-AUX models to replace the non-Markovian dynamics with a Markovian dynamics in a higher dimensional space, leading to a much reduced memory footprint and computational cost. In our numerical benchmarks, the GLE and non-Markovian DPD models are constructed from molecular dynamics (MD) simulations of star-polymer melts. Results show that a Markovian dynamics with auxiliary variables successfully generates equivalent non-Markovian dynamics consistent with the reference MD system, while maintaining a tractable computational cost. Also, transient subdiffusion of the star-polymers observed in the MD system can be reproduced by the coarse-grained models. The non-interacting particle models, LE-NM/AUX, are computationally much cheaper than the interacting particle models, DPD-NM/AUX. However, the pairwise models with momentum conservation are more appropriate for correctly reproducing the long-time hydrodynamics characterised by an algebraic decay in the velocity autocorrelation function.
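
    The auxiliary-variable idea summarized above can be sketched in a few lines of Python. This is not the authors' implementation: it drops the noise term, uses a single exponential memory kernel, and treats one particle in a harmonic well, with all parameter values chosen arbitrarily for illustration. It shows that the history integral of a GLE can be replaced by one extra Markovian variable, up to discretization error.

```python
import numpy as np

# Deterministic GLE for one particle in a harmonic well (noise omitted for clarity):
#   dx/dt = v,   m dv/dt = -k x - integral_0^t K(t-s) v(s) ds,   K(t) = (gamma/tau) exp(-t/tau)
# Markovian embedding with one auxiliary variable z:
#   dz/dt = -z/tau - (gamma/tau) v,   m dv/dt = -k x + z
# z equals minus the history integral, so both formulations should agree up to O(dt).
m, k, gamma, tau = 1.0, 1.0, 0.5, 2.0        # illustrative parameters
dt, steps = 1e-3, 5000

# Direct evaluation of the memory integral: stores the full velocity history (expensive).
x1, v1, vhist, traj_direct = 1.0, 0.0, [], []
for n in range(steps):
    vhist.append(v1)
    lags = dt * np.arange(len(vhist))[::-1]                    # elapsed times t - s
    memory = dt * np.sum((gamma / tau) * np.exp(-lags / tau) * np.array(vhist))
    acc = (-k * x1 - memory) / m
    x1, v1 = x1 + dt * v1, v1 + dt * acc
    traj_direct.append(x1)

# Auxiliary-variable (Markovian) formulation: no history storage needed.
x2, v2, z, traj_aux = 1.0, 0.0, 0.0, []
for n in range(steps):
    acc = (-k * x2 + z) / m
    z_new = z + dt * (-z / tau - (gamma / tau) * v2)
    x2, v2, z = x2 + dt * v2, v2 + dt * acc, z_new
    traj_aux.append(x2)

print("max |x_direct - x_aux| =", max(abs(xa - xb) for xa, xb in zip(traj_direct, traj_aux)))
```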

  1. Brief history of agricultural systems modeling.

    Science.gov (United States)

    Jones, James W; Antle, John M; Basso, Bruno; Boote, Kenneth J; Conant, Richard T; Foster, Ian; Godfray, H Charles J; Herrero, Mario; Howitt, Richard E; Janssen, Sander; Keating, Brian A; Munoz-Carpena, Rafael; Porter, Cheryl H; Rosenzweig, Cynthia; Wheeler, Tim R

    2017-07-01

    Agricultural systems science generates knowledge that allows researchers to consider complex problems or take informed agricultural decisions. The rich history of this science exemplifies the diversity of systems and scales over which they operate and have been studied. Modeling, an essential tool in agricultural systems science, has been accomplished by scientists from a wide range of disciplines, who have contributed concepts and tools over more than six decades. As agricultural scientists now consider the "next generation" models, data, and knowledge products needed to meet the increasingly complex systems problems faced by society, it is important to take stock of this history and its lessons to ensure that we avoid re-invention and strive to consider all dimensions of associated challenges. To this end, we summarize here the history of agricultural systems modeling and identify lessons learned that can help guide the design and development of next generation of agricultural system tools and methods. A number of past events combined with overall technological progress in other fields have strongly contributed to the evolution of agricultural system modeling, including development of process-based bio-physical models of crops and livestock, statistical models based on historical observations, and economic optimization and simulation models at household and regional to global scales. Characteristics of agricultural systems models have varied widely depending on the systems involved, their scales, and the wide range of purposes that motivated their development and use by researchers in different disciplines. Recent trends in broader collaboration across institutions, across disciplines, and between the public and private sectors suggest that the stage is set for the major advances in agricultural systems science that are needed for the next generation of models, databases, knowledge products and decision support systems. The lessons from history should be

  2. The systems integration modeling system

    International Nuclear Information System (INIS)

    Danker, W.J.; Williams, J.R.

    1990-01-01

    This paper discusses the systems integration modeling system (SIMS), an analysis tool for the detailed evaluation of the structure and related performance of the Federal Waste Management System (FWMS) and its interface with waste generators. Its use for evaluations in support of system-level decisions as to FWMS configurations, the allocation, sizing, balancing and integration of functions among elements, and the establishment of system-preferred waste selection and sequencing methods and other operating strategies is presented. SIMS includes major analysis submodels which quantify the detailed characteristics of individual waste items, loaded casks and waste packages, simulate the detailed logistics of handling and processing discrete waste items and packages, and perform detailed cost evaluations

  3. On the use of a standard spreadsheet to model physical systems in school teaching*

    Science.gov (United States)

    Quale, Andreas

    2012-05-01

    In the teaching of physics at upper secondary school level (K10-K12), the students are generally taught to solve problems analytically, i.e. using the dynamics describing a system (typically in the form of differential equations) to compute its evolution in time, e.g. the motion of a body along a straight line or in a plane. This reduces the scope of problems, i.e. the kind of problems that are within students' capabilities. To make the tasks mathematically solvable, one is restricted to very idealized situations; more realistic problems are too difficult (or even impossible) to handle analytically with the mathematical abilities that may be expected from students at this level. For instance, ordinary ballistic trajectories under the action of gravity, when air resistance is included, have been 'out of reach'; in school textbooks such trajectories are generally assumed to take place in a vacuum. Another example is that according to Newton's law of universal gravitation satellites will in general move around a large central body in elliptical orbits, but the students can only deal with the special case where the orbit is circular, thus precluding (for example) a verification and discussion of Kepler's laws. It is shown that standard spreadsheet software offers a tool that can handle many such realistic situations in a uniform way, and display the results both numerically and graphically on a computer screen, quite independently of whether the formal description of the physical system itself is 'mathematically tractable'. The method employed, which is readily accessible to high school students, is to perform a numerical integration of the equations of motion, exploiting the spreadsheet's capability of successive iterations. The software is used to model and study motion of bodies in external force fields; specifically, ballistic trajectories in a homogeneous gravity field with air resistance and satellite motion in a centrally symmetric gravitational field. The
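
    The row-by-row spreadsheet iteration described above translates directly into a short loop. The sketch below is an illustrative transcription, not the article's spreadsheet: it integrates a ballistic trajectory with quadratic air resistance by forward-Euler steps and compares the resulting range with the vacuum formula. The drag coefficient, launch speed and time step are assumed values.

```python
import math

# Ballistic trajectory with quadratic air drag, integrated by simple forward-Euler
# steps -- the same successive-iteration scheme a spreadsheet performs row by row.
g = 9.81          # gravitational acceleration (m/s^2)
k = 0.02          # drag coefficient per unit mass (1/m), illustrative value
dt = 0.01         # time step (s)

x, y = 0.0, 0.0
v0, angle = 40.0, math.radians(45)
vx, vy = v0 * math.cos(angle), v0 * math.sin(angle)

t = 0.0
while y >= 0.0:
    speed = math.hypot(vx, vy)
    ax = -k * speed * vx               # drag opposes velocity, magnitude ~ speed^2
    ay = -g - k * speed * vy
    x, y = x + vx * dt, y + vy * dt    # one "spreadsheet row": update the state
    vx, vy = vx + ax * dt, vy + ay * dt
    t += dt

print(f"range with drag ~ {x:.1f} m after {t:.2f} s "
      f"(vacuum range would be {v0**2 * math.sin(2 * angle) / g:.1f} m)")
```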

  4. Service systems concepts, modeling, and programming

    CERN Document Server

    Cardoso, Jorge; Poels, Geert

    2014-01-01

    This SpringerBrief explores the internal workings of service systems. The authors propose a lightweight semantic model for an effective representation to capture the essence of service systems. Key topics include modeling frameworks, service descriptions and linked data, creating service instances, tool support, and applications in enterprises.Previous books on service system modeling and various streams of scientific developments used an external perspective to describe how systems can be integrated. This brief introduces the concept of white-box service system modeling as an approach to mo

  5. Stochastic Modelling of Energy Systems

    DEFF Research Database (Denmark)

    Andersen, Klaus Kaae

    2001-01-01

    is that the model structure has to be adequate for practical applications, such as system simulation, fault detection and diagnosis, and design of control strategies. This also reflects on the methods used for identification of the component models. The main result from this research is the identification......In this thesis dynamic models of typical components in Danish heating systems are considered. Emphasis is made on describing and evaluating mathematical methods for identification of such models, and on presentation of component models for practical applications. The thesis consists of seven...... research papers (case studies) together with a summary report. Each case study takes its starting point in typical heating system components and both, the applied mathematical modelling methods and the application aspects, are considered. The summary report gives an introduction to the scope

  6. A Mean-Variance Criterion for Economic Model Predictive Control of Stochastic Linear Systems

    DEFF Research Database (Denmark)

    Sokoler, Leo Emil; Dammann, Bernd; Madsen, Henrik

    2014-01-01

    , the tractability of the resulting optimal control problem is addressed. We use a power management case study to compare different variations of the mean-variance strategy with EMPC based on the certainty equivalence principle. The certainty equivalence strategy is much more computationally efficient than the mean......-variance strategies, but it does not account for the variance of the uncertain parameters. Open-loop simulations suggest that a single-stage mean-variance approach yields a significantly lower operating cost than the certainty equivalence strategy. In closed-loop, the single-stage formulation is overly conservative...... be modified to perform almost as well as the two-stage mean-variance formulation. Nevertheless, we argue that the mean-variance approach can be used both as a strategy for evaluating less computationally demanding methods such as the certainty equivalence method, and as an individual control strategy when
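
    As a hedged illustration of the contrast drawn above (not the paper's power management case study or its EMPC formulation), the toy one-step problem below compares a certainty-equivalence decision, which optimizes the cost at the mean of an uncertain price, with a mean-variance decision that also penalizes the scenario variance of the cost. All quantities (d, q, kappa, the price distribution) are assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy one-step economic decision with an uncertain price parameter p:
#   J(u, p) = p * u + q * (d - u)**2
# u : energy purchased, d : demand, q : penalty weight on unmet demand.
d, q, kappa = 10.0, 0.5, 0.05                    # illustrative values
p_scenarios = rng.normal(2.0, 1.5, size=5000)    # sampled uncertain price

u_grid = np.linspace(0.0, 12.0, 481)
J = p_scenarios[:, None] * u_grid[None, :] + q * (d - u_grid[None, :]) ** 2  # scenarios x inputs

# Certainty equivalence: optimize the cost evaluated at the mean parameter only.
J_ce = p_scenarios.mean() * u_grid + q * (d - u_grid) ** 2
u_ce = u_grid[np.argmin(J_ce)]

# Mean-variance criterion: trade expected cost against its variance over the scenarios.
mv = J.mean(axis=0) + kappa * J.var(axis=0)
u_mv = u_grid[np.argmin(mv)]

for name, u in (("certainty equivalence", u_ce), ("mean-variance", u_mv)):
    cost = p_scenarios * u + q * (d - u) ** 2
    print(f"{name:22s}: u = {u:5.2f}   E[J] = {cost.mean():6.2f}   Std[J] = {cost.std():5.2f}")
```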

  7. Test-driven modeling of embedded systems

    DEFF Research Database (Denmark)

    Munck, Allan; Madsen, Jan

    2015-01-01

    To benefit maximally from model-based systems engineering (MBSE) trustworthy high quality models are required. From the software disciplines it is known that test-driven development (TDD) can significantly increase the quality of the products. Using a test-driven approach with MBSE may have...... a similar positive effect on the quality of the system models and the resulting products and may therefore be desirable. To define a test-driven model-based systems engineering (TD-MBSE) approach, we must define this approach for numerous sub disciplines such as modeling of requirements, use cases...... suggest that our method provides a sound foundation for rapid development of high quality system models....

  8. Modeling on a PWR power conversion system with system program

    International Nuclear Information System (INIS)

    Gao Rui; Yang Yanhua; Lin Meng

    2007-01-01

    Based on the power conversion system of the nuclear and conventional islands of Daya Bay Power Station, this paper models the thermal-hydraulic systems of the primary and secondary loops of a PWR using the PWR best-estimate program RELAP5. To simulate the full-scope power conversion system, not only the traditional basic system models of the nuclear island but also the major system models of the conventional island are all considered and modeled. A comparison between the calculated results and the actual plant data shows a good match for Daya Bay Nuclear Power Station, and demonstrates the feasibility of simulating the full-scope power conversion system of a PWR with RELAP5. (authors)

  9. Analytical models approximating individual processes: a validation method.

    Science.gov (United States)

    Favier, C; Degallier, N; Menkès, C E

    2010-12-01

    Upscaling population models from fine to coarse resolutions, in space, time and/or level of description, allows the derivation of fast and tractable models based on a thorough knowledge of individual processes. The validity of such approximations is generally tested only on a limited range of parameter sets. A more general validation test, over a range of parameters, is proposed; this would estimate the error induced by the approximation, using the original model's stochastic variability as a reference. The method is illustrated by three examples taken from the field of epidemics transmitted by vectors that bite in a temporally cyclical pattern: to estimate whether an approximation over- or under-fits the original model; to invalidate an approximation; and to rank possible approximations by their quality. As a result, the application of the validation method to this field emphasizes the need to account for the vectors' biology in epidemic prediction models and to validate these against finer scale models. Copyright © 2010 Elsevier Inc. All rights reserved.
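
    The validation idea above, measuring the approximation error in units of the original model's own stochastic variability, can be sketched on a toy example unrelated to the paper's vector-borne epidemic models: a stochastic logistic process is compared with its deterministic mean-field approximation for a few parameter values. All rates and sizes below are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def stochastic_logistic(n0, r, K, steps, reps):
    """Discrete-time stochastic logistic growth: binomial births, binomial deaths."""
    n = np.full(reps, n0, dtype=float)
    for _ in range(steps):
        births = rng.binomial(n.astype(int), np.clip(r * (1 - n / K), 0, 1))
        deaths = rng.binomial(n.astype(int), 0.05)
        n = np.clip(n + births - deaths, 0, None)
    return n

def deterministic_logistic(n0, r, K, steps):
    """Mean-field (coarse) approximation of the same update rule."""
    n = float(n0)
    for _ in range(steps):
        n += n * r * (1 - n / K) - 0.05 * n
    return n

for r in (0.1, 0.2, 0.3):
    samples = stochastic_logistic(n0=10, r=r, K=500, steps=200, reps=500)
    approx = deterministic_logistic(n0=10, r=r, K=500, steps=200)
    # Validation statistic: approximation error in units of the model's own variability.
    z = abs(samples.mean() - approx) / samples.std()
    print(f"r={r:.2f}: stochastic mean={samples.mean():7.1f}  "
          f"deterministic={approx:7.1f}  |error|/std={z:.2f}")
```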

  10. Formal heterogeneous system modeling with SystemC

    DEFF Research Database (Denmark)

    Niaki, Seyed Hosein Attarzadeh; Jakobsen, Mikkel Koefoed; Sulonen, Tero

    2012-01-01

    Electronic System Level (ESL) design of embedded systems proposes raising the abstraction level of the design entry to cope with the increasing complexity of such systems. To exploit the benefits of ESL, design languages should allow specification of models which are a) heterogeneous, to describe...

  11. Model Driven Development of Data Sensitive Systems

    DEFF Research Database (Denmark)

    Olsen, Petur

    2014-01-01

    storage systems, where the actual values of the data are not relevant for the behavior of the system. For many systems the values are important. For instance the control flow of the system can be dependent on the input values. We call this type of system data sensitive, as the execution is sensitive...... to the values of variables. This thesis strives to improve model-driven development of such data-sensitive systems. This is done by addressing three research questions. In the first we combine state-based modeling and abstract interpretation, in order to ease modeling of data-sensitive systems, while allowing...... efficient model-checking and model-based testing. In the second we develop automatic abstraction learning used together with model learning, in order to allow fully automatic learning of data-sensitive systems to allow learning of larger systems. In the third we develop an approach for modeling and model-based...

  12. Mathematical Modeling Of Life-Support Systems

    Science.gov (United States)

    Seshan, Panchalam K.; Ganapathi, Balasubramanian; Jan, Darrell L.; Ferrall, Joseph F.; Rohatgi, Naresh K.

    1994-01-01

    Generic hierarchical model of life-support system developed to facilitate comparisons of options in design of system. Model represents combinations of interdependent subsystems supporting microbes, plants, fish, and land animals (including humans). Generic model enables rapid configuration of variety of specific life support component models for tradeoff studies culminating in single system design. Enables rapid evaluation of effects of substituting alternate technologies and even entire groups of technologies and subsystems. Used to synthesize and analyze life-support systems ranging from relatively simple, nonregenerative units like aquariums to complex closed-loop systems aboard submarines or spacecraft. Model, called Generic Modular Flow Schematic (GMFS), coded in such chemical-process-simulation languages as Aspen Plus and expressed as three-dimensional spreadsheet.

  13. Towards systems metabolic engineering in Pichia pastoris.

    Science.gov (United States)

    Schwarzhans, Jan-Philipp; Luttermann, Tobias; Geier, Martina; Kalinowski, Jörn; Friehs, Karl

    2017-11-01

    The methylotrophic yeast Pichia pastoris is firmly established as a host for the production of recombinant proteins, frequently outperforming other heterologous hosts. Already, a sizeable amount of systems biology knowledge has been acquired for this non-conventional yeast. By applying various omics-technologies, productivity features have been thoroughly analyzed and optimized via genetic engineering. However, challenging clonal variability, limited vector repertoire and insufficient genome annotation have hampered further developments. Yet, in the last few years a reinvigorated effort to establish P. pastoris as a host for both protein and metabolite production is visible. A variety of compounds from terpenoids to polyketides have been synthesized, often exceeding the productivity of other microbial systems. The clonal variability was systematically investigated and strategies formulated to circumvent untargeted events, thereby streamlining the screening procedure. Promoters with novel regulatory properties were discovered or engineered from existing ones. The genetic tractability was increased via the transfer of popular manipulation and assembly techniques, as well as the creation of new ones. A second generation of sequencing projects culminated in the creation of the second best functionally annotated yeast genome. In combination with landmark physiological insights and increased output of omics-data, a good basis for the creation of refined genome-scale metabolic models was created. The first application of model-based metabolic engineering in P. pastoris showcased the potential of this approach. Recent efforts to establish yeast peroxisomes for compartmentalized metabolite synthesis appear to fit ideally with the well-studied high capacity peroxisomal machinery of P. pastoris. Here, these recent developments are collected and reviewed with the aim of supporting the establishment of systems metabolic engineering in P. pastoris. Copyright © 2017. Published

  14. Amblypygids: Model Organisms for the Study of Arthropod Navigation Mechanisms in Complex Environments?

    Directory of Open Access Journals (Sweden)

    Daniel D Wiegmann

    2016-03-01

    Full Text Available Navigation is an ideal behavioral model for the study of sensory system integration and the neural substrates associated with complex behavior. For this broader purpose, however, it may be profitable to develop new model systems that are both tractable and sufficiently complex to ensure that information derived from a single sensory modality and path integration are inadequate to locate a goal. Here, we discuss some recent discoveries related to navigation by amblypygids, nocturnal arachnids that inhabit the tropics and sub-tropics. Nocturnal displacement experiments under the cover of a tropical rainforest reveal that these animals possess navigational abilities that are reminiscent, albeit on a smaller spatial scale, of true-navigating vertebrates. Specialized legs, called antenniform legs, which possess hundreds of olfactory and tactile sensory hairs, and vision appear to be involved. These animals also have enormous mushroom bodies, higher-order brain regions that, in insects, integrate contextual cues and may be involved in spatial memory. In amblypygids, the complexity of a nocturnal rainforest may impose navigational challenges that favor the integration of information derived from multimodal cues. Moreover, the movement of these animals is easily studied in the laboratory and putative neural integration sites of sensory information can be manipulated. Thus, amblypygids could serve as a model system for the discovery of neural substrates associated with a unique and potentially sophisticated navigational capability. The diversity of habitats in which amblypygids are found also offers an opportunity for comparative studies of sensory integration and ecological selection pressures on navigation mechanisms.

  15. Mobility Models for Systems Evaluation

    Science.gov (United States)

    Musolesi, Mirco; Mascolo, Cecilia

    Mobility models are used to simulate and evaluate the performance of mobile wireless systems and the algorithms and protocols at the basis of them. The definition of realistic mobility models is one of the most critical and, at the same time, difficult aspects of the simulation of applications and systems designed for mobile environments. There are essentially two possible types of mobility patterns that can be used to evaluate mobile network protocols and algorithms by means of simulations: traces and synthetic models [130]. Traces are obtained by means of measurements of deployed systems and usually consist of logs of connectivity or location information, whereas synthetic models are mathematical models, such as sets of equations, which try to capture the movement of the devices.

  16. Synchronization scenarios in the Winfree model of coupled oscillators

    Science.gov (United States)

    Gallego, Rafael; Montbrió, Ernest; Pazó, Diego

    2017-10-01

    Fifty years ago Arthur Winfree proposed a deeply influential mean-field model for the collective synchronization of large populations of phase oscillators. Here we provide a detailed analysis of the model for some special, analytically tractable cases. Adopting the thermodynamic limit, we derive an ordinary differential equation that exactly describes the temporal evolution of the macroscopic variables in the Ott-Antonsen invariant manifold. The low-dimensional model is then thoroughly investigated for a variety of pulse types and sinusoidal phase response curves (PRCs). Two structurally different synchronization scenarios are found, which are linked via the mutation of a Bogdanov-Takens point. From our results, we infer a general rule of thumb relating pulse shape and PRC offset with each scenario. Finally, we compare the exact synchronization threshold with the prediction of the averaging approximation given by the Kuramoto-Sakaguchi model. At the leading order, the discrepancy appears to behave as an odd function of the PRC offset.
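
    For readers who want to see the basic setup, here is a minimal direct simulation of the Winfree mean-field model, not the low-dimensional Ott-Antonsen reduction derived in the paper. The pulse shape, the sinusoidal PRC with offset beta, the frequency distribution and the coupling strength are all illustrative assumptions; the Kuramoto order parameter is printed as a simple synchrony measure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Winfree mean-field model:
#   dtheta_i/dt = omega_i + eps * Q(theta_i) * (1/N) * sum_j P(theta_j)
N, eps, beta, n_pulse = 2000, 0.6, 0.3, 5        # illustrative parameters
omega = 1.0 + 0.05 * rng.standard_normal(N)       # narrow spread of natural frequencies
theta = rng.uniform(0.0, 2.0 * np.pi, N)

_grid = np.linspace(0.0, 2.0 * np.pi, 4096, endpoint=False)
_norm = np.mean((1.0 + np.cos(_grid)) ** n_pulse)

def P(th):
    """Smooth pulse peaked at th = 0, normalised to unit circular average."""
    return (1.0 + np.cos(th)) ** n_pulse / _norm

def Q(th):
    """One common sinusoidal phase response curve with offset beta (an assumed form)."""
    return np.sin(beta) - np.sin(th + beta)

dt, steps = 0.01, 5000
for _ in range(steps):
    mean_field = np.mean(P(theta))                         # population-averaged pulse
    theta += dt * (omega + eps * Q(theta) * mean_field)     # explicit Euler step

r = np.abs(np.mean(np.exp(1j * theta)))                     # Kuramoto order parameter
print(f"order parameter r = {r:.3f}  (r close to 1 indicates synchronization)")
```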

  17. Modeling and estimating system availability

    International Nuclear Information System (INIS)

    Gaver, D.P.; Chu, B.B.

    1976-11-01

    Mathematical models to infer the availability of various types of more or less complicated systems are described. The analyses presented are probabilistic in nature and consist of three parts: a presentation of various analytic models for availability; a means of deriving approximate probability limits on system availability; and a means of statistical inference of system availability from sparse data, using a jackknife procedure. Various low-order redundant systems are used as examples, but extension to more complex systems is not difficult
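
    As a hedged sketch of the kind of low-order redundant-system availability calculation the report describes (the report's actual models and jackknife inference are not reproduced here), the snippet below combines the steady-state availability of a single repairable unit with a k-out-of-n redundancy formula. Failure and repair rates are assumed values.

```python
from math import comb

# Steady-state availability of one repairable unit with exponential failure rate lam
# and repair rate mu:  A = mu / (lam + mu)  (equivalently MTBF / (MTBF + MTTR)).
lam, mu = 1 / 1000.0, 1 / 24.0           # illustrative: MTBF 1000 h, MTTR 24 h
A = mu / (lam + mu)

def k_out_of_n_availability(k, n, a):
    """System is up if at least k of n identical, independent units are up."""
    return sum(comb(n, j) * a**j * (1 - a)**(n - j) for j in range(k, n + 1))

print(f"single unit availability     A = {A:.5f}")
print(f"1-out-of-2 (parallel) system   = {k_out_of_n_availability(1, 2, A):.7f}")
print(f"2-out-of-3 (majority) system   = {k_out_of_n_availability(2, 3, A):.7f}")
```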

  18. Drosophila melanogaster as a model organism to study nanotoxicity.

    Science.gov (United States)

    Ong, Cynthia; Yung, Lin-Yue Lanry; Cai, Yu; Bay, Boon-Huat; Baeg, Gyeong-Hun

    2015-05-01

    Drosophila melanogaster has been used as an in vivo model organism for the study of genetics and development for more than 100 years. Recently, the fruit fly Drosophila has also been developed as an in vivo model organism for toxicology studies, in particular in the field of nanotoxicity. The incorporation of nanomaterials into consumer and biomedical products is a cause for concern, as nanomaterials are often associated with toxicity in many in vitro studies. In vivo animal studies of the toxicity of nanomaterials with rodents and other mammals are, however, limited due to high operational cost and ethical objections. Hence, Drosophila, a genetically tractable organism with distinct developmental stages and a short life cycle, serves as an ideal organism to study nanomaterial-mediated toxicity. This review discusses the basic biology of Drosophila, the toxicity of nanomaterials, as well as how the Drosophila model can be used to study the toxicity of various types of nanomaterials.

  19. Grey Box Modelling of Hydrological Systems

    DEFF Research Database (Denmark)

    Thordarson, Fannar Ørn

    of two papers where the stochastic differential equation based model is used for sewer runoff from a drainage system. A simple model is used to describe a complex rainfall-runoff process in a catchment, but the stochastic part of the system is formulated to include the increasing uncertainty when...... rainwater flows through the system, as well as describe the lower limit of the uncertainty when the flow approaches zero. The first paper demonstrates in detail the grey box model and all related transformations required to obtain a feasible model for the sewer runoff. In the last paper this model is used......The main topic of the thesis is grey box modelling of hydrologic systems, as well as formulation and assessment of their embedded uncertainties. Grey box model is a combination of a white box model, a physically-based model that is traditionally formulated using deterministic ordinary differential...
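
    A minimal sketch, under stated assumptions, of the kind of stochastic-differential-equation description mentioned above: an Euler-Maruyama simulation of a sewer flow whose diffusion term grows with the flow and vanishes as the flow approaches zero. The SDE form, the synthetic rain input and all parameters are illustrative and are not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(7)

# Illustrative grey-box style SDE for sewer flow Q (not the thesis' actual model):
#   dQ = (1/K) * (a * rain(t) - Q) dt + sigma * Q**gamma dW
# The state-dependent diffusion makes the uncertainty grow with the flow and shrink
# towards zero as the flow approaches zero, as described in the abstract.
K, a, sigma, gamma = 2.0, 5.0, 0.15, 0.8      # assumed parameters
dt, steps, n_paths = 1.0 / 60, 6 * 60, 200     # six hours at one-minute resolution

def rain(t_hours):
    """Synthetic rain event: a one-hour block of rainfall starting at t = 1 h."""
    return 2.0 if 1.0 <= t_hours < 2.0 else 0.0

Q = np.full(n_paths, 0.05)
peak = np.zeros(n_paths)
for n in range(steps):
    t = n * dt
    drift = (a * rain(t) - Q) / K
    diffusion = sigma * np.maximum(Q, 0.0) ** gamma
    Q = np.maximum(Q + drift * dt + diffusion * np.sqrt(dt) * rng.standard_normal(n_paths), 0.0)
    peak = np.maximum(peak, Q)

print(f"simulated peak flow: median={np.median(peak):.2f}, "
      f"5-95% range=[{np.quantile(peak, 0.05):.2f}, {np.quantile(peak, 0.95):.2f}]")
```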

  20. Model systems in photosynthesis research

    International Nuclear Information System (INIS)

    Katz, J.J.; Hindman, J.C.

    1981-01-01

    After a general discussion of model studies in photosynthesis research, three recently developed model systems are described. The current status of covalently linked chlorophyll pairs as models for P700 and P865 is first briefly reviewed. Mg-tris(pyrochlorophyllide)1,1,1-tris(hydroxymethyl) ethane triester in its folded configuration is then discussed as a rudimentary antenna-photoreaction center model. Finally, self-assembled chlorophyll systems that contain a mixture of monomeric, oligomeric and special pair chlorophyll are shown to have fluorescence emission characteristics that resemble those of intact Tribonema aequale at room temperature in that both show fluorescence emission at 675 and 695 nm. In the self-assembled systems the wavelength of the emitted fluorescence depends on the wavelength of excitation, arguing that energy transfer between different chlorophyll species in these systems may be more complex than previously suspected

  1. Reverse engineering of logic-based differential equation models using a mixed-integer dynamic optimization approach.

    Science.gov (United States)

    Henriques, David; Rocha, Miguel; Saez-Rodriguez, Julio; Banga, Julio R

    2015-09-15

    Systems biology models can be used to test new hypotheses formulated on the basis of previous knowledge or new experimental data, contradictory with a previously existing model. New hypotheses often come in the shape of a set of possible regulatory mechanisms. This search is usually not limited to finding a single regulation link, but rather a combination of links subject to great uncertainty or no information about the kinetic parameters. In this work, we combine a logic-based formalism, to describe all the possible regulatory structures for a given dynamic model of a pathway, with mixed-integer dynamic optimization (MIDO). This framework aims to simultaneously identify the regulatory structure (represented by binary parameters) and the real-valued parameters that are consistent with the available experimental data, resulting in a logic-based differential equation model. The alternative to this would be to perform real-valued parameter estimation for each possible model structure, which is not tractable for models of the size presented in this work. The performance of the method presented here is illustrated with several case studies: a synthetic pathway problem of signaling regulation, a two-component signal transduction pathway in bacterial homeostasis, and a signaling network in liver cancer cells. Supplementary data are available at Bioinformatics online. julio@iim.csic.es or saezrodriguez@ebi.ac.uk. © The Author 2015. Published by Oxford University Press.
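
    The combinatorial setting described above can be made concrete with a toy sketch of the brute-force alternative the authors mention: fitting real-valued parameters separately for every candidate binary regulation structure (the MIDO formulation itself is not reproduced). The two-regulator logic-based ODE, the OR-gate combination, the experimental conditions and all parameter values are assumptions.

```python
import itertools
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Toy logic-based ODE for one target x with two candidate regulators u1, u2:
#   dx/dt = k * f(u; b) - deg * x,   f = OR of active inputs = 1 - prod_i (1 - b_i * u_i)
# The binary parameters b_i switch each candidate regulation on or off.
def simulate(b, k, deg, u, dt=0.1, steps=200, x0=0.0):
    x, traj = x0, []
    for _ in range(steps):
        act = 1.0 - (1.0 - b[0] * u[0]) * (1.0 - b[1] * u[1])
        x += dt * (k * act - deg * x)
        traj.append(x)
    return np.array(traj)

conditions = [(1.0, 0.0), (0.0, 1.0)]         # two perturbation experiments

def simulate_all(b, k, deg):
    return np.concatenate([simulate(b, k, deg, u) for u in conditions])

# Synthetic "data": the true structure uses only the first regulator.
data = simulate_all((1, 0), k=2.0, deg=0.5) + rng.normal(0.0, 0.02, 400)

best = None
for b in itertools.product((0, 1), repeat=2):             # enumerate all binary structures
    fit = least_squares(lambda p, b=b: simulate_all(b, p[0], p[1]) - data,
                        x0=[1.0, 1.0], bounds=([0.0, 0.01], [10.0, 10.0]))
    sse = float(np.sum(fit.fun ** 2))
    print(f"structure b={b}: k={fit.x[0]:.2f}, deg={fit.x[1]:.2f}, SSE={sse:.3f}")
    if best is None or sse < best[0]:
        best = (sse, b)

print("selected structure:", best[1])
```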

  2. [Model-based biofuels system analysis: a review].

    Science.gov (United States)

    Chang, Shiyan; Zhang, Xiliang; Zhao, Lili; Ou, Xunmin

    2011-03-01

    Model-based system analysis is an important tool for evaluating the potential and impacts of biofuels, and for drafting biofuels technology roadmaps and targets. The broad reach of the biofuels supply chain requires that biofuels system analyses span a range of disciplines, including agriculture/forestry, energy, economics, and the environment. Here we reviewed various models developed for or applied to modeling biofuels, and presented a critical analysis of Agriculture/Forestry System Models, Energy System Models, Integrated Assessment Models, Micro-level Cost, Energy and Emission Calculation Models, and Specific Macro-level Biofuel Models. We focused on the models' strengths, weaknesses, and applicability, facilitating the selection of a suitable type of model for specific issues. Such an analysis was a prerequisite for future biofuels system modeling, and represented a valuable resource for researchers and policy makers.

  3. A Tractable Estimate for the Dissipation Range Onset Wavenumber Throughout the Heliosphere

    Science.gov (United States)

    Engelbrecht, N. Eugene; Strauss, R. Du Toit

    2018-04-01

    The modulation of low-energy electrons in the heliosphere is extremely sensitive to the behavior of the dissipation range slab turbulence. The present study derives approximate expressions for the wavenumber at which the dissipation range on the slab turbulence power spectrum commences, by assuming that this onset occurs when dispersive waves propagating parallel to the background magnetic field gyroresonate with thermal plasma particles. This assumption yields results in reasonable agreement with existing spacecraft observations. These expressions are functions of the solar wind proton and electron temperatures, which are here modeled throughout the region where the solar wind is supersonic using a two-component turbulence transport model. The results so acquired are compared with extrapolations of existing models for the dissipation range onset wavenumber, and conclusions are drawn therefrom.

  4. Selective Advantage of Recombination in Evolving Protein Populations:. a Lattice Model Study

    Science.gov (United States)

    Williams, Paul D.; Pollock, David D.; Goldstein, Richard A.

    Recent research has attempted to clarify the contributions of several mutational processes, such as substitutions or homologous recombination. Simplistic, tractable protein models, which determine the compact native structure phenotype from the sequence genotype, are well-suited to such studies. In this paper, we use a lattice-protein model to examine the effects of point mutation and homologous recombination on evolving populations of proteins. We find that while the majority of mutation and recombination events are neutral or deleterious, recombination is far more likely to be beneficial. This results in a faster increase in fitness during evolution, although the final fitness level is not significantly changed. This transient advantage provides an evolutionary advantage to subpopulations that undergo recombination, allowing fixation of recombination to occur in the population.
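
    The qualitative comparison reported above, recombination giving a transient advantage in the rate of fitness increase, can be illustrated with a drastically simplified stand-in: binary sequences with additive fitness rather than the authors' lattice-protein structure model. Population size, sequence length, mutation rate and the crossover scheme are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(6)

L, N, GEN, MU = 50, 200, 150, 0.01   # sequence length, population, generations, per-site mutation rate

def evolve(recombine):
    pop = rng.integers(0, 2, size=(N, L))             # random binary genotypes
    history = []
    for _ in range(GEN):
        fitness = pop.sum(axis=1) / L                 # toy additive fitness: fraction of 1s
        history.append(fitness.mean())
        probs = fitness / fitness.sum()               # fitness-proportional selection
        parents = rng.choice(N, size=(N, 2), p=probs)
        if recombine:                                 # one-point crossover between two parents
            cut = rng.integers(1, L, size=N)
            mask = np.arange(L)[None, :] < cut[:, None]
            offspring = np.where(mask, pop[parents[:, 0]], pop[parents[:, 1]])
        else:                                         # clonal reproduction from one parent
            offspring = pop[parents[:, 0]].copy()
        flips = rng.random((N, L)) < MU               # point mutations
        pop = np.where(flips, 1 - offspring, offspring)
    return history

h_mut = evolve(recombine=False)
h_rec = evolve(recombine=True)
for g in (0, 25, 50, 100, 149):
    print(f"gen {g:3d}:  mutation only = {h_mut[g]:.3f}   with recombination = {h_rec[g]:.3f}")
```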

  5. Evaluating late detection capability against diverse insider adversaries

    International Nuclear Information System (INIS)

    Sicherman, A.

    1987-01-01

    This paper describes a model for evaluating the late (after-the-fact) detection capability of material control and accountability (MC&A) systems against insider theft or diversion of special nuclear material. Potential insider cover-up strategies to defeat activities providing detection (e.g., inventories) are addressed by the model in a tractable manner. For each potential adversary and detection activity, two probabilities are assessed and used to fit the model. The model then computes the probability of detection for activities occurring periodically over time. The model provides insight into MC&A effectiveness and helps identify areas for safeguards improvement. 4 refs., 4 tabs

  6. Pembangunan Model Restaurant Management System

    Directory of Open Access Journals (Sweden)

    Fredy Jingga

    2014-12-01

    Full Text Available The Restaurant Management System (RMS) model design aims to support the restaurant business process, allowing the waitress and the chef to interact with each other without the limitations of paper. The Restaurant Management System model was developed using an agile methodology and built with the PHP programming language, with MySQL as the database management system. This web-based application model enables the waitress and the chef to interact in real time, from the moment the customer's order is taken, through the chef knowing what to cook and the waitress checking whether the order has been fulfilled, to the cashier calculating the bill and the payment received from the customer.

  7. Learning Management Systems on Blended Learning Courses

    DEFF Research Database (Denmark)

    Kuran, Mehmet Şükrü; Pedersen, Jens Myrup; Elsner, Raphael

    2017-01-01

    LMSes, Moodle, Blackboard Learn, Canvas, and Stud.IP with respect to these. We explain how these features were utilized to increase the efficiency, tractability, and quality of experience of the course. We found that an LMS with advanced features such as progress tracking, modular course support...

  8. Semantic models for adaptive interactive systems

    CERN Document Server

    Hussein, Tim; Lukosch, Stephan; Ziegler, Jürgen; Calvary, Gaëlle

    2013-01-01

    Providing insights into methodologies for designing adaptive systems based on semantic data, and introducing semantic models that can be used for building interactive systems, this book showcases many of the applications made possible by the use of semantic models. Ontologies may enhance the functional coverage of an interactive system as well as its visualization and interaction capabilities in various ways. Semantic models can also contribute to bridging gaps; for example, between user models, context-aware interfaces, and model-driven UI generation. There is considerable potential for using

  9. Pressurized water reactor system model for control system design and analysis

    International Nuclear Information System (INIS)

    Cooper, K.F.; Cain, J.T.

    1975-01-01

    Satisfactory operation of present generation Pressurized Water Reactor (PWR) Nuclear Power systems requires that several independent and interactive control systems be designed. Since it is not practical to use an actual PWR system as a design tool, a mathematical model of the system must be developed as a design and analysis tool. The model presented has been developed to be used as an aid in applying optimal control theory to design and implement new control systems for PWR plants. To be applicable, the model developed must represent the PWR system in its normal operating range. For safety analysis the operating conditions of the system are usually abnormal and, therefore, the system modeling requirements are different from those for control system design and analysis

  10. Active Subspaces for Wind Plant Surrogate Modeling

    Energy Technology Data Exchange (ETDEWEB)

    King, Ryan N [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Quick, Julian [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Dykes, Katherine L [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Adcock, Christiane [Massachusetts Institute of Technology

    2018-01-12

    Understanding the uncertainty in wind plant performance is crucial to their cost-effective design and operation. However, conventional approaches to uncertainty quantification (UQ), such as Monte Carlo techniques or surrogate modeling, are often computationally intractable for utility-scale wind plants because of poor convergence rates or the curse of dimensionality. In this paper we demonstrate that wind plant power uncertainty can be well represented with a low-dimensional active subspace, thereby achieving a significant reduction in the dimension of the surrogate modeling problem. We apply the active subspaces technique to UQ of plant power output with respect to uncertainty in turbine axial induction factors, and find a single active subspace direction dominates the sensitivity in power output. When this single active subspace direction is used to construct a quadratic surrogate model, the number of model unknowns can be reduced by up to 3 orders of magnitude without compromising performance on unseen test data. We conclude that the dimension reduction achieved with active subspaces makes surrogate-based UQ approaches tractable for utility-scale wind plants.
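
    The active-subspace recipe summarized above can be sketched on a synthetic function (not the wind plant model): sample gradients, eigendecompose the average gradient outer product, project the inputs onto the dominant eigenvector, and fit a one-dimensional quadratic surrogate. The test function, sample sizes and input ranges are assumed for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-in for a scalar output (e.g. plant power) depending on d inputs that,
# unknown to us, varies only along one hidden direction w_true.
d, M = 10, 400
w_true = rng.standard_normal(d)
w_true /= np.linalg.norm(w_true)

def f(x):                        # x has shape (..., d)
    t = x @ w_true
    return 2.0 + t + 0.8 * t**2

def grad_f(x):                   # analytic gradient, shape (..., d)
    t = x @ w_true
    return (1.0 + 1.6 * t)[..., None] * w_true

# 1) Estimate the active subspace from sampled gradients: C ~ E[grad f grad f^T].
X = rng.uniform(-1.0, 1.0, size=(M, d))
G = grad_f(X)
C = G.T @ G / M
eigval, eigvec = np.linalg.eigh(C)
w_active = eigvec[:, -1]                        # dominant eigenvector (largest eigenvalue)

# 2) Fit a one-dimensional quadratic surrogate in the active variable y = x . w_active.
y = X @ w_active
coeffs = np.polyfit(y, f(X), deg=2)

# 3) Check the surrogate on unseen test points.
X_test = rng.uniform(-1.0, 1.0, size=(200, d))
pred = np.polyval(coeffs, X_test @ w_active)
rmse = np.sqrt(np.mean((pred - f(X_test))**2)) / np.std(f(X_test))
print(f"|<w_active, w_true>| = {abs(w_active @ w_true):.3f}   normalized RMSE = {rmse:.4f}")
```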

  11. System Dynamics Modeling of Multipurpose Reservoir Operation

    Directory of Open Access Journals (Sweden)

    Ebrahim Momeni

    2006-03-01

    Full Text Available System dynamics, a feedback-based, object-oriented simulation approach, not only represents complex dynamic systems in a realistic way but also allows the involvement of end users in model development to increase their confidence in the modeling process. The increased speed of model development, the possibility of group model development, the effective communication of model results, and the trust developed in the model due to user participation are the main strengths of this approach. The ease of model modification in response to changes in the system and the ability to perform sensitivity analysis make this approach more attractive compared with systems analysis techniques for modeling water management systems. In this study, a system dynamics model was developed for the Zayandehrud basin in central Iran. This model contains the river basin, dam reservoir, plains, irrigation systems, and groundwater. The current operation rule is conjunctive use of ground and surface water. The allocation factor for each irrigation system is computed based on the feedback from groundwater storage in its zone. Deficit water is extracted from groundwater. The results show that applying better rules can not only satisfy all demands, such as the Gawkhuni swamp environmental demand, but also prevent groundwater level drawdown in the future.
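
    A minimal sketch of the feedback loop described above, with an assumed allocation rule (more surface water requested when the zone's groundwater storage is low) and assumed inflows, capacities and recharge; it is not the Zayandehrud model itself.

```python
import numpy as np

# Minimal monthly water-balance sketch: the share of demand served from the reservoir
# ("allocation factor") responds to groundwater storage, and any deficit is pumped
# from groundwater. The rule and all numbers are assumptions.
months = 36
inflow = 80 + 40 * np.sin(2 * np.pi * np.arange(months) / 12)   # MCM/month, seasonal
demand = 90.0                                                    # MCM/month

reservoir, res_cap = 300.0, 600.0        # MCM
groundwater, gw_cap = 500.0, 800.0       # MCM

for m in range(months):
    reservoir = min(reservoir + inflow[m], res_cap)

    # Assumed feedback rule: the lower the groundwater storage, the larger the share
    # of demand requested from surface water (to protect the aquifer).
    alloc_factor = 1.0 - 0.5 * groundwater / gw_cap
    surface_request = alloc_factor * demand

    surface_supplied = min(surface_request, reservoir)
    reservoir -= surface_supplied

    deficit = demand - surface_supplied                    # remainder pumped from groundwater
    pumped = min(deficit, groundwater)
    groundwater = min(groundwater - pumped + 5.0, gw_cap)  # +5 MCM/month assumed recharge

    if m % 6 == 0:
        print(f"month {m:2d}: reservoir={reservoir:6.1f}  groundwater={groundwater:6.1f}  "
              f"alloc_factor={alloc_factor:.2f}  unmet={max(deficit - pumped, 0):.1f}")
```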

  12. Modeling Sustainable Food Systems.

    Science.gov (United States)

    Allen, Thomas; Prosperi, Paolo

    2016-05-01

    The processes underlying environmental, economic, and social unsustainability derive in part from the food system. Building sustainable food systems has become a predominating endeavor aiming to redirect our food systems and policies towards better-adjusted goals and improved societal welfare. Food systems are complex social-ecological systems involving multiple interactions between human and natural components. Policy needs to encourage public perception of humanity and nature as interdependent and interacting. The systemic nature of these interdependencies and interactions calls for systems approaches and integrated assessment tools. Identifying and modeling the intrinsic properties of the food system that will ensure its essential outcomes are maintained or enhanced over time and across generations, will help organizations and governmental institutions to track progress towards sustainability, and set policies that encourage positive transformations. This paper proposes a conceptual model that articulates crucial vulnerability and resilience factors to global environmental and socio-economic changes, postulating specific food and nutrition security issues as priority outcomes of food systems. By acknowledging the systemic nature of sustainability, this approach allows consideration of causal factor dynamics. In a stepwise approach, a logical application is schematized for three Mediterranean countries, namely Spain, France, and Italy.

  13. Verifying and Validating Simulation Models

    Energy Technology Data Exchange (ETDEWEB)

    Hemez, Francois M. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-02-23

    This presentation is a high-level discussion of the Verification and Validation (V&V) of computational models. Definitions of V&V are given to emphasize that “validation” is never performed in a vacuum; it accounts, instead, for the current state-of-knowledge in the discipline considered. In particular, comparisons between physical measurements and numerical predictions should account for their respective sources of uncertainty. The differences between error (bias), aleatoric uncertainty (randomness) and epistemic uncertainty (ignorance, lack-of-knowledge) are briefly discussed. Four types of uncertainty in physics and engineering are discussed: 1) experimental variability, 2) variability and randomness, 3) numerical uncertainty and 4) model-form uncertainty. Statistical sampling methods are available to propagate, and analyze, variability and randomness. Numerical uncertainty originates from the truncation error introduced by the discretization of partial differential equations in time and space. Model-form uncertainty is introduced by assumptions often formulated to render a complex problem more tractable and amenable to modeling and simulation. The discussion concludes with high-level guidance to assess the “credibility” of numerical simulations, which stems from the level of rigor with which these various sources of uncertainty are assessed and quantified.

  14. System Advisor Model: Flat Plate Photovoltaic Performance Modeling Validation Report

    Energy Technology Data Exchange (ETDEWEB)

    Freeman, Janine [National Renewable Energy Lab. (NREL), Golden, CO (United States); Whitmore, Jonathan [National Renewable Energy Lab. (NREL), Golden, CO (United States); Kaffine, Leah [National Renewable Energy Lab. (NREL), Golden, CO (United States); Blair, Nate [National Renewable Energy Lab. (NREL), Golden, CO (United States); Dobos, Aron P. [National Renewable Energy Lab. (NREL), Golden, CO (United States)

    2013-12-01

    The System Advisor Model (SAM) is a free software tool that performs detailed analysis of both system performance and system financing for a variety of renewable energy technologies. This report provides detailed validation of the SAM flat plate photovoltaic performance model by comparing SAM-modeled PV system generation data to actual measured production data for nine PV systems ranging from 75 kW to greater than 25 MW in size. The results show strong agreement between SAM predictions and field data, with annualized prediction error below 3% for all fixed tilt cases and below 8% for all one axis tracked cases. The analysis concludes that snow cover and system outages are the primary sources of disagreement, and other deviations resulting from seasonal biases in the irradiation models and one axis tracking issues are discussed in detail.
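
    The annualized prediction error quoted above can be computed from paired time series as in the short sketch below; the hourly arrays here are synthetic placeholders, not the report's field data, and the metric definition (relative difference of the annual energy totals) is an assumption consistent with the text.

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthetic hourly energy series standing in for modeled vs. measured PV output (kWh).
measured = np.clip(rng.normal(40, 25, size=8760), 0, None)
modeled = measured * rng.normal(1.02, 0.05, size=8760)     # assumed ~2% bias, 5% scatter

# Annualized prediction error: relative difference of the annual energy totals.
annualized_error = (modeled.sum() - measured.sum()) / measured.sum()
print(f"annualized prediction error = {annualized_error:+.2%}")
```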

  15. Fuzzy Stochastic Unit Commitment Model with Wind Power and Demand Response under Conditional Value-At-Risk Assessment

    Directory of Open Access Journals (Sweden)

    Jiafu Yin

    2018-02-01

    Full Text Available With the increasing penetration of wind power and demand response integrated into the grid, the combined uncertainties from wind power and demand response have become a challenging concern for system operators. It is necessary to develop an approach to accommodate the combined uncertainties on the source side and the load side. In this paper, fuzzy stochastic conditional value-at-risk criteria are proposed as the risk measure for the combination of both wind power uncertainty and demand response uncertainty. To improve the computational tractability without sacrificing accuracy, fuzzy stochastic chance-constrained goal programming is proposed to transform the fuzzy stochastic conditional value-at-risk into a deterministic equivalent. The operational risk of forecast error under the fuzzy stochastic conditional value-at-risk assessment is represented by the shortage of reserve resources, which can be further divided into the load-shedding risk and the wind curtailment risk. To identify different priority levels for the different objective functions, a three-stage day-ahead unit commitment model is proposed through preemptive goal programming, in which the reliability requirement has priority over economic operation. Finally, a case simulation is performed on the IEEE 39-bus system to verify the effectiveness and efficiency of the proposed model.
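
    A hedged sketch of the risk measure at the core of the model above: scenario-based value-at-risk and conditional value-at-risk of a reserve shortage (the fuzzy-stochastic treatment and the goal-programming stages are not reproduced). The forecast-error spreads, the scheduled reserve and the simple tail-average CVaR estimator are assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

def var_cvar(losses, alpha):
    """Empirical VaR (alpha-quantile) and CVaR (mean of the worst (1 - alpha) share of scenarios)."""
    srt = np.sort(losses)
    k = int(np.floor(alpha * srt.size))
    return srt[k - 1], srt[k:].mean()

# Illustrative scenario set: reserve shortage (MW) caused by combined wind-power and
# demand-response forecast errors; surpluses are clipped to zero shortage.
wind_error = rng.normal(0, 60, size=10000)      # MW, assumed wind forecast error spread
dr_error = rng.normal(0, 25, size=10000)        # MW, assumed demand-response error spread
reserve = 120.0                                 # MW of scheduled reserve, assumed
shortage = np.clip(wind_error + dr_error - reserve, 0, None)

for alpha in (0.90, 0.95, 0.99):
    var, cv = var_cvar(shortage, alpha)
    print(f"alpha={alpha:.2f}:  VaR={var:7.2f} MW   CVaR={cv:7.2f} MW")
```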

  16. A stochastic model for immunological feedback in carcinogenesis analysis and approximations

    CERN Document Server

    Dubin, Neil

    1976-01-01

    Stochastic processes often pose the difficulty that, as soon as a model deviates from the simplest kinds of assumptions, the differential equations obtained for the density and the generating functions become mathematically formidable. Worse still, one is very often led to equations which have no known solution and don't yield to standard analytical methods for differential equations. In the model considered here, one for tumor growth with an immunological response from the normal tissue, a nonlinear term in the transition probability for the death of a tumor cell leads to the above-mentioned complications. Despite the mathematical disadvantages of this nonlinearity, we are able to consider a more sophisticated model biologically. Ultimately, in order to achieve a more realistic representation of a complicated phenomenon, it is necessary to examine mechanisms which allow the model to deviate from the more mathematically tractable linear format. Thus far, stochastic models for tumor growth have almost ex...

  17. The UK Earth System Model project

    Science.gov (United States)

    Tang, Yongming

    2016-04-01

    In this talk we will describe the development and current status of the UK Earth System Model (UKESM). This project is a NERC/Met Office collaboration and has two objectives; to develop and apply a world-leading Earth System Model, and to grow a community of UK Earth System Model scientists. We are building numerical models that include all the key components of the global climate system, and contain the important process interactions between global biogeochemistry, atmospheric chemistry and the physical climate system. UKESM will be used to make key CMIP6 simulations as well as long-time (e.g. millennium) simulations, large ensemble experiments and investigating a range of future carbon emission scenarios.

  18. Model-based version management system framework

    International Nuclear Information System (INIS)

    Mehmood, W.

    2016-01-01

    In this paper we present a model-based version management system. A version management system (VMS), a branch of software configuration management (SCM), aims to provide a controlling mechanism for the evolution of software artifacts created during the software development process. Controlling the evolution requires many activities, such as construction and creation of versions, identification of differences between versions, conflict detection and merging. Traditional VMS systems are file-based and consider software systems as a set of text files. File-based VMS systems are not adequate for performing software configuration management activities, such as version control, on software artifacts produced in earlier phases of the software life cycle. New challenges of model differencing, merging, and evolution control arise when using models as the central artifact. The goal of this work is to present a generic framework for model-based VMS which can be used to overcome the problems of traditional file-based VMS systems and provide model versioning services. (author)

  19. A Tractable Model of the LTE Access Reservation Procedure for Machine-Type Communications

    DEFF Research Database (Denmark)

    Nielsen, Jimmy Jessen; Min Kim, Dong; Madueño, Germán Corrales

    2015-01-01

    A canonical scenario in Machine-Type Communications (MTC) is the one featuring a large number of devices, each of them with sporadic traffic. Hence, the number of served devices in a single LTE cell is not determined by the available aggregate rate, but rather by the limitations of the LTE access...

  20. Modeling and simulation of systems using Matlab and Simulink

    CERN Document Server

    Chaturvedi, Devendra K

    2009-01-01

    Introduction to Systems; System; Classification of Systems; Linear Systems; Time-Varying vs. Time-Invariant Systems; Lumped vs. Distributed Parameter Systems; Continuous- and Discrete-Time Systems; Deterministic vs. Stochastic Systems; Hard and Soft Systems; Analysis of Systems; Synthesis of Systems; Introduction to System Philosophy; System Thinking; Large and Complex Applied System Engineering: A Generic Modeling; Systems Modeling; Introduction; Need of System Modeling; Modeling Methods for Complex Systems; Classification of Models; Characteristics of Models; Modeling; Mathematical Modeling of Physical Systems; Formulation of State Space Model of Systems; Physical Systems Theory; System Components and Interconnections; Computation of Parameters of a Component; Single Port and Multiport Systems; Techniques of System Analysis; Basics of Linear Graph Theoretic Approach; Formulation of System Model for Conceptual System; Formulation of System Model for Physical Systems; Topological Restrictions; Development of State Model of Degenerative System; Solution of Stat...

  1. Neisseria gonorrhoeae co-infection exacerbates vaginal HIV shedding without affecting systemic viral loads in human CD34+ engrafted mice.

    Directory of Open Access Journals (Sweden)

    Stacey X Xu

    Full Text Available HIV synergy with sexually transmitted co-infections is well-documented in the clinic. Co-infection with Neisseria gonorrhoeae, in particular, increases genital HIV shedding and mucosal transmission. However, no animal model of co-infection currently exists to directly explore this relationship or to bridge the gap in understanding between clinical and in vitro studies of this interaction. This study aims to test the feasibility of using a humanized mouse model to overcome this barrier. Combining recent in vivo modelling advancements in both HIV and gonococcal research, we developed a co-infection model by engrafting immunodeficient NSG mice with human CD34+ hematopoietic stem cells to generate humanized mice that permit both systemic HIV infection and genital N. gonorrhoeae infection. Systemic plasma and vaginal lavage titres of HIV were measured in order to assess the impact of gonococcal challenge on viral plasma titres and genital shedding. Engrafted mice showed human CD45+ leukocyte repopulation in blood and mucosal tissues. Systemic HIV challenge resulted in 10⁴-10⁵ copies/mL of viral RNA in blood by week 4 post-infection, as well as vaginal shedding of virus. Subsequent gonococcal challenge resulted in unchanged plasma HIV levels but higher viral shedding in the genital tract, which reflects published clinical observations. Thus, human CD34+ stem cell-transplanted NSG mice represent an experimentally tractable animal model in which to study HIV shedding during gonococcal co-infection, allowing dissection of molecular and immunological interactions between these pathogens, and providing a platform to assess future therapeutics aimed at reducing HIV transmission.

  2. Neisseria gonorrhoeae co-infection exacerbates vaginal HIV shedding without affecting systemic viral loads in human CD34+ engrafted mice.

    Science.gov (United States)

    Xu, Stacey X; Leontyev, Danila; Kaul, Rupert; Gray-Owen, Scott D

    2018-01-01

    HIV synergy with sexually transmitted co-infections is well-documented in the clinic. Co-infection with Neisseria gonorrhoeae, in particular, increases genital HIV shedding and mucosal transmission. However, no animal model of co-infection currently exists to directly explore this relationship or to bridge the gap in understanding between clinical and in vitro studies of this interaction. This study aims to test the feasibility of using a humanized mouse model to overcome this barrier. Combining recent in vivo modelling advancements in both HIV and gonococcal research, we developed a co-infection model by engrafting immunodeficient NSG mice with human CD34+ hematopoietic stem cells to generate humanized mice that permit both systemic HIV infection and genital N. gonorrhoeae infection. Systemic plasma and vaginal lavage titres of HIV were measured in order to assess the impact of gonococcal challenge on viral plasma titres and genital shedding. Engrafted mice showed human CD45+ leukocyte repopulation in blood and mucosal tissues. Systemic HIV challenge resulted in 10⁴-10⁵ copies/mL of viral RNA in blood by week 4 post-infection, as well as vaginal shedding of virus. Subsequent gonococcal challenge resulted in unchanged plasma HIV levels but higher viral shedding in the genital tract, which reflects published clinical observations. Thus, human CD34+ stem cell-transplanted NSG mice represent an experimentally tractable animal model in which to study HIV shedding during gonococcal co-infection, allowing dissection of molecular and immunological interactions between these pathogens, and providing a platform to assess future therapeutics aimed at reducing HIV transmission.

  3. Hierarchical State Machines as Modular Horn Clauses

    Directory of Open Access Journals (Sweden)

    Pierre-Loïc Garoche

    2016-07-01

    Full Text Available In model-based development, embedded systems are modeled using a mix of dataflow formalisms, which capture the flow of computation, and hierarchical state machines, which capture the modal behavior of the system. For safety analysis, existing approaches rely on a compilation scheme that transforms the original model (dataflow and state machines) into a pure dataflow formalism. Such compilation often results in a loss of important structural information that captures the modal behavior of the system. In previous work we developed a compilation technique from a dataflow formalism into modular Horn clauses. In this paper, we present a novel technique that faithfully compiles hierarchical state machines into modular Horn clauses. Our compilation technique preserves the structural and modal behavior of the system, making the safety analysis of such models more tractable.

  4. Fuzzy model-based servo and model following control for nonlinear systems.

    Science.gov (United States)

    Ohtake, Hiroshi; Tanaka, Kazuo; Wang, Hua O

    2009-12-01

    This correspondence presents servo and nonlinear model following controls for a class of nonlinear systems using the Takagi-Sugeno fuzzy model-based control approach. First, the construction method of the augmented fuzzy system for continuous-time nonlinear systems is proposed by differentiating the original nonlinear system. Second, the dynamic fuzzy servo controller and the dynamic fuzzy model following controller, which can make outputs of the nonlinear system converge to target points and to outputs of the reference system, respectively, are introduced. Finally, the servo and model following controller design conditions are given in terms of linear matrix inequalities. Design examples illustrate the utility of this approach.

  5. Distribution system modeling and analysis

    CERN Document Server

    Kersting, William H

    2001-01-01

    For decades, distribution engineers did not have the sophisticated tools developed for analyzing transmission systems-often they had only their instincts. Things have changed, and we now have computer programs that allow engineers to simulate, analyze, and optimize distribution systems. Powerful as these programs are, however, without a real understanding of the operating characteristics of a distribution system, engineers using the programs can easily make serious errors in their designs and operating procedures. Distribution System Modeling and Analysis helps prevent those errors. It gives readers a basic understanding of the modeling and operating characteristics of the major components of a distribution system. One by one, the author develops and analyzes each component as a stand-alone element, then puts them all together to analyze a distribution system comprising the various shunt and series devices for power-flow and short-circuit studies. He includes the derivation of all models and includes many num...

  6. Analysis hierarchical model for discrete event systems

    Science.gov (United States)

    Ciortea, E. M.

    2015-11-01

    This paper presents a hierarchical model based on discrete-event networks for robotic systems. Following the hierarchical approach, the Petri net is analysed as a network spanning the highest conceptual level down to the lowest level of local control, with extended Petri nets used for the modelling and control of complex robotic systems. Such a system is structured, controlled and analysed in this paper using the Visual Object Net++ package, which is relatively simple and easy to use, and the results are shown as representations that are easy to interpret. The hierarchical structure of the robotic system is implemented on computers and analysed using specialized programs. Implementation of the hierarchical discrete-event model as a real-time operating system on a computer network connected via a serial bus is possible, where each computer is dedicated to the local Petri model of one subsystem of the global robotic system. Since Petri models are simple enough to run on general-purpose computers, the analysis, modelling and control of complex manufacturing systems can be achieved using Petri nets, and discrete-event systems are a pragmatic tool for modelling industrial systems; Petri nets are used here because the system of interest is discrete-event in nature. To highlight the auxiliary times, the Petri model of the transport stream is divided into hierarchical levels and its sections are analysed successively. The proposed simulation of the robotic system using timed Petri nets offers the opportunity to view the robot's timing: transport and transmission times obtained by on-the-spot measurement yield graphs showing the average time for the transport activity for each parameter set of finished products individually.

  7. Human performance modeling for system of systems analytics: soldier fatigue.

    Energy Technology Data Exchange (ETDEWEB)

    Lawton, Craig R.; Campbell, James E.; Miller, Dwight Peter

    2005-10-01

    The military has identified Human Performance Modeling (HPM) as a significant requirement and challenge of future systems modeling and analysis initiatives as can be seen in the Department of Defense's (DoD) Defense Modeling and Simulation Office's (DMSO) Master Plan (DoD 5000.59-P 1995). To this goal, the military is currently spending millions of dollars on programs devoted to HPM in various military contexts. Examples include the Human Performance Modeling Integration (HPMI) program within the Air Force Research Laboratory, which focuses on integrating HPMs with constructive models of systems (e.g. cockpit simulations) and the Navy's Human Performance Center (HPC) established in September 2003. Nearly all of these initiatives focus on the interface between humans and a single system. This is insufficient in the era of highly complex network centric SoS. This report presents research and development in the area of HPM in a system-of-systems (SoS). Specifically, this report addresses modeling soldier fatigue and the potential impacts soldier fatigue can have on SoS performance.

  8. Modeling cellular networks in fading environments with dominant specular components

    KAUST Repository

    AlAmmouri, Ahmad

    2016-07-26

    Stochastic geometry (SG) has been widely accepted as a fundamental tool for modeling and analyzing cellular networks. However, the fading models used with SG analysis are mainly confined to simplistic Rayleigh fading, which is extended to Nakagami-m fading in some special cases. Neither the Rayleigh nor the Nakagami-m model accounts for dominant specular components (DSCs), which may appear in realistic fading channels. In this paper, we present a tractable model for cellular networks with the generalized two-ray (GTR) fading channel. The GTR fading explicitly accounts for two DSCs in addition to the diffuse components and offers high flexibility to capture the diverse fading channels that appear in realistic outdoor/indoor wireless communication scenarios. It also encompasses the famous Rayleigh and Rician fading as special cases. To this end, the prominent effect of DSCs is highlighted in terms of average spectral efficiency. © 2016 IEEE.
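
    As a rough illustration of the setting the analytical model describes, the sketch below drops base stations as a Poisson point process, applies a two-specular-component (GTR-style) fading gain, and estimates coverage probability by Monte Carlo; the densities, fading amplitudes and thresholds are assumed values, and this is a numerical cross-check rather than the paper's closed-form analysis.

```python
import numpy as np

rng = np.random.default_rng(0)

def gtr_power_gain(n, v1=1.0, v2=0.7, sigma=0.3):
    """Power gain of two specular components with random phases plus a diffuse part."""
    phi1 = rng.uniform(0.0, 2.0 * np.pi, n)
    phi2 = rng.uniform(0.0, 2.0 * np.pi, n)
    diffuse = rng.normal(0.0, sigma, n) + 1j * rng.normal(0.0, sigma, n)
    h = v1 * np.exp(1j * phi1) + v2 * np.exp(1j * phi2) + diffuse
    return np.abs(h) ** 2

def coverage_probability(density=1e-5, alpha=4.0, theta_db=0.0,
                         radius=3000.0, trials=2000):
    """Monte Carlo SIR coverage for a PPP of base stations (edge effects ignored)."""
    theta = 10.0 ** (theta_db / 10.0)
    covered = 0
    for _ in range(trials):
        n_bs = rng.poisson(density * np.pi * radius ** 2)
        if n_bs == 0:
            continue
        r = radius * np.sqrt(rng.uniform(size=n_bs))   # uniform distances in a disc
        rx = gtr_power_gain(n_bs) * r ** (-alpha)       # faded received powers
        serving = np.argmin(r)                          # nearest-BS association
        interference = rx.sum() - rx[serving]
        sir = np.inf if interference == 0.0 else rx[serving] / interference
        covered += sir > theta
    return covered / trials

print("coverage probability ~", coverage_probability())
```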

  9. Quantum Dynamics in Biological Systems

    Science.gov (United States)

    Shim, Sangwoo

    In the first part of this dissertation, recent efforts to understand quantum mechanical effects in biological systems are discussed. Especially, long-lived quantum coherences observed during the electronic energy transfer process in the Fenna-Matthews-Olson complex at physiological conditions are studied extensively using theories of open quantum systems. In addition to the usual master equation based approaches, the effect of the protein structure is investigated in atomistic detail through the combined application of quantum chemistry and molecular dynamics simulations. To evaluate the thermalized reduced density matrix, a path-integral Monte Carlo method with a novel importance sampling approach is developed for excitons coupled to an arbitrary phonon bath at a finite temperature. In the second part of the thesis, simulations of molecular systems and applications to vibrational spectra are discussed. First, the quantum dynamics of a molecule is simulated by combining semiclassical initial value representation and density functional theory with analytic derivatives. A computationally-tractable approximation to the sum-of-states formalism of Raman spectra is subsequently discussed.

  10. Bayesian inference of chemical kinetic models from proposed reactions

    KAUST Repository

    Galagali, Nikhil

    2015-02-01

    © 2014 Elsevier Ltd. Bayesian inference provides a natural framework for combining experimental data with prior knowledge to develop chemical kinetic models and quantify the associated uncertainties, not only in parameter values but also in model structure. Most existing applications of Bayesian model selection methods to chemical kinetics have been limited to comparisons among a small set of models, however. The significant computational cost of evaluating posterior model probabilities renders traditional Bayesian methods infeasible when the model space becomes large. We present a new framework for tractable Bayesian model inference and uncertainty quantification using a large number of systematically generated model hypotheses. The approach involves imposing point-mass mixture priors over rate constants and exploring the resulting posterior distribution using an adaptive Markov chain Monte Carlo method. The posterior samples are used to identify plausible models, to quantify rate constant uncertainties, and to extract key diagnostic information about model structure-such as the reactions and operating pathways most strongly supported by the data. We provide numerical demonstrations of the proposed framework by inferring kinetic models for catalytic steam and dry reforming of methane using available experimental data.
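
    To make the point-mass mixture idea concrete, the sketch below runs a simple Metropolis sampler over inclusion indicators and log rate constants for a toy linear surrogate of a kinetic model; the pseudo-prior on excluded rates, the data and all parameter values are assumptions for illustration, not the paper's adaptive MCMC implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: a toy linear surrogate in which only "reactions" 0 and 2
# are truly active (their rate constants are nonzero).
X = rng.normal(size=(50, 4))
true_k = np.array([1.5, 0.0, 0.8, 0.0])
y = X @ true_k + rng.normal(0.0, 0.1, size=50)

def log_posterior(z, logk, p_incl=0.5, sigma=0.1):
    """Point-mass mixture: excluded reactions contribute a rate of exactly zero."""
    k = np.where(z, np.exp(logk), 0.0)
    loglik = -0.5 * np.sum((y - X @ k) ** 2) / sigma ** 2
    logprior = np.sum(np.where(z, np.log(p_incl), np.log(1.0 - p_incl)))
    logprior += np.sum(-0.5 * logk ** 2)   # N(0,1) (pseudo-)prior on all log-rates
    return loglik + logprior

z = np.ones(4, dtype=bool)     # inclusion indicators
logk = np.zeros(4)             # log rate constants
lp = log_posterior(z, logk)
inclusion_samples = []
for _ in range(20000):
    z_new, logk_new = z.copy(), logk.copy()
    j = rng.integers(4)
    if rng.random() < 0.5:
        z_new[j] = not z_new[j]                # toggle inclusion (birth/death move)
    else:
        logk_new[j] += rng.normal(0.0, 0.3)    # random-walk move on a log-rate
    lp_new = log_posterior(z_new, logk_new)
    if np.log(rng.random()) < lp_new - lp:     # Metropolis accept/reject
        z, logk, lp = z_new, logk_new, lp_new
    inclusion_samples.append(z.copy())

print("posterior inclusion probabilities:", np.mean(inclusion_samples, axis=0))
```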

  11. Dynamic Model of Kaplan Turbine Regulating System Suitable for Power System Analysis

    OpenAIRE

    Zhao, Jie; Wang, Li; Liu, Dichen; Wang, Jun; Zhao, Yu; Liu, Tian; Wang, Haoyu

    2015-01-01

    Accurate modeling of Kaplan turbine regulating system is of great significance for grid security and stability analysis. In this paper, Kaplan turbine regulating system model is divided into the governor system model, the blade control system model, and the turbine and water diversion system model. The Kaplan turbine has its particularity, and the on-cam relationship between the wicket gate opening and the runner blade angle under a certain water head on the whole range was obtained by high-o...

  12. A System of Systems Approach to Integrating Global Sea Level Change Application Programs

    Science.gov (United States)

    Bambachus, M. J.; Foster, R. S.; Powell, C.; Cole, M.

    2005-12-01

    but different in the details. These differences can discourage the potential for collaboration. Resources that are not inherently shared (or do not spring from a common authority) must be explicitly coordinated to avoid disrupting the collaborative research workflow. This includes tools which make the interaction of systems (and users with systems, and administrators of systems) more conceptual and higher-level than is typically done today. Such tools all appear under the heading of Grid, within a larger idea of metacomputing. We present an approach for successful collaboration and shared use of distributed research resources. The real advances in research throughput that are occurring through the use of large computers are occurring less as a function of progress in a given discrete algorithm and much more as a function of model and data coupling. Complexity normally reduces the ability of the human mind to understand and work with this kind of coupling. Intuitive Grid-based computational resources simultaneously reduce the effect of this complexity on the scientist/decision maker, and increase the ability to rationalize complexity. Research progress can even be achieved before full understanding of complexity has been reached, by modeling and experimenting and providing more data to think about. Analytic engines provided via the Grid can help digest this data and make it tractable through visualization and exploration tools. We present a rationale for increasing research throughput by leveraging more complex model and data interaction.

  13. System Dynamics Modeling for Emergency Operating System Resilience

    Energy Technology Data Exchange (ETDEWEB)

    Eng, Ang Wei; Kim, Jong Hyun [KEPCO International Nuclear Graduate School, Ulsan (Korea, Republic of)

    2014-10-15

    The purpose of this paper is to present a causal model that explains the human-error cause-effect relationships of an emergency operating system (EOS) using a system dynamics (SD) approach. The causal model is further quantified by analysing nuclear power plant incident/accident data in Korea for simulation modelling. An emergency operating system (EOS) is generally defined as a system consisting of personnel, human-machine interfaces and procedures, together with how these components interact and coordinate to respond to an incident or accident. Understanding the behaviour of the EOS, especially personnel behaviour and the factors influencing it during an accident, contributes to human reliability evaluation. Human reliability analysis (HRA) is a method that assesses how human decisions and actions affect system risk and is further used to reduce the probability of human error. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: in HRA, PIFs are assumed to be independent of each other and the relationships between them are not studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency, from both human and organizational aspects. The simulation also provides users with a comprehensive view of how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human error. Through SD simulation, users can easily identify the main contributors to human error and can also predict when and how a human error occurs over time. In future work, the SD model can be expanded to include more low-level factors; the relationships within these low-level factors can be investigated using correlation methods and then included in the model, enabling more detailed study of human-error cause-effect relationships and of EOS behaviour. Another improvement can be made to the EOS factors
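
    For readers unfamiliar with the system dynamics formalism, the sketch below integrates a toy two-stock model (operator stress and accumulated errors) with an Euler scheme; the stocks, rate equations and numbers are invented for illustration and are not taken from the paper's causal model.

```python
import numpy as np

dt, horizon = 0.1, 48.0                        # time step and horizon in hours
stress, errors = 0.2, 0.0                      # stock initial values
stress_hist = []

for time in np.arange(0.0, horizon, dt):
    workload = 1.0 if time < 8.0 else 0.3      # assumed emergency phase, then recovery
    error_rate = 0.05 * stress * workload      # assumed cause-effect link (errors/hour)
    stress += dt * (0.4 * workload - 0.2 * stress)   # inflow from workload, relief outflow
    errors += dt * error_rate                        # accumulate expected errors
    stress_hist.append(stress)

print(f"peak stress = {max(stress_hist):.2f}, "
      f"expected errors over {horizon:.0f} h = {errors:.2f}")
```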

  14. System Dynamics Modeling for Emergency Operating System Resilience

    International Nuclear Information System (INIS)

    Eng, Ang Wei; Kim, Jong Hyun

    2014-01-01

    The purpose of this paper is to present a causal model that explains the human-error cause-effect relationships of an emergency operating system (EOS) using a system dynamics (SD) approach. The causal model is further quantified by analysing nuclear power plant incident/accident data in Korea for simulation modelling. An emergency operating system (EOS) is generally defined as a system consisting of personnel, human-machine interfaces and procedures, together with how these components interact and coordinate to respond to an incident or accident. Understanding the behaviour of the EOS, especially personnel behaviour and the factors influencing it during an accident, contributes to human reliability evaluation. Human reliability analysis (HRA) is a method that assesses how human decisions and actions affect system risk and is further used to reduce the probability of human error. Many HRA methods use performance influencing factors (PIFs) to identify the causes of human errors. However, these methods have several limitations: in HRA, PIFs are assumed to be independent of each other and the relationships between them are not studied. Through SD simulation, users are able to simulate various situations in which a nuclear power plant responds to an emergency, from both human and organizational aspects. The simulation also provides users with a comprehensive view of how to improve safety in plants. This paper presents a causal model that explains the cause-effect relationships of EOS human error. Through SD simulation, users can easily identify the main contributors to human error and can also predict when and how a human error occurs over time. In future work, the SD model can be expanded to include more low-level factors; the relationships within these low-level factors can be investigated using correlation methods and then included in the model, enabling more detailed study of human-error cause-effect relationships and of EOS behaviour. Another improvement can be made to the EOS factors

  15. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  16. A model management system for combat simulation

    OpenAIRE

    Dolk, Daniel R.

    1986-01-01

    The design and implementation of a model management system to support combat modeling is discussed. Structured modeling is introduced as a formalism for representing mathematical models. A relational information resource dictionary system is developed which can accommodate structured models. An implementation is described. Structured modeling is then compared to Jackson System Development (JSD) as a methodology for facilitating discrete event simulation. JSD is currently better at representin...

  17. A Comparative Analysis of the Value of Information in a Continuous Time Market Model with Partial Information: The Cases of Log-Utility and CRRA

    Directory of Open Access Journals (Sweden)

    Zhaojun Yang

    2011-01-01

    Full Text Available We study the question of what value an agent in a generalized Black-Scholes model with partial information attributes to the complementary information. To do this, we study the utility maximization problem from terminal wealth for the two cases of partial information and full information. We assume that the drift term of the risky asset is a dynamic process of general linear type and that the two levels of observation correspond to whether this drift term is observable or not. Applying methods from stochastic filtering theory, we derive an analytically tractable formula for the value of information in the case of logarithmic utility. For the case of constant relative risk aversion (CRRA) we derive a semianalytical formula, which uses as input the numerical solution of a system of ODEs. For both cases we present a comparative analysis.

  18. Dynamic Model of Kaplan Turbine Regulating System Suitable for Power System Analysis

    Directory of Open Access Journals (Sweden)

    Jie Zhao

    2015-01-01

    Full Text Available Accurate modeling of the Kaplan turbine regulating system is of great significance for grid security and stability analysis. In this paper, the Kaplan turbine regulating system model is divided into the governor system model, the blade control system model, and the turbine and water diversion system model. The Kaplan turbine has its particularities, and the on-cam relationship between the wicket gate opening and the runner blade angle under a certain water head over the whole range was obtained by a high-order curve-fitting method. Progressively, the linearized Kaplan turbine model, an improved ideal Kaplan turbine model, and a nonlinear Kaplan turbine model were developed. The nonlinear Kaplan turbine model considers the correction function of the blade angle on the turbine power, thereby improving the simulation accuracy. The model parameters were calculated or obtained by an improved particle swarm optimization (IPSO) algorithm. For the blade control system model, a default blade servomotor time constant of one simplified the modeling and experimental work. Further studies combined with measured test data verified the accuracy of the established model and laid a foundation for further research into the influence of connecting the Kaplan turbine to the grid.
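
    The on-cam relationship can be captured, for one water head, by a high-order polynomial fit of blade angle against gate opening, roughly as in the sketch below; the data points are invented for illustration, not measured turbine values.

```python
import numpy as np

# Hypothetical on-cam data at one water head: gate opening (%) vs. blade angle (deg).
gate_opening = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90, 100.0])
blade_angle = np.array([-8, -5, -1, 4, 9, 14, 18, 22, 25, 27.0])

coeffs = np.polyfit(gate_opening, blade_angle, deg=5)   # high-order curve fit
on_cam = np.poly1d(coeffs)                              # callable on-cam relationship

print(f"blade angle at 65% gate opening ~ {on_cam(65.0):.1f} deg")
```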

  19. The red flour beetle as a model for bacterial oral infections.

    Directory of Open Access Journals (Sweden)

    Barbara Milutinović

    Full Text Available Experimental infection systems are important for studying antagonistic interactions and coevolution between hosts and their pathogens. The red flour beetle Tribolium castaneum and the spore-forming bacterial insect pathogen Bacillus thuringiensis (Bt are widely used and tractable model organisms. However, they have not been employed yet as an efficient experimental system to study host-pathogen interactions. We used a high throughput oral infection protocol to infect T. castaneum insects with coleopteran specific B. thuringiensis bv. tenebrionis (Btt bacteria. We found that larval mortality depends on the dietary spore concentration and on the duration of exposure to the spores. Furthermore, differential susceptibility of larvae from different T. castaneum populations indicates that the host genetic background influences infection success. The recovery of high numbers of infectious spores from the cadavers indicates successful replication of bacteria in the host and suggests that Btt could establish infectious cycles in T. castaneum in nature. We were able to transfer plasmids from Btt to a non-pathogenic but genetically well-characterised Bt strain, which was thereafter able to successfully infect T. castaneum, suggesting that factors residing on the plasmids are important for the virulence of Btt. The availability of a genetically accessible strain will provide an ideal model for more in-depth analyses of pathogenicity factors during oral infections. Combined with the availability of the full genome sequence of T. castaneum, this system will enable analyses of host responses during infection, as well as addressing basic questions concerning host-parasite coevolution.

  20. Emulation of recharge and evapotranspiration processes in shallow groundwater systems

    Science.gov (United States)

    Doble, Rebecca C.; Pickett, Trevor; Crosbie, Russell S.; Morgan, Leanne K.; Turnadge, Chris; Davies, Phil J.

    2017-12-01

    In shallow groundwater systems, recharge and evapotranspiration are highly sensitive to changes in the depth to water table. To effectively model these fluxes, complex functions that include soil and vegetation properties are often required. Model emulation (surrogate modelling or meta-modelling) can provide a means of incorporating detailed conceptualisation of recharge and evapotranspiration processes, while maintaining the numerical tractability and computational performance required for regional scale groundwater models and uncertainty analysis. A method for emulating recharge and evapotranspiration processes in groundwater flow models was developed, and applied to the South East region of South Australia and western Victoria, which is characterised by shallow groundwater, wetlands and coastal lakes. The soil-vegetation-atmosphere transfer (SVAT) model WAVES was used to generate relationships between net recharge (diffuse recharge minus evapotranspiration from groundwater) and depth to water table for different combinations of climate, soil and land cover types. These relationships, which mimicked previously described soil, vegetation and groundwater behaviour, were combined into a net recharge lookup table. The segmented evapotranspiration package in MODFLOW was adapted to select values of net recharge from the lookup table depending on groundwater depth, and the climate, soil and land use characteristics of each cell. The model was found to be numerically robust in steady state testing, had no major increase in run time, and would be more efficient than tightly-coupled modelling approaches. It made reasonable predictions of net recharge and groundwater head compared with remotely sensed estimates of net recharge and a standard MODFLOW comparison model. In particular, the method was better able to predict net recharge and groundwater head in areas with steep hydraulic gradients.
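
    The lookup-table emulation amounts to interpolating pre-computed net recharge curves by depth to water table for each climate/soil/land-cover class, roughly as in the sketch below; the class keys and numbers are illustrative assumptions, not WAVES output.

```python
import numpy as np

# Hypothetical lookup table: depth to water table (m) vs. net recharge (mm/yr)
# per (climate, soil, land cover) class, e.g. pre-computed by a SVAT model.
lookup = {
    ("dry", "sand", "pasture"): (np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0]),
                                 np.array([-900., -400., -50., 30., 60., 65.])),
    ("dry", "clay", "crop"):    (np.array([0.0, 0.5, 1.0, 2.0, 5.0, 10.0]),
                                 np.array([-700., -250., -20., 20., 40., 45.])),
}

def net_recharge(depth, climate, soil, cover):
    """Interpolate net recharge (negative values mean net evapotranspiration loss)."""
    depths, recharge = lookup[(climate, soil, cover)]
    return np.interp(depth, depths, recharge)

print(net_recharge(0.8, "dry", "sand", "pasture"))  # mm/yr at a 0.8 m water table
```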

  1. A Model-Driven Visualization Tool for Use with Model-Based Systems Engineering Projects

    Science.gov (United States)

    Trase, Kathryn; Fink, Eric

    2014-01-01

    Model-Based Systems Engineering (MBSE) promotes increased consistency between a system's design and its design documentation through the use of an object-oriented system model. The creation of this system model facilitates data presentation by providing a mechanism from which information can be extracted by automated manipulation of model content. Existing MBSE tools enable model creation, but are often too complex for the unfamiliar model viewer to easily use. These tools do not yet provide many opportunities for easing into the development and use of a system model when system design documentation already exists. This study creates a Systems Modeling Language (SysML) Document Traceability Framework (SDTF) for integrating design documentation with a system model, and develops an Interactive Visualization Engine for SysML Tools (InVEST), that exports consistent, clear, and concise views of SysML model data. These exported views are each meaningful to a variety of project stakeholders with differing subjects of concern and depth of technical involvement. InVEST allows a model user to generate multiple views and reports from a MBSE model, including wiki pages and interactive visualizations of data. System data can also be filtered to present only the information relevant to the particular stakeholder, resulting in a view that is both consistent with the larger system model and other model views. Viewing the relationships between system artifacts and documentation, and filtering through data to see specialized views improves the value of the system as a whole, as data becomes information

  2. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Science.gov (United States)

    Bonan, Gordon B.; Patton, Edward G.; Harman, Ian N.; Oleson, Keith W.; Finnigan, John J.; Lu, Yaqiong; Burakowski, Elizabeth A.

    2018-04-01

    Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin-Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.

  3. Cyber Physical System Modelling of Distribution Power Systems for Dynamic Demand Response

    Science.gov (United States)

    Chu, Xiaodong; Zhang, Rongxiang; Tang, Maosen; Huang, Haoyi; Zhang, Lei

    2018-01-01

    Dynamic demand response (DDR) is a package of control methods to enhance power system security. A CPS modelling and simulation platform for DDR in distribution power systems is presented in this paper. CPS modelling requirements of distribution power systems are analyzed. A coupled CPS modelling platform is built for assessing DDR in the distribution power system, which seamlessly combines modelling tools for physical power networks and cyber communication networks. Simulation results for the IEEE 13-node test system demonstrate the effectiveness of the modelling and simulation platform.

  4. ECONOMIC MODELING STOCKS CONTROL SYSTEM: SIMULATION MODEL

    OpenAIRE

    Климак, М.С.; Войтко, С.В.

    2016-01-01

    This paper considers theoretical and applied aspects of developing simulation models to predict the optimal development of production systems that create tangible products and services. It is argued that the process of inventory control requires economic and mathematical modeling in view of the complexity of theoretical studies. A simulation model of stock control is presented that allows management decisions to be made in production logistics

  5. Modeling Control Situations in Power System Operations

    DEFF Research Database (Denmark)

    Saleem, Arshad; Lind, Morten; Singh, Sri Niwas

    2010-01-01

    for intelligent operation and control must represent system features, so that information from measurements can be related to possible system states and to control actions. These general modeling requirements are well understood, but it is, in general, difficult to translate them into a model because of the lack...... of explicit principles for model construction. This paper presents a work on using explicit means-ends model based reasoning about complex control situations which results in maintaining consistent perspectives and selecting appropriate control action for goal driven agents. An example of power system......Increased interconnection and loading of the power system along with deregulation has brought new challenges for electric power system operation, control and automation. Traditional power system models used in intelligent operation and control are highly dependent on the task purpose. Thus, a model...

  6. On Verification Modelling of Embedded Systems

    NARCIS (Netherlands)

    Brinksma, Hendrik; Mader, Angelika H.

    Computer-aided verification of embedded systems hinges on the availability of good verification models of the systems at hand. Such models must be much simpler than full design models or specifications to be of practical value, because of the unavoidable combinatorial complexities in the

  7. Modeling and analysis of stochastic systems

    CERN Document Server

    Kulkarni, Vidyadhar G

    2011-01-01

    Based on the author's more than 25 years of teaching experience, Modeling and Analysis of Stochastic Systems, Second Edition covers the most important classes of stochastic processes used in the modeling of diverse systems, from supply chains and inventory systems to genetics and biological systems. For each class of stochastic process, the text includes its definition, characterization, applications, transient and limiting behavior, first passage times, and cost/reward models. Along with reorganizing the material, this edition revises and adds new exercises and examples. New to the second edi

  8. Electric-drive tractability indicator integrated in hybrid electric vehicle tachometer

    Science.gov (United States)

    Tamai, Goro; Zhou, Jing; Weslati, Feisel

    2014-09-02

    An indicator, system and method of indicating electric drive usability in a hybrid electric vehicle. A tachometer is used that includes a display having an all-electric drive portion and a hybrid drive portion. The all-electric drive portion and the hybrid drive portion share a first boundary which indicates a minimum electric drive usability and a beginning of hybrid drive operation of the vehicle. The indicated level of electric drive usability is derived from at least one of a percent battery discharge, a percent maximum torque provided by the electric drive, and a percent electric drive to hybrid drive operating cost for the hybrid electric vehicle.

  9. Modelling of Context: Designing Mobile Systems from Domain-Dependent Models

    DEFF Research Database (Denmark)

    Nielsen, Peter Axel; Stage, Jan

    2009-01-01

    Modelling of domain-dependent aspects is a key prerequisite for the design of software for mobile systems. Most mobile systems include a more or less advanced model of selected aspects of the domain in which they are used. This paper discusses the creation of such a model and its relevance for te...

  10. Model Reduction of Fuzzy Logic Systems

    Directory of Open Access Journals (Sweden)

    Zhandong Yu

    2014-01-01

    Full Text Available This paper deals with the problem of ℒ2-ℒ∞ model reduction for continuous-time nonlinear uncertain systems. The approach of the construction of a reduced-order model is presented for high-order nonlinear uncertain systems described by the T-S fuzzy systems, which not only approximates the original high-order system well with an ℒ2-ℒ∞ error performance level γ but also translates it into a linear lower-dimensional system. Then, the model approximation is converted into a convex optimization problem by using a linearization procedure. Finally, a numerical example is presented to show the effectiveness of the proposed method.

  11. Macrophage–Microbe Interactions: Lessons from the Zebrafish Model

    Directory of Open Access Journals (Sweden)

    Nagisa Yoshida

    2017-12-01

    Full Text Available Macrophages provide front line defense against infections. The study of macrophage–microbe interplay is thus crucial for understanding pathogenesis and infection control. Zebrafish (Danio rerio larvae provide a unique platform to study macrophage–microbe interactions in vivo, from the level of the single cell to the whole organism. Studies using zebrafish allow non-invasive, real-time visualization of macrophage recruitment and phagocytosis. Furthermore, the chemical and genetic tractability of zebrafish has been central to decipher the complex role of macrophages during infection. Here, we discuss the latest developments using zebrafish models of bacterial and fungal infection. We also review novel aspects of macrophage biology revealed by zebrafish, which can potentiate development of new therapeutic strategies for humans.

  12. Distinguishing Environment and System in Coloured Petri Net Models of Reactive Systems

    DEFF Research Database (Denmark)

    Tjell, Simon

    2007-01-01

    This paper introduces and formally defines the environment-and-system-partitioned property for behavioral models of reactive systems expressed in the formal modeling language Coloured Petri Net. The purpose of the formalization is to make it possible to automatically validate any CPN model...... with respect to this property based on structural analysis. A model has the environment-and-system-partitioned property if it is based on a clear division between environment and system. This division is important in many model-driven approaches to software development such as model-based testing and automated...

  13. An Aggregated Optimization Model for Multi-Head SMD Placements

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    2010-01-01

    In this article we propose an aggregate optimization approach by formulating the multi-head SMD placement optimization problem into a mixed integer program (MIP) with the variables based on batches of components. This MIP is tractable and effective in balancing workload among placement heads,

  14. An aggregated optimization model for multi-head SMD placements

    NARCIS (Netherlands)

    Ashayeri, J.; Ma, N.; Sotirov, R.

    2011-01-01

    In this article we propose an aggregate optimization approach by formulating the multi-head SMD placement optimization problem into a mixed integer program (MIP) with the variables based on batches of components. This MIP is tractable and effective in balancing workload among placement heads,

  15. RT-Syn: A real-time software system generator

    Science.gov (United States)

    Setliff, Dorothy E.

    1992-01-01

    This paper presents research into providing highly reusable and maintainable components by using automatic software synthesis techniques. This proposal uses domain knowledge combined with automatic software synthesis techniques to engineer large-scale mission-critical real-time software. The hypothesis centers on a software synthesis architecture that specifically incorporates application-specific (in this case real-time) knowledge. This architecture synthesizes complex system software to meet a behavioral specification and external interaction design constraints. Some examples of these external constraints are communication protocols, precisions, timing, and space limitations. The incorporation of application-specific knowledge facilitates the generation of mathematical software metrics which are used to narrow the design space, thereby making software synthesis tractable. Success has the potential to dramatically reduce mission-critical system life-cycle costs not only by reducing development time, but more importantly facilitating maintenance, modifications, and extensions of complex mission-critical software systems, which are currently dominating life cycle costs.

  16. Modeling the Dynamic Digestive System Microbiome†

    OpenAIRE

    Estes, Anne M.

    2015-01-01

    “Modeling the Dynamic Digestive System Microbiome” is a hands-on activity designed to demonstrate the dynamics of microbiome ecology using dried pasta and beans to model disturbance events in the human digestive system microbiome. This exercise demonstrates how microbiome diversity is influenced by: 1) niche availability and habitat space and 2) a major disturbance event, such as antibiotic use. Students use a pictorial key to examine prepared models of digestive system microbiomes to determi...

  17. Modeling of the DZero data acquisition system

    Energy Technology Data Exchange (ETDEWEB)

    Angstadt, R.; Johnson, M.; Manning, I.L. [Fermi National Accelerator Lab., Batavia, IL (United States); Wightman, J.A. [Texas A and M Univ., College Station, TX (United States). Dept. of Physics; Texas Accelerator Center, The Woodlands, TX (United States)]

    1991-12-01

    A queuing theory model was used in the initial design of the D0 data acquisition system. It was mainly used for the front end electronic systems. Since then the model has been extended to include the entire data path for the tracking system. The tracking system generates the most data so we expect this system to determine the overall transfer rate. The model was developed using both analytical and simulation methods for solving a series of single server queues. We describe the model and the methods used to develop it. We also present results from the original models, updated calculations representing the system as built and comparisons with measurements made with the hardware in place for the cosmic ray test run. 3 refs.
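
    A queuing-theory treatment of such a data path can be sketched as a tandem of single-server M/M/1 queues, where the slowest stage bounds the sustainable event rate and per-stage sojourn times add up to the end-to-end latency; the rates below are illustrative assumptions, not D0 parameters.

```python
# Back-of-envelope sketch of a front-end-to-readout data path as a tandem of
# single-server M/M/1 queues. Event and service rates are assumed values.
event_rate = 800.0                        # events/s offered to the chain
service_rates = [2000.0, 1500.0, 1000.0]  # events/s each stage can process

# For M/M/1 stages in series with Poisson input, each stage i has
# utilization rho_i = lambda/mu_i and mean sojourn time 1/(mu_i - lambda).
assert event_rate < min(service_rates), "chain is unstable"
total_latency = 0.0
for i, mu in enumerate(service_rates):
    rho = event_rate / mu
    sojourn = 1.0 / (mu - event_rate)
    total_latency += sojourn
    print(f"stage {i}: utilization {rho:.0%}, mean sojourn {1e3 * sojourn:.2f} ms")

print(f"end-to-end mean latency {1e3 * total_latency:.2f} ms; "
      f"max sustainable rate ~ {min(service_rates):.0f} events/s")
```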

  18. Coupling population dynamics with earth system models: the POPEM model.

    Science.gov (United States)

    Navarro, Andrés; Moreno, Raúl; Jiménez-Alcázar, Alfonso; Tapiador, Francisco J

    2017-09-16

    Precise modeling of CO2 emissions is important for environmental research. This paper presents a new model of human population dynamics that can be embedded into ESMs (Earth System Models) to improve climate modeling. Through a system dynamics approach, we develop a cohort-component model that successfully simulates historical population dynamics with fine spatial resolution (about 1°×1°). The population projections are used to improve the estimates of CO2 emissions, thus transcending the bulk approach of existing models and allowing more realistic non-linear effects to feature in the simulations. The module, dubbed POPEM (from Population Parameterization for Earth Models), is compared with current emission inventories and validated against UN aggregated data. Finally, it is shown that the module can be used to advance toward fully coupling the social and natural components of the Earth system, an emerging research path for environmental science and pollution research.
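
    The cohort-component mechanics behind such a module can be sketched as ageing the population, applying survival probabilities and adding births at each step; the cohorts and rates below are invented for illustration and are not POPEM values.

```python
import numpy as np

# Hypothetical age structure (millions per broad cohort) and demographic rates.
population = np.array([5.0, 4.5, 4.0, 3.0, 1.5])
survival = np.array([0.995, 0.990, 0.975, 0.900])  # probability of reaching the next cohort
fertility = np.array([0.0, 0.30, 0.05, 0.0, 0.0])  # births per person per projection step

def project(pop, n_steps=1):
    """One cohort-component step: births enter the youngest cohort, survivors age up."""
    for _ in range(n_steps):
        births = np.sum(fertility * pop)
        aged = pop[:-1] * survival
        pop = np.concatenate(([births], aged))
    return pop

print(project(population, n_steps=3))  # projected cohorts after three steps
```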

  19. Modeling, Control and Coordination of Helicopter Systems

    CERN Document Server

    Ren, Beibei; Chen, Chang; Fua, Cheng-Heng; Lee, Tong Heng

    2012-01-01

    Modeling, Control and Coordination of Helicopter Systems provides a comprehensive treatment of helicopter systems, ranging from related nonlinear flight dynamic modeling and stability analysis to advanced control design for single helicopter systems, and also covers issues related to the coordination and formation control of multiple helicopter systems to achieve high performance tasks. Ensuring stability in helicopter flight is a challenging problem for nonlinear control design and development. This book is a valuable reference on modeling, control and coordination of helicopter systems,providing readers with practical solutions for the problems that still plague helicopter system design and implementation. Readers will gain a complete picture of helicopters at the systems level, as well as a better understanding of the technical intricacies involved. This book also: Presents a complete picture of modeling, control and coordination for helicopter systems Provides a modeling platform for a general class of ro...

  20. Identifying and Quantifying Emergent Behavior Through System of Systems Modeling and Simulation

    Science.gov (United States)

    2015-09-01

    This excerpt from the dissertation (Mary Ann Cummings, September 2015; dissertation supervisor: Man-Tak Shing) discusses the similarities and differences between Agent Based Modeling (ABM) and Equation Based Modeling (EBM). Both modeling approaches “simulate a system by ... entities.” For the latter difference, EBM focuses on the system-level observables, while ABM defines behaviors at the individual agent level and observes ...

  1. Bond graph modeling of centrifugal compression systems

    OpenAIRE

    Uddin, Nur; Gravdahl, Jan Tommy

    2015-01-01

    A novel approach to model unsteady fluid dynamics in a compressor network by using a bond graph is presented. The model is intended in particular for compressor control system development. First, we develop a bond graph model of a single compression system. Bond graph modeling offers a different perspective to previous work by modeling the compression system based on energy flow instead of fluid dynamics. Analyzing the bond graph model explains the energy flow during compressor surge. Two pri...

  2. A system-level multiprocessor system-on-chip modeling framework

    DEFF Research Database (Denmark)

    Virk, Kashif Munir; Madsen, Jan

    2004-01-01

    We present a system-level modeling framework to model system-on-chips (SoC) consisting of heterogeneous multiprocessors and network-on-chip communication structures in order to enable the developers of today's SoC designs to take advantage of the flexibility and scalability of network-on-chip and...... SoC design. We show how a hand-held multimedia terminal, consisting of JPEG, MP3 and GSM applications, can be modeled as a multiprocessor SoC in our framework....

  3. Investigating immune system aging: system dynamics and agent-based modeling

    OpenAIRE

    Figueredo, Grazziela; Aickelin, Uwe

    2010-01-01

    System dynamics and agent based simulation models can both be used to model and understand interactions of entities within a population. Our modeling work presented here is concerned with understanding the suitability of the different types of simulation for the immune system aging problems and comparing their results. We are trying to answer questions such as: How fit is the immune system given a certain age? Would an immune boost be of therapeutic value, e.g. to improve the effectiveness...

  4. Symbiosis induces widespread changes in the proteome of the model cnidarian Aiptasia.

    Science.gov (United States)

    Oakley, Clinton A; Ameismeier, Michael F; Peng, Lifeng; Weis, Virginia M; Grossman, Arthur R; Davy, Simon K

    2016-07-01

    Coral reef ecosystems are metabolically founded on the mutualism between corals and photosynthetic dinoflagellates of the genus Symbiodinium. The glass anemone Aiptasia sp. has become a tractable model for this symbiosis, and recent advances in genetic information have enabled the use of mass spectrometry-based proteomics in this model. We utilized label-free liquid chromatography electrospray-ionization tandem mass spectrometry to analyze the effects of symbiosis on the proteomes of symbiotic and aposymbiotic Aiptasia. We identified and obtained relative quantification of more than 3,300 proteins in 1,578 protein clusters, with 81 protein clusters showing significantly different expression between symbiotic states. Symbiotic anemones showed significantly higher expression of proteins involved in lipid storage and transport, nitrogen transport and cycling, intracellular trafficking, endocytosis and inorganic carbon transport. These changes reflect shifts in host metabolism and nutrient reserves due to increased nutritional exchange with the symbionts, as well as mechanisms for supplying inorganic nutrients to the algae. Aposymbiotic anemones exhibited increased expression of multiple systems responsible for mediating reactive oxygen stress, suggesting that the host derives direct or indirect protection from oxidative stress while in symbiosis. Aposymbiotic anemones also increased their expression of an array of proteases and chitinases, indicating a metabolic shift from autotrophy to heterotrophy. These results provide a comprehensive Aiptasia proteome with more direct relative quantification of protein abundance than transcriptomic methods. The extension of "omics" techniques to this model system will allow more powerful studies of coral physiology, ecosystem function, and the effects of biotic and abiotic stress on the coral-dinoflagellate mutualism. © 2015 John Wiley & Sons Ltd.

  5. Compositional Modelling of Stochastic Hybrid Systems

    NARCIS (Netherlands)

    Strubbe, S.N.

    2005-01-01

    In this thesis we present a modelling framework for compositional modelling of stochastic hybrid systems. Hybrid systems consist of a combination of continuous and discrete dynamics. The state space of a hybrid system is hybrid in the sense that it consists of a continuous component and a discrete

  6. System Identification, Environmental Modelling, and Control System Design

    CERN Document Server

    Garnier, Hugues

    2012-01-01

    System Identification, Environmetric Modelling, and Control Systems Design is dedicated to Professor Peter Young on the occasion of his seventieth birthday. Professor Young has been a pioneer in systems and control, and over the past 45 years he has influenced many developments in this field. This volume is comprised of a collection of contributions by leading experts in system identification, time-series analysis, environmetric modelling and control system design – modern research in topics that reflect important areas of interest in Professor Young's research career. Recent theoretical developments in and relevant applications of these areas are explored, treating the various subjects broadly and in depth. The authoritative and up-to-date research presented here will be of interest to academic researchers in control and in disciplines related to environmental research, particularly those concerned with water systems. The tutorial style in which many of the contributions are composed also makes the book suitable as ...

  7. The tracking performance of distributed recoverable flight control systems subject to high intensity radiated fields

    Science.gov (United States)

    Wang, Rui

    It is known that high intensity radiated fields (HIRF) can produce upsets in digital electronics, and thereby degrade the performance of digital flight control systems. Such upsets, either from natural or man-made sources, can change data values on digital buses and memory and affect CPU instruction execution. HIRF environments are also known to trigger common-mode faults, affecting nearly-simultaneously multiple fault containment regions, and hence reducing the benefits of n-modular redundancy and other fault-tolerant computing techniques. Thus, it is important to develop models which describe the integration of the embedded digital system, where the control law is implemented, as well as the dynamics of the closed-loop system. In this dissertation, theoretical tools are presented to analyze the relationship between the design choices for a class of distributed recoverable computing platforms and the tracking performance degradation of a digital flight control system implemented on such a platform while operating in a HIRF environment. Specifically, a tractable hybrid performance model is developed for a digital flight control system implemented on a computing platform inspired largely by the NASA family of fault-tolerant, reconfigurable computer architectures known as SPIDER (scalable processor-independent design for enhanced reliability). The focus will be on the SPIDER implementation, which uses the computer communication system known as ROBUS-2 (reliable optical bus). A physical HIRF experiment was conducted at the NASA Langley Research Center in order to validate the theoretical tracking performance degradation predictions for a distributed Boeing 747 flight control system subject to a HIRF environment. An extrapolation of these results for scenarios that could not be physically tested is also presented.

  8. Modeling virtualized downlink cellular networks with ultra-dense small cells

    KAUST Repository

    Ibrahim, Hazem

    2015-09-11

    The unrelenting increase in the mobile users' populations and traffic demand drive cellular network operators to densify their infrastructure. Network densification increases the spatial frequency reuse efficiency while maintaining the signal-to-interference-plus-noise-ratio (SINR) performance, hence, increases the spatial spectral efficiency and improves the overall network performance. However, control signaling in such dense networks consumes considerable bandwidth and limits the densification gain. Radio access network (RAN) virtualization via control plane (C-plane) and user plane (U-plane) splitting has been recently proposed to lighten the control signaling burden and improve the network throughput. In this paper, we present a tractable analytical model for virtualized downlink cellular networks, using tools from stochastic geometry. We then apply the developed modeling framework to obtain design insights for virtualized RANs and quantify associated performance improvement. © 2015 IEEE.

  9. Consentaneous agent-based and stochastic model of the financial markets.

    Science.gov (United States)

    Gontis, Vygintas; Kononovicius, Aleksejus

    2014-01-01

    We are looking for an agent-based treatment of the financial markets, considering the necessity to build bridges between microscopic (agent-based) and macroscopic (phenomenological) modeling. The acknowledgment that the agent-based modeling framework, which may provide qualitative and quantitative understanding of the financial markets, is very ambiguous emphasizes the exceptional value of well-defined, analytically tractable agent systems. Herding, one of the behavioral peculiarities considered in behavioral finance, is the main property of the agent interactions we deal with in this contribution. Looking for a consentaneous agent-based and macroscopic approach, we combine two origins of noise: an exogenous one, related to the information flow, and an endogenous one, arising from the complex stochastic dynamics of agents. As a result we propose a three-state agent-based herding model of the financial markets. From this agent-based model we derive a set of stochastic differential equations, which describes the underlying macroscopic dynamics of the agent population and log price in the financial markets. The obtained solution is then subjected to the exogenous noise, which shapes instantaneous return fluctuations. We test both Gaussian and q-Gaussian noise as sources of the short-term fluctuations. The resulting model of return in the financial markets, with the same set of parameters, reproduces the empirical probability and spectral densities of absolute return observed on the New York, Warsaw and NASDAQ OMX Vilnius stock exchanges. Our result confirms the prevalent idea in behavioral finance that herding interactions may be dominant over agent rationality and contribute towards bubble formation.
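
    The macroscopic herding dynamics can be illustrated with an Euler-Maruyama simulation of a Kirman-type diffusion for the fraction of agents in one state; the parameter values are assumptions, and this two-state toy is a simplification rather than the paper's three-state model.

```python
import numpy as np

rng = np.random.default_rng(3)
eps1, eps2, h = 0.2, 0.2, 5.0      # idiosyncratic switching rates and herding intensity
dt, n_steps = 1e-3, 100_000

x = 0.5                            # initial fraction of agents in the chosen state
path = np.empty(n_steps)
for i in range(n_steps):
    drift = eps1 * (1.0 - x) - eps2 * x
    diffusion = np.sqrt(2.0 * h * x * (1.0 - x))
    x += drift * dt + diffusion * np.sqrt(dt) * rng.normal()
    x = min(max(x, 0.0), 1.0)      # keep the fraction inside [0, 1]
    path[i] = x

print(f"mean occupancy {path.mean():.2f}, std {path.std():.2f}")
```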

  10. CTBT Integrated Verification System Evaluation Model

    Energy Technology Data Exchange (ETDEWEB)

    Edenburn, M.W.; Bunting, M.L.; Payne, A.C. Jr.

    1997-10-01

    Sandia National Laboratories has developed a computer-based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994 by Sandia's Monitoring Systems and Technology Center and has been funded by the US Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, top-level modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection) and location accuracy of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. This report describes version 1.2 of IVSEM.
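
    The way such a top-level model combines subsystem results can be illustrated with a simple probability-fusion sketch: if each technology detects an event independently with some probability, the integrated probability of detection is one minus the product of the individual miss probabilities. The subsystem values below are made up for illustration and are not IVSEM parameters.

      # Illustrative fusion of independent sensor-technology detection probabilities.
      # The numbers are hypothetical and do not come from IVSEM.
      subsystem_pd = {
          "seismic": 0.80,
          "infrasound": 0.40,
          "radionuclide": 0.55,
          "hydroacoustic": 0.30,
      }

      def integrated_pd(probabilities):
          """P(at least one technology detects), assuming independent detections."""
          miss = 1.0
          for p in probabilities:
              miss *= (1.0 - p)
          return 1.0 - miss

      print(f"integrated P(detection) = {integrated_pd(subsystem_pd.values()):.3f}")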

  11. Integrating systems biology models and biomedical ontologies.

    Science.gov (United States)

    Hoehndorf, Robert; Dumontier, Michel; Gennari, John H; Wimalaratne, Sarala; de Bono, Bernard; Cook, Daniel L; Gkoutos, Georgios V

    2011-08-11

    Systems biology is an approach to biology that emphasizes the structure and dynamic behavior of biological systems and the interactions that occur within them. To succeed, systems biology crucially depends on the accessibility and integration of data across domains and levels of granularity. Biomedical ontologies were developed to facilitate such an integration of data and are often used to annotate biosimulation models in systems biology. We provide a framework to integrate representations of in silico systems biology with those of in vivo biology as described by biomedical ontologies and demonstrate this framework using the Systems Biology Markup Language. We developed the SBML Harvester software that automatically converts annotated SBML models into OWL and we apply our software to those biosimulation models that are contained in the BioModels Database. We utilize the resulting knowledge base for complex biological queries that can bridge levels of granularity, verify models based on the biological phenomenon they represent and provide a means to establish a basic qualitative layer on which to express the semantics of biosimulation models. We establish an information flow between biomedical ontologies and biosimulation models and we demonstrate that the integration of annotated biosimulation models and biomedical ontologies enables the verification of models as well as expressive queries. Establishing a bi-directional information flow between systems biology and biomedical ontologies has the potential to enable large-scale analyses of biological systems that span levels of granularity from molecules to organisms.

  12. Mechatronic Systems Design Methods, Models, Concepts

    CERN Document Server

    Janschek, Klaus

    2012-01-01

    In this textbook, fundamental methods for model-based design of mechatronic systems are presented in a systematic, comprehensive form. The method framework presented here comprises domain-neutral methods for modeling and performance analysis: multi-domain modeling (energy/port/signal-based), simulation (ODE/DAE/hybrid systems), robust control methods, stochastic dynamic analysis, and quantitative evaluation of designs using system budgets. The model framework is composed of analytical dynamic models for important physical and technical domains of realization of mechatronic functions, such as multibody dynamics, digital information processing and electromechanical transducers. Building on the modeling concept of a technology-independent generic mechatronic transducer, concrete formulations for electrostatic, piezoelectric, electromagnetic, and electrodynamic transducers are presented. More than 50 fully worked out design examples clearly illustrate these methods and concepts and enable independent study of th...

  13. A strategic review of electricity systems models

    International Nuclear Information System (INIS)

    Foley, A.M.; O Gallachoir, B.P.; McKeogh, E.J.; Hur, J.; Baldick, R.

    2010-01-01

    Electricity systems models are software tools used to manage electricity demand and the electricity systems, to trade electricity and for generation expansion planning purposes. Various portfolios and scenarios are modelled in order to compare the effects of decision making in policy and on business development plans in electricity systems so as to best advise governments and industry on the least cost economic and environmental approach to electricity supply, while maintaining a secure supply of sufficient quality electricity. The modelling techniques developed to study vertically integrated state monopolies are now applied in liberalised markets where the issues and constraints are more complex. This paper reviews the changing role of electricity systems modelling in a strategic manner, focussing on the modelling response to key developments, the move away from monopoly towards liberalised market regimes and the increasing complexity brought about by policy targets for renewable energy and emissions. The paper provides an overview of electricity systems modelling techniques, discusses a number of key proprietary electricity systems models used in the USA and Europe and provides an information resource to the electricity analyst not currently readily available in the literature on the choice of model to investigate different aspects of the electricity system. (author)

  14. Elevated temperature alters carbon cycling in a model microbial community

    Science.gov (United States)

    Mosier, A.; Li, Z.; Thomas, B. C.; Hettich, R. L.; Pan, C.; Banfield, J. F.

    2013-12-01

    Earth's climate is regulated by biogeochemical carbon exchanges between the land, oceans and atmosphere that are chiefly driven by microorganisms. Microbial communities are therefore indispensable to the study of carbon cycling and its impacts on the global climate system. In spite of the critical role of microbial communities in carbon cycling processes, microbial activity is currently minimally represented or altogether absent from most Earth System Models. Method development and hypothesis-driven experimentation on tractable model ecosystems of reduced complexity, as presented here, are essential for building molecularly resolved, benchmarked carbon-climate models. Here, we use chemoautotrophic acid mine drainage biofilms as a model community to determine how elevated temperature, a key parameter of global climate change, regulates the flow of carbon through microbial-based ecosystems. This study represents the first community proteomics analysis using tandem mass tags (TMT), which enable accurate, precise, and reproducible quantification of proteins. We compare protein expression levels of biofilms growing over a narrow temperature range expected to occur with predicted climate changes. We show that elevated temperature leads to up-regulation of proteins involved in amino acid metabolism and protein modification, and down-regulation of proteins involved in growth and reproduction. Closely related bacterial genotypes differ in their response to temperature: Elevated temperature represses carbon fixation by two Leptospirillum genotypes, whereas carbon fixation is significantly up-regulated at higher temperature by a third closely related genotypic group. Leptospirillum group III bacteria are more susceptible to viral stress at elevated temperature, which may lead to greater carbon turnover in the microbial food web through the release of viral lysate. Overall, this proteogenomics approach revealed the effects of climate change on carbon cycling pathways and other

  15. A systems modelling framework for the design of integrated process control systems

    International Nuclear Information System (INIS)

    Lind, M.

    1983-12-01

    The paper describes a systems modelling methodology, called multilevel flow modelling, or MFM, which aims at describing complex production plants as designs, i.e. as systems having goals, functions and equipment realizing these functions. The modelling concepts are based on thermodynamics and lead to a system description in terms of multiple levels of interrelated mass or energy flow structures. The paper discusses as a basis for the modelling framework the general properties of artifacts or designs, characterizes the complexity of production systems and defines the MFM concepts which allow a consistent specification of goals and functions of these systems as generated in the process design. A modelling example is given and the application of the models for the design of plant control strategies is outlined. (author)

  16. Use of an operational model evaluation system for model intercomparison

    Energy Technology Data Exchange (ETDEWEB)

    Foster, K. T., LLNL

    1998-03-01

    The Atmospheric Release Advisory Capability (ARAC) is a centralized emergency response system used to assess the impact from atmospheric releases of hazardous materials. As part of an on-going development program, new three-dimensional diagnostic windfield and Lagrangian particle dispersion models will soon replace ARAC's current operational windfield and dispersion codes. A prototype model performance evaluation system has been implemented to facilitate the study of the capabilities and performance of early development versions of these new models relative to ARAC's current operational codes. This system provides tools for both objective statistical analysis using common performance measures and for more subjective visualization of the temporal and spatial relationships of model results relative to field measurements. Supporting this system is a database of processed field experiment data (source terms and meteorological and tracer measurements) from over 100 individual tracer releases.

  17. Optimal maintenance of a multi-unit system under dependencies

    Science.gov (United States)

    Sung, Ho-Joon

    The availability, or reliability, of an engineering component greatly influences the operational cost and safety characteristics of a modern system over its life-cycle. Until recently, the reliance on past empirical data has been the industry-standard practice to develop maintenance policies that provide the minimum level of system reliability. Because such empirically-derived policies are vulnerable to unforeseen or fast-changing external factors, recent advancements in the study of maintenance, known as the optimal maintenance problem, have gained considerable interest as a legitimate area of research. An extensive body of applicable work is available, ranging from studies concerned with identifying maintenance policies aimed at providing required system availability at minimum possible cost, to topics on imperfect maintenance of multi-unit systems under dependencies. Nonetheless, these existing mathematical approaches to solve for optimal maintenance policies must be treated with caution when considered for broader applications, as they are accompanied by specialized treatments to ease the mathematical derivation of unknown functions in both the objective function and the constraint for a given optimal maintenance problem. These unknown functions are defined as reliability measures in this thesis, and these measures (e.g., expected number of failures, system renewal cycle, expected system up time, etc.) do not often lend themselves to closed-form formulas. It is thus quite common to impose simplifying assumptions on the input probability distributions of components' lifetime or repair policies. Simplifying the complex structure of a multi-unit system to a k-out-of-n system by neglecting any sources of dependencies is another commonly practiced technique intended to increase the mathematical tractability of a particular model. This dissertation presents a proposal for an alternative methodology to solve optimal maintenance problems by aiming to achieve the

  18. Hierarchy of the low-lying excitations for the (2+1)-dimensional q=3 Potts model in the ordered phase

    Directory of Open Access Journals (Sweden)

    Yoshihiro Nishiyama

    2017-03-01

    Full Text Available The (2+1)-dimensional q=3 Potts model was simulated with the exact diagonalization method. In the ordered phase, the elementary excitations (magnons) are attractive, forming a series of bound states in the low-energy spectrum. We investigate the low-lying spectrum through a dynamical susceptibility, which is readily tractable with the exact diagonalization method via the continued-fraction expansion. As a result, we estimate the series of (scaled) mass gaps, m2,3,4/m1 (m1: single-magnon mass), in proximity to the transition point.

  19. Pilot Signal Design and Direct Ranging Methods for Radio Localization Using OFDM Systems

    DEFF Research Database (Denmark)

    Jing, Lishuai

    Having accurate localization capability is becoming important for existing and future terrestrial wireless communication systems, in particular for orthogonal frequency-division multiplexing (OFDM) systems such as WiMAX, wireless local area networks, long-term evolution (LTE) and its extension LTE-Advanced. To obtain accurate position estimates, not only are advanced estimation algorithms needed but the transmitted signals should also be scrutinized. In this dissertation, we investigate how to design OFDM pilot signals and propose and evaluate high-accuracy ranging techniques with tractable computational ... For scenarios where the number of path components is unknown and these components are not necessarily separable, we propose a direct ranging technique using the received frequency-domain OFDM pilot signals. Compared to conventional (two-step) ranging methods, which estimate intermediate parameters

  20. Modeling soft interface dominated systems

    NARCIS (Netherlands)

    Lamorgese, A.; Mauri, R.; Sagis, L.M.C.

    2017-01-01

    The two main continuum frameworks used for modeling the dynamics of soft multiphase systems are the Gibbs dividing surface model, and the diffuse interface model. In the former the interface is modeled as a two dimensional surface, and excess properties such as a surface density, or surface energy

  1. Data management system performance modeling

    Science.gov (United States)

    Kiser, Larry M.

    1993-01-01

    This paper discusses analytical techniques that have been used to gain a better understanding of the Space Station Freedom's (SSF's) Data Management System (DMS). The DMS is a complex, distributed, real-time computer system that has been redesigned numerous times. The implications of these redesigns have not been fully analyzed. This paper discusses the advantages and disadvantages for static analytical techniques such as Rate Monotonic Analysis (RMA) and also provides a rationale for dynamic modeling. Factors such as system architecture, processor utilization, bus architecture, queuing, etc. are well suited for analysis with a dynamic model. The significance of performance measures for a real-time system are discussed.

  2. Simplified local density model for adsorption over large pressure ranges

    International Nuclear Information System (INIS)

    Rangarajan, B.; Lira, C.T.; Subramanian, R.

    1995-01-01

    Physical adsorption of high-pressure fluids onto solids is of interest in the transportation and storage of fuel and radioactive gases; the separation and purification of lower hydrocarbons; solid-phase extractions; adsorbent regenerations using supercritical fluids; supercritical fluid chromatography; and critical point drying. A mean-field model is developed that superimposes the fluid-solid potential on a fluid equation of state to predict adsorption on a flat wall from vapor, liquid, and supercritical phases. A van der Waals-type equation of state is used to represent the fluid phase, and is simplified with a local density approximation for calculating the configurational energy of the inhomogeneous fluid. The simplified local density approximation makes the model tractable for routine calculations over wide pressure ranges. The model is capable of prediction of Type 2 and 3 subcritical isotherms for adsorption on a flat wall, and shows the characteristic cusplike behavior and crossovers seen experimentally near the fluid critical point
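
    The spirit of a local density approximation can be conveyed with a deliberately stripped-down sketch: in the low-pressure (ideal-gas) limit, the local density near the wall follows a Boltzmann factor of the fluid-solid potential, and the excess adsorption is the integral of the density enhancement. The 10-4 wall potential and parameter values below are generic illustrations, not the van der Waals-based model of the paper.

      import numpy as np

      def excess_adsorption(rho_bulk=10.0, T=300.0, eps_fs=400.0, sigma=0.35,
                            z_max=5.0, n_z=2000):
          """Ideal-gas-limit local density sketch.

          rho_bulk : bulk density (arbitrary illustrative units)
          eps_fs   : fluid-solid well depth, expressed in Kelvin
          sigma    : length scale of the wall potential (nm)
          Returns the excess adsorption per unit wall area (same length unit as z).
          """
          z = np.linspace(0.6 * sigma, z_max, n_z)          # distance from the wall
          # Generic 10-4 wall potential (in Kelvin), purely illustrative:
          u = 2.0 * np.pi * eps_fs * (0.4 * (sigma / z) ** 10 - (sigma / z) ** 4)
          rho_local = rho_bulk * np.exp(-u / T)             # Boltzmann local density
          return np.trapz(rho_local - rho_bulk, z)

      print(excess_adsorption())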

  3. Modelling Pressurized Water Reactor cores in terms of porous media

    International Nuclear Information System (INIS)

    Ricciardi, G.; Collard, B.; Ricciardi, G.; Bellizzi, S.; Cochelin, B.

    2009-01-01

    The aim of this study is to develop a tractable model of a nuclear reactor core taking the complexity of the structure (including its nonlinear behaviour) and fluid flow coupling into account. The mechanical behaviour modelling includes the dynamics of both the fuel assemblies and the fluid. Each rod bundle is modelled in the form of a deformable porous medium; then, the velocity field of the fluid and the displacement field of the structure are defined over the whole domain. The fluid and the structure are first modelled separately, before being linked together. The equations of motion for the structure are obtained using a Lagrangian approach and, to be able to link up the fluid and the structure, the equations of motion for the fluid are obtained using an arbitrary Lagrangian Eulerian approach. The finite element method is applied to spatially discretize the equations. Simulations are performed to analyse the effects of the characteristics of the fluid and of the structure. Finally, the model is validated with a test involving two fuel assemblies, showing good agreement with the experimental data. (authors)

  4. Compiling models into real-time systems

    International Nuclear Information System (INIS)

    Dormoy, J.L.; Cherriaux, F.; Ancelin, J.

    1992-08-01

    This paper presents an architecture for building real-time systems from models, and model-compiling techniques. This has been applied for building a real-time model-based monitoring system for nuclear plants, called KSE, which is currently being used in two plants in France. We describe how we used various artificial intelligence techniques for building it: a model-based approach, a logical model of its operation, a declarative implementation of these models, and original knowledge-compiling techniques for automatically generating the real-time expert system from those models. Some of those techniques have just been borrowed from the literature, but we had to modify or invent other techniques which simply did not exist. We also discuss two important problems, which are often underestimated in the artificial intelligence literature: size, and errors. Our architecture, which could be used in other applications, combines the advantages of the model-based approach with the efficiency requirements of real-time applications, while in general model-based approaches present serious drawbacks on this point

  6. Importance-truncated shell model for multi-shell valence spaces

    Energy Technology Data Exchange (ETDEWEB)

    Stumpf, Christina; Vobig, Klaus; Roth, Robert [Institut fuer Kernphysik, TU Darmstadt (Germany)

    2016-07-01

    The valence-space shell model is one of the work horses in nuclear structure theory. In traditional applications, shell-model calculations are carried out using effective interactions constructed in a phenomenological framework for rather small valence spaces, typically spanned by one major shell. We improve on this traditional approach addressing two main aspects. First, we use new effective interactions derived in an ab initio approach and, thus, establish a connection to the underlying nuclear interaction providing access to single- and multi-shell valence spaces. Second, we extend the shell model to larger valence spaces by applying an importance-truncation scheme based on a perturbative importance measure. In this way, we reduce the model space to the relevant basis states for the description of a few target eigenstates and solve the eigenvalue problem in this physics-driven truncated model space. In particular multi-shell valence spaces are not tractable otherwise. We combine the importance-truncated shell model with refined extrapolation schemes to approximately recover the exact result. We present first results obtained in the importance-truncated shell model with the newly derived ab initio effective interactions for multi-shell valence spaces, e.g., the sdpf shell.
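
    The importance-truncation idea itself is easy to sketch for a generic sparse eigenvalue problem: starting from a reference state, basis states are kept only if a first-order perturbative importance measure exceeds a threshold, and the Hamiltonian is diagonalized in that reduced space. The random matrix below stands in for a shell-model Hamiltonian; the threshold and dimensions are arbitrary.

      import numpy as np

      def importance_truncated_ground_state(H, ref=0, kappa_min=1e-3):
          """Keep basis states whose first-order importance measure
          kappa_nu = |<nu|H|ref>| / |E_ref - E_nu| exceeds kappa_min,
          then diagonalize H in the truncated space."""
          e_ref = H[ref, ref]
          kappa = np.abs(H[:, ref]) / np.maximum(np.abs(e_ref - np.diag(H)), 1e-12)
          kappa[ref] = np.inf                      # always keep the reference state
          keep = np.where(kappa > kappa_min)[0]
          h_trunc = H[np.ix_(keep, keep)]
          return np.linalg.eigvalsh(h_trunc)[0], keep.size

      rng = np.random.default_rng(0)
      dim = 400
      H = rng.normal(scale=0.05, size=(dim, dim))
      H = 0.5 * (H + H.T) + np.diag(np.linspace(0.0, 10.0, dim))  # toy Hamiltonian

      exact = np.linalg.eigvalsh(H)[0]
      approx, kept = importance_truncated_ground_state(H, kappa_min=5e-3)
      print(f"exact {exact:.4f}  truncated {approx:.4f}  kept {kept}/{dim} states")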

  7. Modeling Adaptive Behavior for Systems Design

    DEFF Research Database (Denmark)

    Rasmussen, Jens

    1994-01-01

    Field studies in modern work systems and analysis of recent major accidents have pointed to a need for better models of the adaptive behavior of individuals and organizations operating in a dynamic and highly competitive environment. The paper presents a discussion of some key characteristics ... the basic difference between the models of system functions used in engineering and design and those evolving from basic research within the various academic disciplines, and finally 3.) the models and methods required for closed-loop, feedback system design.

  8. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Science.gov (United States)

    Said, Nadia; Engelhart, Michael; Kirches, Christian; Körkel, Stefan; Holt, Daniel V

    2016-01-01

    Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.
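
    The general workflow, identifying model parameters by optimizing a simulation-based objective, can be sketched with a derivative-free optimizer. The two-parameter toy model and target statistic below are placeholders and have nothing to do with the actual ACT-R implementation or the Sugar Factory task.

      import numpy as np
      from scipy.optimize import minimize

      def simulate_performance(decay, noise, n_runs=200, seed=0):
          """Placeholder stochastic 'cognitive model': returns a mean score that
          depends smoothly on two hypothetical parameters plus simulation noise."""
          rng = np.random.default_rng(seed)
          base = 40.0 - 25.0 * (decay - 0.5) ** 2 - 15.0 * (noise - 0.25) ** 2
          return base + rng.normal(0.0, 0.5, n_runs).mean()

      def objective(params, target=39.0):
          decay, noise = params
          return (simulate_performance(decay, noise) - target) ** 2

      # Derivative-free search, appropriate for noisy simulation-based objectives.
      result = minimize(objective, x0=[0.3, 0.4], method="Nelder-Mead")
      print(result.x, result.fun)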

  9. Applying Mathematical Optimization Methods to an ACT-R Instance-Based Learning Model.

    Directory of Open Access Journals (Sweden)

    Nadia Said

    Full Text Available Computational models of cognition provide an interface to connect advanced mathematical tools and methods to empirically supported theories of behavior in psychology, cognitive science, and neuroscience. In this article, we consider a computational model of instance-based learning, implemented in the ACT-R cognitive architecture. We propose an approach for obtaining mathematical reformulations of such cognitive models that improve their computational tractability. For the well-established Sugar Factory dynamic decision making task, we conduct a simulation study to analyze central model parameters. We show how mathematical optimization techniques can be applied to efficiently identify optimal parameter values with respect to different optimization goals. Beyond these methodological contributions, our analysis reveals the sensitivity of this particular task with respect to initial settings and yields new insights into how average human performance deviates from potential optimal performance. We conclude by discussing possible extensions of our approach as well as future steps towards applying more powerful derivative-based optimization methods.

  10. REQUIREMENTS FOR SYSTEMS DEVELOPMENT LIFE CYCLE MODELS FOR LARGE-SCALE DEFENSE SYSTEMS

    Directory of Open Access Journals (Sweden)

    Kadir Alpaslan DEMIR

    2015-10-01

    Full Text Available Large-scale defense system projects are strategic for maintaining and increasing the national defense capability. Therefore, governments spend billions of dollars in the acquisition and development of large-scale defense systems. The scale of defense systems is always increasing and the costs to build them are skyrocketing. Today, defense systems are software intensive and they are either a system of systems or a part of it. Historically, the project performances observed in the development of these systems have been significantly poor when compared to other types of projects. It is obvious that the currently used systems development life cycle models are insufficient to address today’s challenges of building these systems. Using a systems development life cycle model that is specifically designed for large-scale defense system developments and is effective in dealing with today’s and near-future challenges will help to improve project performances. The first step in the development of a large-scale defense systems development life cycle model is the identification of requirements for such a model. This paper contributes to the body of literature in the field by providing a set of requirements for system development life cycle models for large-scale defense systems. Furthermore, a research agenda is proposed.

  11. Identifying optimal models to represent biochemical systems.

    Directory of Open Access Journals (Sweden)

    Mochamad Apri

    Full Text Available Biochemical systems involving a high number of components with intricate interactions often lead to complex models containing a large number of parameters. Although a large model could describe in detail the mechanisms that underlie the system, its very large size may hinder us in understanding the key elements of the system. Also in terms of parameter identification, large models are often problematic. Therefore, a reduced model may be preferred to represent the system. Yet, in order to efficaciously replace the large model, the reduced model should have the same ability as the large model to produce reliable predictions for a broad set of testable experimental conditions. We present a novel method to extract an "optimal" reduced model from a large model to represent biochemical systems by combining a reduction method and a model discrimination method. The former assures that the reduced model contains only those components that are important to produce the dynamics observed in given experiments, whereas the latter ensures that the reduced model gives a good prediction for any feasible experimental conditions that are relevant to answer questions at hand. These two techniques are applied iteratively. The method reveals the biological core of a model mathematically, indicating the processes that are likely to be responsible for certain behavior. We demonstrate the algorithm on two realistic model examples. We show that in both cases the core is substantially smaller than the full model.

  12. Biomolecular electrostatics—I want your solvation (model)

    International Nuclear Information System (INIS)

    Bardhan, Jaydeep P

    2012-01-01

    We review the mathematical and computational foundations for implicit-solvent models in theoretical chemistry and molecular biophysics. These models are valuable theoretical tools for studying the influence of a solvent, often water or an aqueous electrolyte, on a molecular solute such as a protein. Detailed chemical and physical aspects of implicit-solvent models have been addressed in numerous exhaustive reviews, as have numerical algorithms for simulating the most popular models. This work highlights several important conceptual developments, focusing on selected works that spotlight the need for research at the intersections between chemical, biological, mathematical, and computational physics. To introduce the field to computational scientists, we begin by describing the basic theoretical ideas of implicit-solvent models and numerical implementations. We then address practical and philosophical challenges in parameterization, and major advances that speed up calculations (covering continuum theories based on Poisson as well as faster approximate theories such as generalized Born). We briefly describe the main shortcomings of existing models, and survey promising developments that deliver improved realism in a computationally tractable way, i.e. without increasing simulation time significantly. The review concludes with a discussion of ongoing modeling challenges and relevant trends in high-performance computing and computational science. (topical review)

  13. A verification environment for bigraphs

    DEFF Research Database (Denmark)

    Perrone, Gian David; Debois, Søren; Hildebrandt, Thomas

    2013-01-01

    We present the BigMC tool for bigraphical reactive systems that may be instantiated as a verification tool for any formalism or domain-specific modelling language encoded as a bigraphical reactive system. We introduce the syntax and use of BigMC, and exemplify its use with two small examples: a textbook “philosophers” example, and an example motivated by a ubiquitous computing application. We give a tractable heuristic with which to approximate interference between reaction rules, and prove this analysis to be safe. We provide a mechanism for state reachability checking of bigraphical reactive

  14. Revised sequence components power system models for unbalanced power system studies

    Energy Technology Data Exchange (ETDEWEB)

    Abdel-Akher, M. [Tunku Abdul Rahman Univ., Kuala Lumpur (Malaysia); Nor, K.-M. [Univ. of Technology Malaysia, Johor (Malaysia); Rashid, A.H.A. [Univ. of Malaya, Kuala Lumpur (Malaysia)

    2007-07-01

    The principal method of analysis using positive, negative, and zero-sequence networks has been used to examine the balanced power system under both balanced and unbalanced loading conditions. The significant advantage of the sequence networks is that they become entirely uncoupled in the case of balanced three-phase power systems. The uncoupled sequence networks can then be solved in an independent way, such as in fault calculation programs. However, the hypothesis of balanced power systems cannot be considered in many cases due to untransposed transmission lines; multiphase line segments in a distribution power system; or transformer phase shifts which cannot be incorporated in the existing models. Revised sequence-decoupled power system models for analyzing unbalanced power systems based on symmetrical networks were presented in this paper. These models included synchronous machines, transformers, transmission lines, and voltage regulators. The models were derived from their counterparts in the phase-coordinates frame of reference. In these models, the three sequence networks are fully decoupled while retaining three-phase features such as transformer phase shifts and transmission line coupling. The proposed models were used to develop an unbalanced power-flow program for analyzing both balanced and unbalanced networks. The power-flow solution was identical to results obtained from a full phase-coordinates three-phase power-flow program. 11 refs., 3 tabs.
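
    The sequence-network idea rests on the symmetrical-components (Fortescue) transformation, which the short sketch below applies to an unbalanced set of phase voltages; the numerical phasors are arbitrary examples, not data from the paper.

      import numpy as np

      a = np.exp(2j * np.pi / 3)                      # 120-degree rotation operator

      # Fortescue transformation: phase quantities -> (zero, positive, negative) sequence.
      A_inv = (1.0 / 3.0) * np.array([[1, 1, 1],
                                      [1, a, a**2],
                                      [1, a**2, a]])

      def to_sequence(v_abc):
          """Convert a 3-vector of phase phasors to zero/positive/negative sequence."""
          return A_inv @ np.asarray(v_abc, dtype=complex)

      # Arbitrary unbalanced phase voltages (per unit).
      v_abc = [1.0,
               0.9 * np.exp(-1j * np.deg2rad(125)),
               1.1 * np.exp(1j * np.deg2rad(118))]
      v0, v1, v2 = to_sequence(v_abc)
      print(f"|V0|={abs(v0):.3f}  |V1|={abs(v1):.3f}  |V2|={abs(v2):.3f}")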

  15. Model Reduction of Hybrid Systems

    DEFF Research Database (Denmark)

    Shaker, Hamid Reza

    High-Technological solutions of today are characterized by complex dynamical models. A lot of these models have inherent hybrid/switching structure. Hybrid/switched systems are powerful models for distributed embedded systems design where discrete controls are applied to continuous processes ... gramians. Generalized gramians are the solutions to the observability and controllability Lyapunov inequalities. In the first framework the projection matrices are found based on the common generalized gramians. This framework preserves the stability of the original switched system for all switching ... is guaranteed to be preserved for arbitrary switching signal. To compute the common generalized gramians, linear matrix inequalities (LMI’s) need to be solved. These LMI’s are not always feasible. In order to solve the problem of conservatism, the second framework is presented. In this method the projection ...

  16. Discrete modelling of drapery systems

    Science.gov (United States)

    Thoeni, Klaus; Giacomini, Anna

    2016-04-01

    Drapery systems are an efficient and cost-effective measure in preventing and controlling rockfall hazards on rock slopes. The simplest form consists of a row of ground anchors along the top of the slope connected to a horizontal support cable from which a wire mesh is suspended down the face of the slope. Such systems are generally referred to as simple or unsecured draperies (Badger and Duffy 2012). Variations such as secured draperies, where a pattern of ground anchors is incorporated within the field of the mesh, and hybrid systems, where the upper part of an unsecured drapery is elevated to intercept rockfalls originating upslope of the installation, are becoming more and more popular. This work presents a discrete element framework for simulation of unsecured drapery systems and its variations. The numerical model is based on the classical discrete element method (DEM) and implemented into the open-source framework YADE (Šmilauer et al., 2010). The model takes all relevant interactions between block, drapery and slope into account (Thoeni et al., 2014) and was calibrated and validated based on full-scale experiments (Giacomini et al., 2012). The block is modelled as a rigid clump made of spherical particles which allows any shape to be approximated. The drapery is represented by a set of spherical particles with remote interactions. The behaviour of the remote interactions is governed by the constitutive behaviour of the wire and generally corresponds to a piecewise linear stress-strain relation (Thoeni et al., 2013). The same concept is used to model wire ropes. The rock slope is represented by rigid triangular elements where material properties (e.g., normal coefficient of restitution, friction angle) are assigned to each triangle. The capabilities of the developed model to simulate drapery systems and estimate the residual hazard involved with such systems are shown. References Badger, T.C., Duffy, J.D. (2012) Drapery systems. In: Turner, A.K., Schuster R

  17. An Empirical Model for Energy Storage Systems

    Energy Technology Data Exchange (ETDEWEB)

    Rosewater, David Martin [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Scott, Paul [TransPower, Poway, CA (United States)

    2016-03-17

    Improved models of energy storage systems are needed to enable the electric grid’s adaptation to increasing penetration of renewables. This paper develops a generic empirical model of energy storage system performance agnostic of type, chemistry, design or scale. Parameters for this model are calculated using test procedures adapted from the US DOE Protocol for Uniformly Measuring and Expressing the Performance of Energy Storage. We then assess the accuracy of this model for predicting the performance of the TransPower GridSaver – a 1 MW rated lithium-ion battery system that underwent laboratory experimentation and analysis. The developed model predicts a range of energy storage system performance based on the uncertainty of estimated model parameters. Finally, this model can be used to better understand the integration and coordination of energy storage on the electric grid.
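
    A generic empirical storage model of this kind typically reduces to a state-of-charge recursion with charge/discharge efficiencies and power limits. The sketch below shows that structure with made-up parameter values; it is not the parameterization fitted to the GridSaver.

      def simulate_soc(power_setpoints_kw, dt_h=0.25, capacity_kwh=1000.0,
                       eta_charge=0.95, eta_discharge=0.94,
                       p_max_kw=1000.0, soc0=0.5):
          """State-of-charge recursion for a generic empirical storage model.

          power_setpoints_kw : requested AC power each step (+charge / -discharge).
          Returns the SOC trajectory (fraction of capacity), clipped to [0, 1].
          """
          soc = soc0
          trajectory = []
          for p in power_setpoints_kw:
              p = max(min(p, p_max_kw), -p_max_kw)          # respect the power rating
              if p >= 0:                                    # charging
                  delta_kwh = eta_charge * p * dt_h
              else:                                         # discharging
                  delta_kwh = p * dt_h / eta_discharge
              soc = min(max(soc + delta_kwh / capacity_kwh, 0.0), 1.0)
              trajectory.append(soc)
          return trajectory

      print(simulate_soc([500, 500, -800, -800, 0])[-1])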

  18. A coordination model for ultra-large scale systems of systems

    Directory of Open Access Journals (Sweden)

    Manuela L. Bujorianu

    2013-11-01

    Full Text Available Ultra-large multi-agent systems are becoming increasingly popular due to the rapid decline of individual production costs and the potential for speeding up the solving of complex problems. Examples include nano-robots, systems of nano-satellites for dangerous meteorite detection, or cultures of stem cells for organ regeneration or nerve repair. The topics associated with these systems are usually dealt with within the theories of intelligent swarms or biologically inspired computation systems. Stochastic models play an important role and they are based on various formulations of statistical mechanics. In these cases, the main assumption is that the swarm elements have a simple behaviour and that some average properties can be deduced for the entire swarm. In contrast, complex systems in areas like aeronautics are formed by elements with sophisticated behaviour, which are even autonomous. In situations like this, a new approach to swarm coordination is necessary. We present a stochastic model where the swarm elements are communicating autonomous systems, the coordination is separated from the component autonomous activity, and the entire swarm can be abstracted away as a piecewise deterministic Markov process, which constitutes one of the most popular models in stochastic control. Keywords: ultra large multi-agent systems, system of systems, autonomous systems, stochastic hybrid systems.

  19. Tractable dynamic global games and applications

    Czech Academy of Sciences Publication Activity Database

    Mathevet, L.; Steiner, Jakub

    2013-01-01

    Roč. 148, č. 6 (2013), s. 2583-2619 ISSN 0022-0531 R&D Projects: GA ČR(CZ) GA13-34759S Grant - others:UK(CZ) UNCE 204005/2012 Institutional support: PRVOUK-P23 Keywords : global games * dynamic game * coordination Subject RIV: AH - Economics Impact factor: 0.919, year: 2013

  20. Tractable dynamic global games and applications

    Czech Academy of Sciences Publication Activity Database

    Mathevet, L.; Steiner, Jakub

    2013-01-01

    Roč. 148, č. 6 (2013), s. 2583-2619 ISSN 0022-0531 Institutional support: RVO:67985998 Keywords : global games * dynamic game * coordination Subject RIV: AH - Economics Impact factor: 0.919, year: 2013

  1. Data retrieval systems and models of information situations

    International Nuclear Information System (INIS)

    Jankowski, L.

    1984-01-01

    Demands placed on data retrieval systems and their basic parameters are given. According to the stage of development of data collection and processing, data retrieval systems may be divided into systems for the simple recording and provision of data, systems for recording and providing data with integrated statistical functions, and logical information systems. The structure of the said information systems is characterized, as are the methods of processing and representing facts. The notion of ''artificial intelligence'' in the development of logical information systems is defined. The structure of representing knowledge in diverse forms of the model is decisive in logical information systems related to nuclear research. The main model elements are the characteristics of the data, the forms of representation, and the program. Depending on the structure of the data, the structure of the preparatory and transformation algorithms, and the aim of the system, it is possible to classify data retrieval systems related to nuclear research and technology into five logical information models: linear, identification, advisory, theory-experiment models and problem-solving models. The characteristics of the said models are given, along with examples of data retrieval systems for the individual models. (E.S.)

  2. Review of the systems biology of the immune system using agent-based models.

    Science.gov (United States)

    Shinde, Snehal B; Kurhekar, Manish P

    2018-06-01

    The immune system is an inherent protection system in vertebrate animals including human beings that exhibit properties such as self-organisation, self-adaptation, learning, and recognition. It interacts with the other allied systems such as the gut and lymph nodes. There is a need for immune system modelling to know about its complex internal mechanism, to understand how it maintains the homoeostasis, and how it interacts with the other systems. There are two types of modelling techniques used for the simulation of features of the immune system: equation-based modelling (EBM) and agent-based modelling. Owing to certain shortcomings of the EBM, agent-based modelling techniques are being widely used. This technique provides various predictions for disease causes and treatments; it also helps in hypothesis verification. This study presents a review of agent-based modelling of the immune system and its interactions with the gut and lymph nodes. The authors also review the modelling of immune system interactions during tuberculosis and cancer. In addition, they also outline the future research directions for the immune system simulation through agent-based techniques such as the effects of stress on the immune system, evolution of the immune system, and identification of the parameters for a healthy immune system.

  3. Model-based decoding, information estimation, and change-point detection techniques for multineuron spike trains.

    Science.gov (United States)

    Pillow, Jonathan W; Ahmadian, Yashar; Paninski, Liam

    2011-01-01

    One of the central problems in systems neuroscience is to understand how neural spike trains convey sensory information. Decoding methods, which provide an explicit means for reading out the information contained in neural spike responses, offer a powerful set of tools for studying the neural coding problem. Here we develop several decoding methods based on point-process neural encoding models, or forward models that predict spike responses to stimuli. These models have concave log-likelihood functions, which allow efficient maximum-likelihood model fitting and stimulus decoding. We present several applications of the encoding model framework to the problem of decoding stimulus information from population spike responses: (1) a tractable algorithm for computing the maximum a posteriori (MAP) estimate of the stimulus, the most probable stimulus to have generated an observed single- or multiple-neuron spike train response, given some prior distribution over the stimulus; (2) a gaussian approximation to the posterior stimulus distribution that can be used to quantify the fidelity with which various stimulus features are encoded; (3) an efficient method for estimating the mutual information between the stimulus and the spike trains emitted by a neural population; and (4) a framework for the detection of change-point times (the time at which the stimulus undergoes a change in mean or variance) by marginalizing over the posterior stimulus distribution. We provide several examples illustrating the performance of these estimators with simulated and real neural data.
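
    The MAP decoding step has a simple structure when the encoding model is a Poisson GLM with an exponential nonlinearity and the stimulus prior is Gaussian: the log posterior is concave in the stimulus, so a generic convex optimizer recovers the MAP estimate. The sketch below is a minimal single-trial illustration with made-up filters and an instantaneous encoding model; it is not the authors' code.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      T, N = 80, 12                      # time bins, neurons
      dt = 0.01
      w = rng.normal(0.0, 1.0, N)        # hypothetical per-neuron stimulus weights
      b = np.full(N, np.log(20.0))       # baseline log-rates (about 20 Hz)
      x_true = rng.normal(0.0, 1.0, T)   # stimulus drawn from the N(0, I) prior

      eta_true = np.outer(w, x_true) + b[:, None]
      y = rng.poisson(np.exp(eta_true) * dt)          # (N, T) observed spike counts

      def neg_log_posterior(x):
          """Poisson GLM log-likelihood (exponential nonlinearity) + Gaussian prior."""
          eta = np.outer(w, x) + b[:, None]
          loglik = np.sum(y * eta - np.exp(eta) * dt)
          logprior = -0.5 * np.sum(x ** 2)
          return -(loglik + logprior)

      x_map = minimize(neg_log_posterior, np.zeros(T), method="L-BFGS-B").x
      print("correlation with true stimulus:", round(np.corrcoef(x_map, x_true)[0, 1], 3))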

  4. Stochastic Models of Polymer Systems

    Science.gov (United States)

    2016-01-01

    Final Report: Stochastic Models of Polymer Systems. Princeton University, Princeton, NJ, 14 March 2014. (Report cover-page metadata only; no abstract text is available for this record.)

  5. Modelling and Control of Thermal System

    Directory of Open Access Journals (Sweden)

    Vratislav Hladky

    2014-01-01

    Full Text Available The work presented here deals with the modelling of thermal processes in a thermal system consisting of direct and indirect heat exchangers. The overall thermal properties of the medium and of the system itself, such as liquid mixing or heat capacity, are briefly analysed, and the features relevant for modelling are assessed and, where justified, simplified or neglected. Special attention is given to modelling the heat losses radiated into the surroundings through the walls, as they are the main issue in the effective operation of heat systems. The final part of the paper proposes several ways of controlling the temperatures of the individual parts as well as the temperature of the whole system, considering heating elements or flow rate as actuators.

  6. The radionuclide migration model in river system

    International Nuclear Information System (INIS)

    Zhukova, O.M.; Shiryaeva, N.M.; Myshkina, M.K.; Shagalova, Eh.D.; Denisova, V.V.; Skurat, V.V.

    2001-01-01

    A model of radionuclide migration in a river system is proposed, based on the principle of the compartmental model under hydraulically stationary and chemically equilibrium conditions of radionuclide interaction in the water-dredge and water-sediments systems. Different conditions of radioactive contamination entering the river system were considered. The model was verified against radiation monitoring data for the Iput' river.

  7. Model predictive control based on reduced order models applied to belt conveyor system.

    Science.gov (United States)

    Chen, Wei; Li, Xin

    2016-11-01

    In the paper, a model predictive controller based on a reduced-order model is proposed to control a belt conveyor system, which is a complex electro-mechanical system with a long visco-elastic body. Firstly, in order to design a low-order controller, the balanced truncation method is used for belt conveyor model reduction. Secondly, an MPC algorithm based on the reduced-order model for the belt conveyor system is presented. Because of the error bound between the full-order model and the reduced-order model, two Kalman state estimators are applied in the control scheme to achieve better system performance. Finally, the simulation experiments show that the balanced truncation method can significantly reduce the model order with high accuracy and that model predictive control based on the reduced-order model performs well in controlling the belt conveyor system. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
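
    Balanced truncation itself is straightforward to sketch for a stable linear state-space model: solve the two Lyapunov equations for the controllability and observability Gramians, balance them with the square-root algorithm, and keep the states with the largest Hankel singular values. The random stable system below is only a stand-in for a belt-conveyor model.

      import numpy as np
      from scipy.linalg import solve_continuous_lyapunov, cholesky, svd

      def balanced_truncation(A, B, C, r):
          """Square-root balanced truncation of a stable LTI system to order r."""
          Wc = solve_continuous_lyapunov(A, -B @ B.T)      # controllability Gramian
          Wo = solve_continuous_lyapunov(A.T, -C.T @ C)    # observability Gramian
          S = cholesky(Wc, lower=True)                     # Wc = S S^T
          R = cholesky(Wo, lower=True)                     # Wo = R R^T
          U, hsv, Vt = svd(R.T @ S)                        # Hankel singular values
          T = S @ Vt.T @ np.diag(hsv ** -0.5)              # balancing transformation
          Tinv = np.diag(hsv ** -0.5) @ U.T @ R.T
          Ar = (Tinv @ A @ T)[:r, :r]
          Br = (Tinv @ B)[:r, :]
          Cr = (C @ T)[:, :r]
          return Ar, Br, Cr, hsv

      # Stand-in stable system (not a conveyor model): random A shifted to be Hurwitz.
      rng = np.random.default_rng(0)
      n = 10
      A = rng.normal(size=(n, n))
      A = A - (np.max(np.real(np.linalg.eigvals(A))) + 1.0) * np.eye(n)
      B = rng.normal(size=(n, 1))
      C = rng.normal(size=(1, n))

      Ar, Br, Cr, hsv = balanced_truncation(A, B, C, r=3)
      print("Hankel singular values:", np.round(hsv, 4))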

  8. Feature-based component model for design of embedded systems

    Science.gov (United States)

    Zha, Xuan Fang; Sriram, Ram D.

    2004-11-01

    An embedded system is a hybrid of hardware and software, which combines software's flexibility and hardware real-time performance. Embedded systems can be considered as assemblies of hardware and software components. An Open Embedded System Model (OESM) is currently being developed at NIST to provide a standard representation and exchange protocol for embedded systems and system-level design, simulation, and testing information. This paper proposes an approach to representing an embedded system feature-based model in OESM, i.e., Open Embedded System Feature Model (OESFM), addressing models of embedded system artifacts, embedded system components, embedded system features, and embedded system configuration/assembly. The approach provides an object-oriented UML (Unified Modeling Language) representation for the embedded system feature model and defines an extension to the NIST Core Product Model. The model provides a feature-based component framework allowing the designer to develop a virtual embedded system prototype through assembling virtual components. The framework not only provides a formal precise model of the embedded system prototype but also offers the possibility of designing variation of prototypes whose members are derived by changing certain virtual components with different features. A case study example is discussed to illustrate the embedded system model.

  9. Modeling canopy-induced turbulence in the Earth system: a unified parameterization of turbulent exchange within plant canopies and the roughness sublayer (CLM-ml v0)

    Directory of Open Access Journals (Sweden)

    G. B. Bonan

    2018-04-01

    Full Text Available Land surface models used in climate models neglect the roughness sublayer and parameterize within-canopy turbulence in an ad hoc manner. We implemented a roughness sublayer turbulence parameterization in a multilayer canopy model (CLM-ml v0) to test if this theory provides a tractable parameterization extending from the ground through the canopy and the roughness sublayer. We compared the canopy model with the Community Land Model (CLM4.5) at seven forest, two grassland, and three cropland AmeriFlux sites over a range of canopy heights, leaf area indexes, and climates. CLM4.5 has pronounced biases during summer months at forest sites in midday latent heat flux, sensible heat flux, gross primary production, nighttime friction velocity, and the radiative temperature diurnal range. The new canopy model reduces these biases by introducing new physics. Advances in modeling stomatal conductance and canopy physiology beyond what is in CLM4.5 substantially improve model performance at the forest sites. The signature of the roughness sublayer is most evident in nighttime friction velocity and the diurnal cycle of radiative temperature, but is also seen in sensible heat flux. Within-canopy temperature profiles are markedly different compared with profiles obtained using Monin–Obukhov similarity theory, and the roughness sublayer produces cooler daytime and warmer nighttime temperatures. The herbaceous sites also show model improvements, but the improvements are related less systematically to the roughness sublayer parameterization in these canopies. The multilayer canopy with the roughness sublayer turbulence improves simulations compared with CLM4.5 while also advancing the theoretical basis for surface flux parameterizations.
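
    One classical ingredient of within-canopy turbulence schemes, which roughness-sublayer parameterizations refine, is an exponential attenuation of wind speed with depth into the canopy. The sketch below shows only that textbook (Inoue/Cionco-type) profile, with an arbitrary attenuation coefficient; it is not the CLM-ml parameterization.

      import numpy as np

      def canopy_wind_profile(u_top, canopy_height, z, attenuation=2.5):
          """Classical exponential within-canopy wind profile:
          u(z) = u_h * exp(-a * (1 - z/h)) for 0 <= z <= h, where u_h is the wind
          speed at canopy top and a is an empirical attenuation coefficient."""
          z = np.asarray(z, dtype=float)
          return u_top * np.exp(-attenuation * (1.0 - z / canopy_height))

      h = 20.0                                     # canopy height (m), illustrative
      levels = np.linspace(0.0, h, 6)
      for z, u in zip(levels, canopy_wind_profile(2.0, h, levels)):
          print(f"z = {z:5.1f} m   u = {u:.2f} m/s")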

  10. An expert system for dispersion model interpretation

    International Nuclear Information System (INIS)

    Skyllingstad, E.D.; Ramsdell, J.V.

    1988-10-01

    A prototype expert system designed to diagnose dispersion model uncertainty is described in this paper with application to a puff transport model. The system obtains qualitative information from the model user and through an expert-derived knowledge base, performs a rating of the current simulation. These results can then be used in combination with dispersion model output for deciding appropriate evacuation measures. Ultimately, the goal of this work is to develop an expert system that may be operated accurately by an individual uneducated in meteorology or dispersion modeling. 5 refs., 3 figs

  11. Modelling energy systems for developing countries

    International Nuclear Information System (INIS)

    Urban, F.; Benders, R.M.J.; Moll, H.C.

    2007-01-01

    Developing countries' energy use is rapidly increasing, which affects global climate change and global and regional energy settings. Energy models are helpful for exploring the future of developing and industrialised countries. However, energy systems of developing countries differ from those of industrialised countries, which has consequences for energy modelling. New requirements need to be met by present-day energy models to adequately explore the future of developing countries' energy systems. This paper aims to assess if the main characteristics of developing countries are adequately incorporated in present-day energy models. We first discuss these main characteristics, focusing particularly on developing Asia, and then present a model comparison of 12 selected energy models to test their suitability for developing countries. We conclude that many models are biased towards industrialised countries, neglecting main characteristics of developing countries, e.g. the informal economy, supply shortages, poor performance of the power sector, structural economic change, electrification, traditional bio-fuels, urban-rural divide. To more adequately address the energy systems of developing countries, energy models have to be adjusted and new models have to be built. We therefore indicate how to improve energy models for increasing their suitability for developing countries and give advice on modelling techniques and data requirements

  12. Modelling the Replication Management in Information Systems

    Directory of Open Access Journals (Sweden)

    Cezar TOADER

    2017-01-01

    Full Text Available In the modern economy, the benefits of Web services are significant because they facilitate the automation of activities in the framework of Internet-distributed businesses, as well as cooperation between organizations through interconnection processes running in their computer systems. This paper presents the development stages of a model for a reliable information system. It describes the communication between the processes within the distributed system, based on message exchange, and also presents the problem of distributed agreement among processes. A list of objectives for fault-tolerant systems is defined and a framework model for distributed systems is proposed. This framework makes a distinction between management operations and execution operations. The proposed model promotes the use of a central process especially designed for the coordination and control of the other application processes. The execution phases and the protocols for the management and the execution components are presented. This model of a reliable system could be a foundation for an entire class of distributed system models based on the management of the replication process.

  13. Systems Engineering Model for ART Energy Conversion

    Energy Technology Data Exchange (ETDEWEB)

    Mendez Cruz, Carmen Margarita [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rochau, Gary E. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Wilson, Mollye C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-02-01

    The near-term objective of the EC team is to establish an operating, commercially scalable Recompression Closed Brayton Cycle (RCBC) to be constructed for the NE - STEP demonstration system (demo) with the lowest risk possible. A systems engineering approach is recommended to ensure adequate requirements gathering, documentation, and modeling that supports technology development relevant to advanced reactors while supporting crosscut interests in potential applications. A holistic systems engineering model was designed for the ART Energy Conversion program by leveraging Concurrent Engineering, Balance Model, Simplified V Model, and Project Management principles. The resulting model supports the identification and validation of lifecycle Brayton systems requirements, and allows designers to detail system-specific components relevant to the current stage in the lifecycle, while maintaining a holistic view of all system elements.

  14. System dynamics modelling of situation awareness

    CSIR Research Space (South Africa)

    Oosthuizen, R

    2015-11-01

    Full Text Available ... The feedback loops and delays in the Command and Control system also contribute to the complex dynamic behavior. This paper will build on existing situation awareness models to develop a System Dynamics model to support a qualitative investigation through...

  15. Modeling the Dynamic Digestive System Microbiome

    Directory of Open Access Journals (Sweden)

    Anne M. Estes

    2015-08-01

    Full Text Available “Modeling the Dynamic Digestive System Microbiome” is a hands-on activity designed to demonstrate the dynamics of microbiome ecology using dried pasta and beans to model disturbance events in the human digestive system microbiome. This exercise demonstrates how microbiome diversity is influenced by: (1) niche availability and habitat space and (2) a major disturbance event, such as antibiotic use. Students use a pictorial key to examine prepared models of digestive system microbiomes to determine what the person with the microbiome “ate.” Students then model the effect of taking antibiotics by removing certain “antibiotic sensitive” pasta. Finally, they add in “environmental microbes” or “native microbes” to recolonize the digestive system, determine how resilient their model microbiome community is to disturbance, and discuss the implications. Throughout the exercise, students discuss differences in the habitat space available and microbiome community diversity. This exercise can be modified to discuss changes in the microbiome due to diet shifts and the emergence of antibiotic resistance in more depth.

  16. Stirling Engine Dynamic System Modeling

    Science.gov (United States)

    Nakis, Christopher G.

    2004-01-01

    The Thermo-Mechanical Systems Branch at the Glenn Research Center focuses a large amount of time on Stirling engines. These engines will be used on missions where solar power is inefficient, especially in deep space. I work with Tim Regan and Ed Lewandowski, who are currently developing and validating a mathematical model for the Stirling engines. This model incorporates all aspects of the system, including mechanical, electrical, and thermodynamic components. Modeling is done through Simplorer, a program capable of running simulations of the model. Once created and then proven to be accurate, a model is used for developing new ideas for engine design. My largest specific project involves varying key parameters in the model and quantifying the results. This can all be done relatively trouble-free with the help of Simplorer. Once the model is complete, Simplorer will do all the necessary calculations. The more complicated part of this project is determining which parameters to vary. Finding key parameters depends on the potential for a value to be independently altered in the design. For example, a change in one dimension may lead to a proportional change in the rest of the model, and no real progress is made. Also, the ability of a changed value to have a substantial impact on the outputs of the system is important. Results will be condensed into graphs and tables with the purpose of better communication and understanding of the data. With the changing of these parameters, a more optimal design can be created without having to purchase or build any models. Also, hours and hours of results can be simulated in minutes. In the long run, using mathematical models can save time and money. Along with this project, I have many other smaller assignments throughout the summer. My main goal is to assist in the processes of model development, validation, and testing.

  17. Executive Information Systems' Multidimensional Models

    Directory of Open Access Journals (Sweden)

    2007-01-01

    Full Text Available Executive Information Systems are designed to improve the quality of strategic-level management in an organization through a new type of technology and several techniques for extracting, transforming, processing, integrating and presenting data in such a way that the organizational knowledge filters can easily associate with this data and turn it into information for the organization. These technologies are known as Business Intelligence Tools. But in order to build analytic reports for Executive Information Systems (EIS) in an organization we need to design a multidimensional model based on the business model of the organization. This paper presents some multidimensional models that can be used in EIS development and proposes a new model that is suitable for strategic business requests.

  18. A Framework for Concrete Reputation-Systems with Applications to History-Based Access Control

    DEFF Research Database (Denmark)

    Krukow, Karl Kristian; Nielsen, Mogens; Sassone, Vladimiro

    2005-01-01

    -based trust-management systems provide no formal security-guarantees. In this extended abstract, we describe a mathematical framework for a class of simple reputation-based systems. In these systems, decisions about interaction are taken based on policies that are exact requirements on agents' past histories....... We present a basic declarative language, based on pure-past linear temporal logic, intended for writing simple policies. While the basic language is reasonably expressive (encoding e.g. Chinese Wall policies) we show how one can extend it with quantification and parameterized events. This allows us...... to encode other policies known from the literature, e.g., `one-out-of-k'. The problem of checking a history with respect to a policy is efficient for the basic language, and tractable for the quantified language when policies do not have too many variables....
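
    To make the policy-checking idea concrete, here is a small sketch of evaluating a pure-past temporal policy over a finite interaction history. The operator set ("once", "historically", boolean connectives) and the tuple-based syntax are simplifications assumed for illustration; they are not the declarative language defined in the paper.

```python
# A toy evaluator for pure-past policies over a finite interaction history
# (a list of sets of event names). The tuple syntax and the operator set
# ("once", "historically", "not", "and") are simplifications for illustration,
# not the declarative language defined in the paper.

def holds(policy, history, i=None):
    """Evaluate a policy at position i of the history (default: the last event)."""
    if i is None:
        i = len(history) - 1
    kind = policy[0]
    if kind == "event":            # atomic proposition: event occurred at position i
        return policy[1] in history[i]
    if kind == "not":
        return not holds(policy[1], history, i)
    if kind == "and":
        return holds(policy[1], history, i) and holds(policy[2], history, i)
    if kind == "once":             # true at some past position j <= i
        return any(holds(policy[1], history, j) for j in range(i + 1))
    if kind == "historically":     # true at every past position j <= i
        return all(holds(policy[1], history, j) for j in range(i + 1))
    raise ValueError(f"unknown operator: {kind}")

# Example policy: "no payment was ever missed" and "a contract was signed at some point".
policy = ("and",
          ("historically", ("not", ("event", "missed_payment"))),
          ("once", ("event", "signed_contract")))

history = [{"signed_contract"}, {"paid"}, {"paid"}]
print(holds(policy, history))      # True: interaction allowed under this policy
```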

  19. Balmorel open source energy system model

    DEFF Research Database (Denmark)

    Wiese, Frauke; Bramstoft, Rasmus; Koduvere, Hardi

    2018-01-01

    As the world progresses towards a cleaner energy future with more variable renewable energy sources, energy system models are required to deal with new challenges. This article describes design, development and applications of the open source energy system model Balmorel, which is a result...... of a long and fruitful cooperation between public and private institutions within energy system research and analysis. The purpose of the article is to explain the modelling approach, to highlight strengths and challenges of the chosen approach, to create awareness about the possible applications...... of Balmorel as well as to inspire to new model developments and encourage new users to join the community. Some of the key strengths of the model are the flexible handling of the time and space dimensions and the combination of operation and investment optimisation. Its open source character enables diverse...

  20. Numerical Modeling of Microelectrochemical Systems

    DEFF Research Database (Denmark)

    Adesokan, Bolaji James

    incorporates the finite size of ionic species in the transport equation. The model presents a more appropriate boundary conditions which describe the modified Butler-Volmer reaction kinetics and account for the surface capacitance of the thin electric double layer. We also have found analytical solution...... at the electrode in a microelectrochemical system. In our analysis, we account for the finite size properties of ions in the mass and the charge transport of ionic species in an electrochemical system. This term characterizes the saturation of the ionic species close to the electrode surface. We then analyse......The PhD dissertation is concerned with mathematical modeling and simulation of electrochemical systems. The first three chapters of the thesis consist of the introductory part, the model development chapter and the chapter on the summary of the main results. The remaining three chapters report...

  1. Mechanistic spatio-temporal point process models for marked point processes, with a view to forest stand data

    DEFF Research Database (Denmark)

    Møller, Jesper; Ghorbani, Mohammad; Rubak, Ege Holger

    We show how a spatial point process, where to each point there is associated a random quantitative mark, can be identified with a spatio-temporal point process specified by a conditional intensity function. For instance, the points can be tree locations, the marks can express the size of trees......, and the conditional intensity function can describe the distribution of a tree (i.e., its location and size) conditionally on the larger trees. This enables us to construct parametric statistical models which are easily interpretable and where likelihood-based inference is tractable. In particular, we consider maximum...

  2. Multiplicative Attribute Graph Model of Real-World Networks

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Myunghwan [Stanford Univ., CA (United States); Leskovec, Jure [Stanford Univ., CA (United States)

    2010-10-20

    Large scale real-world network data, such as social networks, Internet and Web graphs, is ubiquitous in a variety of scientific domains. The study of such social and information networks commonly finds patterns and explains their emergence through tractable models. In most networks, especially in social networks, nodes also have a rich set of attributes (e.g., age, gender) associated with them. However, most of the existing network models focus only on modeling the network structure while ignoring the features of nodes in the network. Here we present a class of network models that we refer to as the Multiplicative Attribute Graphs (MAG), which naturally captures the interactions between the network structure and node attributes. We consider a model where each node has a vector of categorical features associated with it. The probability of an edge between a pair of nodes then depends on the product of individual attribute-attribute similarities. The model lends itself to mathematical analysis as well as fitting to real data. We derive thresholds for the connectivity and the emergence of the giant connected component, and show that the model gives rise to graphs with a constant diameter. Moreover, we analyze the degree distribution to show that the model can produce networks with either log-normal or power-law degree distribution depending on certain conditions.
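
    The generative rule described in the abstract (edge probability as a product of per-attribute affinities) can be sketched directly. The affinity function and parameter values below are illustrative assumptions rather than the paper's fitted values.

```python
import itertools
import random

# A sketch of the MAG generative rule: each node carries a vector of binary
# attributes, and the probability of an edge is the product of per-attribute
# affinities. The homophily affinity values below are illustrative assumptions,
# not fitted values from the paper.

def sample_mag(n_nodes, n_attrs, affinity, seed=0):
    rng = random.Random(seed)
    attrs = [[rng.randint(0, 1) for _ in range(n_attrs)] for _ in range(n_nodes)]
    edges = []
    for u, v in itertools.combinations(range(n_nodes), 2):
        p = 1.0
        for k in range(n_attrs):
            p *= affinity(k, attrs[u][k], attrs[v][k])   # multiplicative structure
        if rng.random() < p:
            edges.append((u, v))
    return attrs, edges

def homophily(k, a, b):
    # same attribute value -> higher link probability (illustrative numbers)
    return 0.8 if a == b else 0.3

attrs, edges = sample_mag(n_nodes=100, n_attrs=4, affinity=homophily)
print(len(edges), "edges sampled")
```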

  3. Agent-based Modeling Methodology for Analyzing Weapons Systems

    Science.gov (United States)

    2015-03-26

    ...technique involve model structure, system representation and the degree of validity, coupled with the simplicity, of the overall model. ABM is best suited... system representation of the air combat system. We feel that a simulation model that combines ABM with equation-based representation of weapons and... (Thesis: Agent-Based Modeling Methodology for Analyzing Weapons Systems, by Casey D. Connors, Major, USA)

  4. A control strategy for electro-magneto-mechanical system based on virtual system model

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Hong Youn; Heo, Hoon [Dept. of Control and Instrumentation Engineering, Korea University, Seoul (Korea, Republic of); Yun, Young Min [TPC Mechatronics Co., Ltd., Incheon (Korea, Republic of)

    2016-09-15

    A new approach to the control of electro-magneto-mechanical systems is proposed in this paper. Conventionally, these systems are controlled based on the Maxwell system model via an on-off or PID control technique, which displays acceptable performance in the low frequency region, but not in the high frequency region where position control performance is greatly degraded. In order to improve the performance, a newly developed virtual 2nd order system modeling technique, SSID, is adopted for a complex electro-magneto-mechanical system in the study. This technique states that any unknown system exposed to a random disturbance with unknown intensity can be identified in terms of a virtual 2nd order system model via the inverse process of a certain stochastic analysis. As a typical hybrid system, a solenoid valve is used as the target electro-magneto-mechanical system to study the modeling of the virtual 2nd order system. In order to confirm the performance of the proposed control strategy, an autotuning PID controller in PWM mode is utilized. Simulations based on the conventional Maxwell system model with control via the bang-bang, autotuning PID, and the proposed virtual 2nd order system model approaches are conducted using MATLAB Simulink. Performance of these three systems in the low and high frequency bands is also compared. The simulation results reveal that the control performance of the virtual 2nd order system model is much improved compared with that of the Maxwell system model under autotuning PID and bang-bang controls in both low and high frequency regions, where the error is drastically reduced to approximately 1/5 of the original value.
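
    The following sketch shows the general idea of controlling a plant described by a virtual second-order model with a PID loop, using plain Euler integration. The plant parameters (natural frequency, damping) and PID gains are illustrative guesses, not the SSID-identified solenoid-valve values from the paper, and no PWM stage is modelled.

```python
# A sketch of driving a generic (virtual) second-order plant with a discrete PID
# loop, integrated with the explicit Euler method. The plant parameters and PID
# gains are illustrative guesses, not the SSID-identified solenoid-valve values
# from the paper, and no PWM stage is modelled.

def simulate(wn=20.0, zeta=0.3, kp=50.0, ki=200.0, kd=2.0,
             setpoint=1.0, dt=1e-4, t_end=0.5):
    x, v = 0.0, 0.0                      # plant position and velocity
    integral, prev_err = 0.0, setpoint
    trace = []
    for n in range(int(t_end / dt)):
        err = setpoint - x
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv        # PID control input
        # virtual 2nd order plant: x'' + 2*zeta*wn*x' + wn^2*x = wn^2*u
        a = wn**2 * (u - x) - 2.0 * zeta * wn * v
        v += a * dt
        x += v * dt
        trace.append((n * dt, x))
    return trace

trace = simulate()
print("position at t = 0.5 s:", round(trace[-1][1], 3))
```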

  5. Recurrent dynamics in an epidemic model due to stimulated bifurcation crossovers

    Energy Technology Data Exchange (ETDEWEB)

    Juanico, Drandreb Earl [Department of Mathematics, Ateneo de Manila University, Loyola Heights, Quezon City, Philippines 1108 (Philippines); National Institute of Physics, University of the Philippines, Diliman, Quezon City, Philippines 1101 (Philippines)

    2015-05-15

    Epidemics are known to persist in the form of recurrence cycles. Despite intervention efforts through vaccination and targeted social distancing, peaks of activity for infectious diseases like influenza reappear over time. Analysis of a stochastic model is here undertaken to explore a proposed cycle-generating mechanism – the bifurcation crossover. Time series from simulations of the model exhibit oscillations similar to the temporal signature of influenza activity. Power-spectral density indicates a resonant frequency, which corresponds to the annual seasonality of influenza in temperate zones. The study finds that intervention actions influence the extinguishability of epidemic activity. Asymptotic solution to a backward Kolmogorov equation corresponds to a mean extinction time that is a function of both intervention efficacy and population size. Intervention efficacy must be greater than a certain threshold to increase the chances of extinguishing the epidemic. Agreement of the model with several phenomenological features of epidemic cycles lends to it a tractability that may serve as early warning of imminent outbreaks.
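
    As a generic illustration of how recurrent epidemic activity can be simulated stochastically, the sketch below runs a Gillespie-style SIR model with simple births and deaths. It is not the bifurcation-crossover model analysed in the paper, and all rates are illustrative.

```python
import random

# A generic Gillespie-style stochastic SIR model with simple births and deaths,
# offered only to illustrate how recurrent epidemic activity can be simulated;
# this is not the bifurcation-crossover model analysed in the paper, and all
# rate parameters are illustrative.

def gillespie_sir(beta=0.0005, gamma=0.1, mu=0.001, N0=2000, I0=10,
                  t_end=2000.0, seed=1):
    rng = random.Random(seed)
    S, I, R = N0 - I0, I0, 0
    t, trace = 0.0, [(0.0, I)]
    while t < t_end:
        N = S + I + R
        rates = [
            beta * S * I,               # infection:  S -> I
            gamma * I,                  # recovery:   I -> R
            mu * N,                     # birth of a susceptible
            mu * S, mu * I, mu * R,     # deaths
        ]
        total = sum(rates)
        if total == 0:
            break
        t += rng.expovariate(total)     # time to next event
        r = rng.random() * total
        if r < rates[0]:
            S, I = S - 1, I + 1
        elif r < rates[0] + rates[1]:
            I, R = I - 1, R + 1
        elif r < rates[0] + rates[1] + rates[2]:
            S += 1
        elif r < sum(rates[:4]):
            S -= 1
        elif r < sum(rates[:5]):
            I -= 1
        else:
            R -= 1
        trace.append((t, I))
    return trace

trace = gillespie_sir()
print("events simulated:", len(trace), "infected at end:", trace[-1][1])
```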

  6. Recurrent dynamics in an epidemic model due to stimulated bifurcation crossovers

    International Nuclear Information System (INIS)

    Juanico, Drandreb Earl

    2015-01-01

    Epidemics are known to persist in the form of recurrence cycles. Despite intervention efforts through vaccination and targeted social distancing, peaks of activity for infectious diseases like influenza reappear over time. Analysis of a stochastic model is here undertaken to explore a proposed cycle-generating mechanism – the bifurcation crossover. Time series from simulations of the model exhibit oscillations similar to the temporal signature of influenza activity. Power-spectral density indicates a resonant frequency, which corresponds to the annual seasonality of influenza in temperate zones. The study finds that intervention actions influence the extinguishability of epidemic activity. Asymptotic solution to a backward Kolmogorov equation corresponds to a mean extinction time that is a function of both intervention efficacy and population size. Intervention efficacy must be greater than a certain threshold to increase the chances of extinguishing the epidemic. Agreement of the model with several phenomenological features of epidemic cycles lends to it a tractability that may serve as early warning of imminent outbreaks

  7. Modeling of Generic Slung Load System

    DEFF Research Database (Denmark)

    Bisgaard, Morten; Bendtsen, Jan Dimon; la Cour-Harbo, Anders

    2009-01-01

    This paper presents the result of the modelling and verification of a generic slung load system using a small-scale helicopter. The model is intended for use in simulation, pilot training, estimation, and control. The model is derived using a redundant coordinate formulation based on Gauss...... slackening and tightening as well as aerodynamic coupling between the helicopter and the load. Furthermore, it is shown how the model can be easily used for multi-lift systems either with multiple helicopters or multiple loads. A numerical stabilisation algorithm is introduced and finally the use...... of the model is illustrated through simulations and flight verifications.  ...

  8. Qualitative models for space system engineering

    Science.gov (United States)

    Forbus, Kenneth D.

    1990-01-01

    The objectives of this project were: (1) to investigate the implications of qualitative modeling techniques for problems arising in the monitoring, diagnosis, and design of Space Station subsystems and procedures; (2) to identify the issues involved in using qualitative models to enhance and automate engineering functions. These issues include representing operational criteria, fault models, alternate ontologies, and modeling continuous signals at a functional level of description; and (3) to develop a prototype collection of qualitative models for fluid and thermal systems commonly found in Space Station subsystems. Potential applications of qualitative modeling to space-systems engineering, including the notion of intelligent computer-aided engineering are summarized. Emphasis is given to determining which systems of the proposed Space Station provide the most leverage for study, given the current state of the art. Progress on using qualitative models, including development of the molecular collection ontology for reasoning about fluids, the interaction of qualitative and quantitative knowledge in analyzing thermodynamic cycles, and an experiment on building a natural language interface to qualitative reasoning is reported. Finally, some recommendations are made for future research.

  9. Model Checking Real-Time Systems

    DEFF Research Database (Denmark)

    Bouyer, Patricia; Fahrenberg, Uli; Larsen, Kim Guldstrand

    2018-01-01

    This chapter surveys timed automata as a formalism for model checking real-time systems. We begin with introducing the model, as an extension of finite-state automata with real-valued variables for measuring time. We then present the main model-checking results in this framework, and give a hint...
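
    A timed automaton can be represented quite directly as locations, clocks, and guarded edges with resets. The sketch below encodes a toy two-location automaton and checks whether a concrete timed word is accepted; it simulates a single run only and is not a region- or zone-based model checker of the kind surveyed in the chapter. The example automaton (a request must be acknowledged within 5 time units) is an illustrative assumption.

```python
from dataclasses import dataclass
from typing import Callable, Tuple

# A toy timed automaton: locations, real-valued clocks, and edges with guards
# and clock resets. accepts() replays one concrete timed word; it is not a
# region/zone-based model-checking algorithm.

@dataclass
class Edge:
    src: str
    action: str
    guard: Callable[[dict], bool]   # predicate over current clock values
    resets: Tuple[str, ...]         # clocks reset to 0 when the edge is taken
    dst: str

edges = [
    Edge("idle",    "req", lambda c: True,          ("x",), "waiting"),
    Edge("waiting", "ack", lambda c: c["x"] <= 5.0, (),      "idle"),
]

def accepts(edges, init, clocks, timed_word, accepting):
    loc, clocks = init, dict(clocks)
    for delay, action in timed_word:
        clocks = {k: v + delay for k, v in clocks.items()}   # let time elapse
        for e in edges:
            if e.src == loc and e.action == action and e.guard(clocks):
                loc = e.dst
                for r in e.resets:
                    clocks[r] = 0.0
                break
        else:
            return False          # no enabled edge for this action
    return loc in accepting

print(accepts(edges, "idle", {"x": 0.0}, [(1.0, "req"), (4.5, "ack")], {"idle"}))  # True
print(accepts(edges, "idle", {"x": 0.0}, [(1.0, "req"), (6.0, "ack")], {"idle"}))  # False
```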

  10. Using A Model-Based Systems Engineering Approach For Exploration Medical System Development

    Science.gov (United States)

    Hanson, A.; Mindock, J.; McGuire, K.; Reilly, J.; Cerro, J.; Othon, W.; Rubin, D.; Urbina, M.; Canga, M.

    2017-01-01

    NASA's Human Research Program's Exploration Medical Capabilities (ExMC) element is defining the medical system needs for exploration class missions. ExMC's Systems Engineering (SE) team will play a critical role in successful design and implementation of the medical system into exploration vehicles. The team's mission is to "Define, develop, validate, and manage the technical system design needed to implement exploration medical capabilities for Mars and test the design in a progression of proving grounds." Development of the medical system is being conducted in parallel with exploration mission architecture and vehicle design development. Successful implementation of the medical system in this environment will require a robust systems engineering approach to enable technical communication across communities to create a common mental model of the emergent engineering and medical systems. Model-Based Systems Engineering (MBSE) improves shared understanding of system needs and constraints between stakeholders and offers a common language for analysis. The ExMC SE team is using MBSE techniques to define operational needs, decompose requirements and architecture, and identify medical capabilities needed to support human exploration. Systems Modeling Language (SysML) is the specific language the SE team is utilizing, within an MBSE approach, to model the medical system functional needs, requirements, and architecture. Modeling methods are being developed through the practice of MBSE within the team, and tools are being selected to support meta-data exchange as integration points to other system models are identified. Use of MBSE is supporting the development of relationships across disciplines and NASA Centers to build trust and enable teamwork, enhance visibility of team goals, foster a culture of unbiased learning and serving, and be responsive to customer needs. The MBSE approach to medical system design offers a paradigm shift toward greater integration between

  11. Modelling hair follicle growth dynamics as an excitable medium.

    Directory of Open Access Journals (Sweden)

    Philip J Murray

    Full Text Available The hair follicle system represents a tractable model for the study of stem cell behaviour in regenerative adult epithelial tissue. However, although there are numerous spatial scales of observation (molecular, cellular, follicle and multi follicle), it is not yet clear what mechanisms underpin the follicle growth cycle. In this study we seek to address this problem by describing how the growth dynamics of a large population of follicles can be treated as a classical excitable medium. Defining caricature interactions at the molecular scale and treating a single follicle as a functional unit, a minimal model is proposed in which the follicle growth cycle is an emergent phenomenon. Expressions are derived, in terms of parameters representing molecular regulation, for the time spent in the different functional phases of the cycle, a formalism that allows the model to be directly compared with a previous cellular automaton model and experimental measurements made at the single follicle scale. A multi follicle model is constructed and numerical simulations are used to demonstrate excellent qualitative agreement with a range of experimental observations. Notably, the excitable medium equations exhibit a wider family of solutions than the previous work and we demonstrate how parameter changes representing altered molecular regulation can explain perturbed patterns in Wnt over-expression and BMP down-regulation mouse models. Further experimental scenarios that could be used to test the fundamental premise of the model are suggested. The key conclusion from our work is that positive and negative regulatory interactions between activators and inhibitors can give rise to a range of experimentally observed phenomena at the follicle and multi follicle spatial scales and, as such, could represent a core mechanism underlying hair follicle growth.
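
    For readers unfamiliar with excitable media, the sketch below integrates the standard FitzHugh-Nagumo activator-inhibitor equations on a small 2-D grid with diffusive coupling. It illustrates the class of dynamics referred to in the abstract, but the equations and parameters are the generic textbook form, not the follicle-specific model of the paper.

```python
# A generic FitzHugh-Nagumo excitable medium on a 2-D grid with diffusive
# coupling. This is the standard textbook form of an activator-inhibitor
# excitable system, not the follicle-specific model of the paper; all
# parameters are illustrative.

def step(u, v, a=0.1, eps=0.02, b=0.5, D=0.1, dt=0.05):
    n = len(u)
    new_u = [[0.0] * n for _ in range(n)]
    new_v = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            # discrete Laplacian with periodic boundaries (diffusive coupling)
            lap = (u[(i + 1) % n][j] + u[(i - 1) % n][j] +
                   u[i][(j + 1) % n] + u[i][(j - 1) % n] - 4 * u[i][j])
            du = u[i][j] * (1 - u[i][j]) * (u[i][j] - a) - v[i][j] + D * lap
            dv = eps * (b * u[i][j] - v[i][j])
            new_u[i][j] = u[i][j] + dt * du
            new_v[i][j] = v[i][j] + dt * dv
    return new_u, new_v

n = 30
u = [[1.0 if i < 3 else 0.0 for j in range(n)] for i in range(n)]  # local stimulus strip
v = [[0.0] * n for _ in range(n)]
for _ in range(200):
    u, v = step(u, v)
print("mean activator level:", round(sum(map(sum, u)) / n**2, 3))
```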

  12. Graphical Model Debugger Framework for Embedded Systems

    DEFF Research Database (Denmark)

    Zeng, Kebin

    2010-01-01

    Model Driven Software Development has offered a faster way to design and implement embedded real-time software by moving the design to a model level, and by transforming models to code. However, the testing of embedded systems has remained at the code level. This paper presents a Graphical Model...... Debugger Framework, providing an auxiliary avenue of analysis of system models at runtime by executing generated code and updating models synchronously, which allows embedded developers to focus on the model level. With the model debugger, embedded developers can graphically test their design model...

  13. Combining multimedia models with integrated urban water system models for micropollutants

    DEFF Research Database (Denmark)

    De Keyser, W.; Gevaert, V.; Verdonck, F.

    2010-01-01

    Integrated urban water system (IUWS) modeling aims at assessing the quality of the surface water receiving the urban emissions through sewage treatment plants, combined sewer overflows (CSOs) and stormwater drainage systems. However, some micropollutants tend to appear in more than one environmental...... medium (air, water, sediment, soil, groundwater, etc.). In this work, a multimedia fate and transport model (MFTM) is "wrapped around" a dynamic IUWS model for organic micropollutants to enable integrated environmental assessment. The combined model was tested on a hypothetical catchment using two scenarios...... on the one hand a reference scenario with a combined sewerage system and on the other hand a stormwater infiltration pond scenario, as an example of a sustainable urban drainage system (SUDS). A case for Bis(2-ethylhexyl) phthalate (DEHP) was simulated and resulted in reduced surface water concentrations...

  14. Natural gas transmission and distribution model of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-02-01

    The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. From 1982 through 1993, the Intermediate Future Forecasting System (IFFS) was used by the EIA for its analyses, and the Gas Analysis Modeling System (GAMS) was used within IFFS to represent natural gas markets. Prior to 1982, the Midterm Energy Forecasting System (MEFS), also referred to as the Project Independence Evaluation System (PIES), was employed. NEMS was developed to enhance and update EIA's modeling capability by internally incorporating models of energy markets that had previously been analyzed off-line. In addition, greater structural detail in NEMS permits the analysis of a broader range of energy issues. The time horizon of NEMS is the midterm period (i.e., through 2015). In order to represent the regional differences in energy markets, the component models of NEMS function at regional levels appropriate for the markets represented, with subsequent aggregation/disaggregation to the Census Division level for reporting purposes

  15. Natural gas transmission and distribution model of the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    1997-02-01

    The Natural Gas Transmission and Distribution Model (NGTDM) is the component of the National Energy Modeling System (NEMS) that is used to represent the domestic natural gas transmission and distribution system. NEMS was developed in the Office of Integrated Analysis and Forecasting of the Energy Information Administration (EIA). NEMS is the third in a series of computer-based, midterm energy modeling systems used since 1974 by the EIA and its predecessor, the Federal Energy Administration, to analyze domestic energy-economy markets and develop projections. From 1982 through 1993, the Intermediate Future Forecasting System (IFFS) was used by the EIA for its analyses, and the Gas Analysis Modeling System (GAMS) was used within IFFS to represent natural gas markets. Prior to 1982, the Midterm Energy Forecasting System (MEFS), also referred to as the Project Independence Evaluation System (PIES), was employed. NEMS was developed to enhance and update EIA's modeling capability by internally incorporating models of energy markets that had previously been analyzed off-line. In addition, greater structural detail in NEMS permits the analysis of a broader range of energy issues. The time horizon of NEMS is the midterm period (i.e., through 2015). In order to represent the regional differences in energy markets, the component models of NEMS function at regional levels appropriate for the markets represented, with subsequent aggregation/disaggregation to the Census Division level for reporting purposes.

  16. A hierarchy for modeling high speed propulsion systems

    Science.gov (United States)

    Hartley, Tom T.; Deabreu, Alex

    1991-01-01

    General research efforts on reduced order propulsion models for control systems design are overviewed. Methods for modeling high speed propulsion systems are discussed including internal flow propulsion systems that do not contain rotating machinery such as inlets, ramjets, and scramjets. The discussion is separated into four sections: (1) computational fluid dynamics model for the entire nonlinear system or high order nonlinear models; (2) high order linearized model derived from fundamental physics; (3) low order linear models obtained from other high order models; and (4) low order nonlinear models. Included are special considerations on any relevant control system designs. The methods discussed are for the quasi-one dimensional Euler equations of gasdynamic flow. The essential nonlinear features represented are large amplitude nonlinear waves, moving normal shocks, hammershocks, subsonic combustion via heat addition, temperature dependent gases, detonation, and thermal choking.

  17. Structural system identification: Structural dynamics model validation

    Energy Technology Data Exchange (ETDEWEB)

    Red-Horse, J.R.

    1997-04-01

    Structural system identification is concerned with the development of systematic procedures and tools for developing predictive analytical models based on a physical structure's dynamic response characteristics. It is a multidisciplinary process that involves the ability (1) to define high fidelity physics-based analysis models, (2) to acquire accurate test-derived information for physical specimens using diagnostic experiments, (3) to validate the numerical simulation model by reconciling differences that inevitably exist between the analysis model and the experimental data, and (4) to quantify uncertainties in the final system models and subsequent numerical simulations. The goal of this project was to develop structural system identification techniques and software suitable for both research and production applications in code and model validation.

  18. Design of coherent quantum observers for linear quantum systems

    International Nuclear Information System (INIS)

    Vuglar, Shanon L; Amini, Hadis

    2014-01-01

    Quantum versions of control problems are often more difficult than their classical counterparts because of the additional constraints imposed by quantum dynamics. For example, the quantum LQG and quantum H∞ optimal control problems remain open. To make further progress, new, systematic and tractable methods need to be developed. This paper gives three algorithms for designing coherent quantum observers, i.e., quantum systems that are connected to a quantum plant and whose outputs provide information about the internal state of the plant. Importantly, coherent quantum observers avoid measurements of the plant outputs. We compare our coherent quantum observers with a classical (measurement-based) observer by way of an example involving an optical cavity with thermal and vacuum noises as inputs. (paper)

  19. Modeling congenital disease and inborn errors of development in Drosophila melanogaster

    Science.gov (United States)

    Moulton, Matthew J.; Letsou, Anthea

    2016-01-01

    ABSTRACT Fly models that faithfully recapitulate various aspects of human disease and human health-related biology are being used for research into disease diagnosis and prevention. Established and new genetic strategies in Drosophila have yielded numerous substantial successes in modeling congenital disorders or inborn errors of human development, as well as neurodegenerative disease and cancer. Moreover, although our ability to generate sequence datasets continues to outpace our ability to analyze these datasets, the development of high-throughput analysis platforms in Drosophila has provided access through the bottleneck in the identification of disease gene candidates. In this Review, we describe both the traditional and newer methods that are facilitating the incorporation of Drosophila into the human disease discovery process, with a focus on the models that have enhanced our understanding of human developmental disorders and congenital disease. Enviable features of the Drosophila experimental system, which make it particularly useful in facilitating the much anticipated move from genotype to phenotype (understanding and predicting phenotypes directly from the primary DNA sequence), include its genetic tractability, the low cost for high-throughput discovery, and a genome and underlying biology that are highly evolutionarily conserved. In embracing the fly in the human disease-gene discovery process, we can expect to speed up and reduce the cost of this process, allowing experimental scales that are not feasible and/or would be too costly in higher eukaryotes. PMID:26935104

  20. Visual prosthesis wireless energy transfer system optimal modeling.

    Science.gov (United States)

    Li, Xueping; Yang, Yuan; Gao, Yong

    2014-01-16

    A wireless energy transfer system is an effective way to solve the energy supply problem of visual prostheses, and theoretical modeling of the system is a prerequisite for optimal energy transfer system design. Starting from the ideal model of the wireless energy transfer system, the model is optimized for the conditions of the visual prosthesis application. In the optimized model, planar spiral coils are taken as the coupling devices between energy transmitter and receiver, the effect of the parasitic capacitance of the transfer coil is considered, and in particular the concept of biological capacitance is proposed to account for the influence of biological tissue on the energy transfer efficiency, making the optimized model more accurate for the actual application. Simulation data from the optimized model are compared with those of the previous ideal model; the results show that under high-frequency conditions the parasitic capacitance of the inductance and the biological capacitance considered in the optimized model can have a great impact on the wireless energy transfer system. A further comparison with experimental data verifies the validity and accuracy of the proposed optimized model. The optimized model has theoretical guiding significance for further research on wireless energy transfer systems and provides a more precise model reference for solving the power supply problem in visual prosthesis clinical applications.
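
    A commonly quoted closed-form figure of merit for two-coil resonant inductive links relates the coupling coefficient k and the coil quality factors Q1, Q2 to the maximum achievable link efficiency. The short example below evaluates that generic relation; it is not the optimized model proposed in the paper, and the parameter values are illustrative.

```python
import math

# A worked example of a generic figure of merit for two-coil resonant inductive
# links: with coupling coefficient k and coil quality factors Q1 and Q2, the
# maximum achievable link efficiency is k^2*Q1*Q2 / (1 + sqrt(1 + k^2*Q1*Q2))^2.
# This is a commonly quoted textbook relation, not the optimized model proposed
# in the paper; the parameter values are illustrative.

def max_link_efficiency(k, q1, q2):
    fom = k**2 * q1 * q2                      # figure of merit of the link
    return fom / (1.0 + math.sqrt(1.0 + fom))**2

for k in (0.01, 0.05, 0.1, 0.2):
    print(f"k = {k:.2f}  ->  eta_max = {max_link_efficiency(k, q1=100, q2=50):.1%}")
```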

  1. Hybrid Energy System Modeling in Modelica

    Energy Technology Data Exchange (ETDEWEB)

    William R. Binder; Christiaan J. J. Paredis; Humberto E. Garcia

    2014-03-01

    In this paper, a Hybrid Energy System (HES) configuration is modeled in Modelica. Hybrid Energy Systems (HES) have as their defining characteristic the use of one or more energy inputs, combined with the potential for multiple energy outputs. Compared to traditional energy systems, HES provide additional operational flexibility so that high variability in both energy production and consumption levels can be absorbed more effectively. This is particularly important when including renewable energy sources, whose output levels are inherently variable, determined by nature. The specific HES configuration modeled in this paper includes two energy inputs: a nuclear plant and a series of wind turbines. In addition, the system produces two energy outputs: electricity and synthetic fuel. The models are verified through simulations of the individual components and the system as a whole. The simulations are performed for a range of component sizes, operating conditions, and control schemes.
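
    A toy dispatch calculation conveys the defining feature described above (multiple energy inputs, multiple outputs): constant nuclear output plus a variable wind profile serve an electric demand, and any surplus is diverted to synthetic-fuel production. All numbers and the wind profile are illustrative assumptions; the actual models in the paper are written in Modelica, not Python.

```python
import math

# A toy hourly dispatch of a hybrid energy system: constant nuclear output plus
# a variable wind profile serve an electric demand, and any surplus is diverted
# to synthetic-fuel production. All numbers and the wind profile are illustrative.

def dispatch(hours=24, nuclear_mw=600.0, wind_peak_mw=300.0, demand_mw=700.0):
    schedule = []
    for h in range(hours):
        wind = wind_peak_mw * max(0.0, math.sin(math.pi * h / hours))  # crude wind shape
        supply = nuclear_mw + wind
        to_grid = min(supply, demand_mw)                 # electricity output
        to_fuel = max(0.0, supply - demand_mw)           # surplus drives fuel synthesis
        shortfall = max(0.0, demand_mw - supply)
        schedule.append((h, to_grid, to_fuel, shortfall))
    return schedule

for h, grid, fuel, short in dispatch():
    print(f"h={h:02d}  grid={grid:6.1f} MW  fuel={fuel:6.1f} MW  shortfall={short:5.1f} MW")
```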

  2. Agent-Based Modeling in Systems Pharmacology.

    Science.gov (United States)

    Cosgrove, J; Butler, J; Alden, K; Read, M; Kumar, V; Cucurull-Sanchez, L; Timmis, J; Coles, M

    2015-11-01

    Modeling and simulation (M&S) techniques provide a platform for knowledge integration and hypothesis testing to gain insights into biological systems that would not be possible a priori. Agent-based modeling (ABM) is an M&S technique that focuses on describing individual components rather than homogenous populations. This tutorial introduces ABM to systems pharmacologists, using relevant case studies to highlight how ABM-specific strengths have yielded success in the area of preclinical mechanistic modeling.

  3. Annotation of rule-based models with formal semantics to enable creation, analysis, reuse and visualization

    Science.gov (United States)

    Misirli, Goksel; Cavaliere, Matteo; Waites, William; Pocock, Matthew; Madsen, Curtis; Gilfellon, Owen; Honorato-Zimmer, Ricardo; Zuliani, Paolo; Danos, Vincent; Wipat, Anil

    2016-01-01

    Motivation: Biological systems are complex and challenging to model and therefore model reuse is highly desirable. To promote model reuse, models should include both information about the specifics of simulations and the underlying biology in the form of metadata. The availability of computationally tractable metadata is especially important for the effective automated interpretation and processing of models. Metadata are typically represented as machine-readable annotations which enhance programmatic access to information about models. Rule-based languages have emerged as a modelling framework to represent the complexity of biological systems. Annotation approaches have been widely used for reaction-based formalisms such as SBML. However, rule-based languages still lack a rich annotation framework to add semantic information, such as machine-readable descriptions, to the components of a model. Results: We present an annotation framework and guidelines for annotating rule-based models, encoded in the commonly used Kappa and BioNetGen languages. We adapt widely adopted annotation approaches to rule-based models. We initially propose a syntax to store machine-readable annotations and describe a mapping between rule-based modelling entities, such as agents and rules, and their annotations. We then describe an ontology to both annotate these models and capture the information contained therein, and demonstrate annotating these models using examples. Finally, we present a proof of concept tool for extracting annotations from a model that can be queried and analyzed in a uniform way. The uniform representation of the annotations can be used to facilitate the creation, analysis, reuse and visualization of rule-based models. Although examples are given, using specific implementations the proposed techniques can be applied to rule-based models in general. Availability and implementation: The annotation ontology for rule-based models can be found at http

  4. Risk Modeling of Interdependent Complex Systems of Systems: Theory and Practice.

    Science.gov (United States)

    Haimes, Yacov Y

    2018-01-01

    The emergence of the complexity characterizing our systems of systems (SoS) requires a reevaluation of the way we model, assess, manage, communicate, and analyze the risk thereto. Current models for risk analysis of emergent complex SoS are insufficient because too often they rely on the same risk functions and models used for single systems. These models commonly fail to incorporate the complexity derived from the networks of interdependencies and interconnectedness (I-I) characterizing SoS. There is a need to reevaluate currently practiced risk analysis to respond to this reality by examining, and thus comprehending, what makes emergent SoS complex. The key to evaluating the risk to SoS lies in understanding the genesis of characterizing I-I of systems manifested through shared states and other essential entities within and among the systems that constitute SoS. The term "essential entities" includes shared decisions, resources, functions, policies, decisionmakers, stakeholders, organizational setups, and others. This undertaking can be accomplished by building on state-space theory, which is fundamental to systems engineering and process control. This article presents a theoretical and analytical framework for modeling the risk to SoS with two case studies performed with the MITRE Corporation and demonstrates the pivotal contributions made by shared states and other essential entities to modeling and analysis of the risk to complex SoS. A third case study highlights the multifarious representations of SoS, which require harmonizing the risk analysis process currently applied to single systems when applied to complex SoS. © 2017 Society for Risk Analysis.

  5. Analytical performance modeling for computer systems

    CERN Document Server

    Tay, Y C

    2013-01-01

    This book is an introduction to analytical performance modeling for computer systems, i.e., writing equations to describe their performance behavior. It is accessible to readers who have taken college-level courses in calculus and probability, networking and operating systems. This is not a training manual for becoming an expert performance analyst. Rather, the objective is to help the reader construct simple models for analyzing and understanding the systems that they are interested in.Describing a complicated system abstractly with mathematical equations requires a careful choice of assumpti
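
    As a flavour of the kind of equation-based performance model such a book builds up, the classic M/M/1 queue gives utilisation, mean population, and mean response time in closed form from an arrival rate and a service rate. The numbers below are illustrative.

```python
# The classic M/M/1 queue as a minimal example of an analytical performance
# model: arrival rate lam and service rate mu give utilisation, mean number in
# system and mean response time in closed form. The numbers are illustrative.

def mm1(lam, mu):
    assert lam < mu, "the queue is unstable when arrivals exceed service capacity"
    rho = lam / mu                  # server utilisation
    n_mean = rho / (1.0 - rho)      # mean number of jobs in the system
    t_mean = 1.0 / (mu - lam)       # mean response time (consistent with Little's law)
    return rho, n_mean, t_mean

rho, n, t = mm1(lam=8.0, mu=10.0)   # e.g. 8 requests/s arriving, 10 requests/s served
print(f"utilisation = {rho:.2f}, mean jobs in system = {n:.1f}, mean response time = {t:.3f} s")
```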

  6. Modeling Of Proton Exchange Membrane Fuel Cell Systems

    DEFF Research Database (Denmark)

    Nielsen, Mads Pagh

    The objective of this doctoral thesis was to develop reliable steady-state and transient component models suitable to asses-, develop- and optimize proton exchange membrane (PEM) fuel cell systems. Several components in PEM fuel cell systems were characterized and modeled. The developed component...... cell systems. Consequences of indirectly fueling PEM stacks with hydrocarbons using reforming technology were investigated using a PEM stack model including CO poisoning kinetics and a transient Simulink steam reforming system model. Aspects regarding the optimization of PEM fuel cell systems...

  7. A Multi-Model Approach for System Diagnosis

    DEFF Research Database (Denmark)

    Niemann, Hans Henrik; Poulsen, Niels Kjølstad; Bækgaard, Mikkel Ask Buur

    2007-01-01

    A multi-model approach for system diagnosis is presented in this paper. The relation with fault diagnosis as well as performance validation is considered. The approach is based on testing a number of pre-described models and finding which one is the best. It is based on an active approach......, i.e. an auxiliary input to the system is applied. The multi-model approach is applied on a wind turbine system....

  8. Systems and context modeling approach to requirements analysis

    Science.gov (United States)

    Ahuja, Amrit; Muralikrishna, G.; Patwari, Puneet; Subhrojyoti, C.; Swaminathan, N.; Vin, Harrick

    2014-08-01

    Ensuring completeness and correctness of the requirements for a complex system such as the SKA is challenging. Current system engineering practice includes developing a stakeholder needs definition, a concept of operations, and defining system requirements in terms of use cases and requirements statements. We present a method that enhances this current practice into a collection of system models with mutual consistency relationships. These include stakeholder goals, needs definition and system-of-interest models, together with a context model that participates in the consistency relationships among these models. We illustrate this approach by using it to analyze the SKA system requirements.

  9. High-level PC-based laser system modeling

    Science.gov (United States)

    Taylor, Michael S.

    1991-05-01

    Since the inception of the Strategic Defense Initiative (SDI), there have been a multitude of comparison studies done in an attempt to evaluate the effectiveness and relative sizes of complementary, and sometimes competitive, laser weapon systems. It became more and more apparent that what the systems analyst needed was not only a fast but also a cost-effective way to perform high-level trade studies. In the present investigation, a general procedure is presented for the development of PC-based algorithmic systems models for laser systems. This procedure points out all of the major issues that should be addressed in the design and development of such a model. Issues addressed include defining the problem to be modeled, defining a strategy for development, and finally, effective use of the model once developed. Being a general procedure, it will allow a systems analyst to develop a model to meet specific needs. To illustrate this method of model development, a description of the Strategic Defense Simulation - Design To (SDS-DT) model developed and used by Science Applications International Corporation (SAIC) is presented. SDS-DT is a menu-driven, fast-executing, PC-based program that can be used either to calculate performance, weight, volume, and cost values for a particular design or, alternatively, to run parametrics on particular system parameters to perhaps optimize a design.

  10. Applying Modeling Tools to Ground System Procedures

    Science.gov (United States)

    Di Pasquale, Peter

    2012-01-01

    As part of a long-term effort to revitalize the Ground Systems (GS) Engineering Section practices, Systems Modeling Language (SysML) and Business Process Model and Notation (BPMN) have been used to model existing GS products and the procedures GS engineers use to produce them.

  11. Numerical Modelling Approaches for Sediment Transport in Sewer Systems

    DEFF Research Database (Denmark)

    Mark, Ole

    A study of the sediment transport processes in sewers has been carried out. Based on this study a mathematical modelling system has been developed to describe the transport processes of sediments and dissolved matter in sewer systems. The modelling system consists of three sub-models which...... constitute the basic modelling system necessary to give a discription of the most dominant physical transport processes concerning particles and dissolved matter in sewer systems: A surface model. An advection-dispersion model. A sediment transport model....

  12. Stochastic Modelling Of The Repairable System

    Directory of Open Access Journals (Sweden)

    Andrzejczak Karol

    2015-11-01

    Full Text Available All reliability models consisting of random time factors form stochastic processes. In this paper we recall the definitions of the most common point processes which are used for modelling of repairable systems. In particular, this paper presents stochastic processes as examples of reliability systems for the support of maintenance-related decisions. We consider the simplest one-unit system with negligible repair or replacement time: the unit operates until failure and is then repaired or replaced, with the time required for repair or replacement assumed negligible. When the repair or replacement is completed, the unit becomes as good as new and resumes operation. The stochastic modelling of recoverable systems constitutes an excellent method of supporting maintenance-related decision-making processes and enables their more rational use.
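
    The one-unit system described above is an ordinary renewal process, which is easy to simulate: the unit fails after a random lifetime and is renewed instantly, as good as new. The Weibull lifetime distribution and its parameters in the sketch below are illustrative assumptions, not values from the paper.

```python
import random

# The one-unit repairable system simulated as an ordinary renewal process: the
# unit fails after a random lifetime and is renewed instantly, "as good as new".
# The Weibull lifetime parameters are illustrative.

def count_failures(t_end=1000.0, shape=1.5, scale=100.0, seed=None):
    rng = random.Random(seed)
    t, failures = 0.0, 0
    while True:
        t += rng.weibullvariate(scale, shape)   # time to next failure
        if t > t_end:
            return failures
        failures += 1                           # instantaneous renewal

counts = [count_failures(seed=s) for s in range(200)]
print("mean number of failures over the horizon:", sum(counts) / len(counts))
```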

  13. Agent oriented modeling of business information systems

    OpenAIRE

    Vymetal, Dominik

    2009-01-01

    Enterprise modeling is an abstract definition of the processes running in an enterprise using process, value, data and resource models. There are two perspectives of business modeling: the process perspective and the value chain perspective. Both have advantages and disadvantages. This paper proposes a combination of both perspectives into one generic model. The model also takes the social part of the enterprise system into consideration and pays attention to disturbances influencing the enterprise system....

  14. Learning Markov models for stationary system behaviors

    DEFF Research Database (Denmark)

    Chen, Yingke; Mao, Hua; Jaeger, Manfred

    2012-01-01

    to a single long observation sequence, and in these situations existing automatic learning methods cannot be applied. In this paper, we adapt algorithms for learning variable order Markov chains from a single observation sequence of a target system, so that stationary system properties can be verified using......Establishing an accurate model for formal verification of an existing hardware or software system is often a manual process that is both time consuming and resource demanding. In order to ease the model construction phase, methods have recently been proposed for automatically learning accurate...... the learned model. Experiments demonstrate that system properties (formulated as stationary probabilities of LTL formulas) can be reliably identified using the learned model....
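
    The core step, estimating a Markov chain from one long observation sequence and reading off stationary probabilities, can be sketched as follows. The paper learns variable-order chains; fixing the order to one here is a simplification for illustration, and the example sequence is synthetic.

```python
from collections import Counter, defaultdict

# A sketch of the core idea: estimate a Markov chain from a single long
# observation sequence and read off stationary probabilities. The paper learns
# variable-order chains; fixing the order to one here is a simplification, and
# the example sequence is synthetic.

def learn_chain(seq):
    counts = defaultdict(Counter)
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1                      # count observed transitions
    return {a: {b: c / sum(row.values()) for b, c in row.items()}
            for a, row in counts.items()}

def stationary(P, iters=1000):
    states = list(P)
    pi = {s: 1.0 / len(states) for s in states}
    for _ in range(iters):                     # power iteration on the chain
        pi = {s: sum(pi[a] * P[a].get(s, 0.0) for a in states) for s in states}
    return pi

seq = "aabaaabbaabaaaabab" * 50                # stand-in for a long system trace
P = learn_chain(seq)
print({s: round(p, 3) for s, p in stationary(P).items()})
```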

  15. Modelling and Verification of Relay Interlocking Systems

    DEFF Research Database (Denmark)

    Haxthausen, Anne Elisabeth; Bliguet, Marie Le; Kjær, Andreas

    2010-01-01

    This paper describes how relay interlocking systems as used by the Danish railways can be formally modelled and verified. Such systems are documented by circuit diagrams describing their static layout. It is explained how to derive a state transition system model for the dynamic behaviour...

  16. Economic model of pipeline transportation systems

    Energy Technology Data Exchange (ETDEWEB)

    Banks, W. F.

    1977-07-29

    The objective of the work reported here was to develop a model which could be used to assess the economic effects of energy-conservative technological innovations upon the pipeline industry. The model is a dynamic simulator which accepts inputs of two classes: the physical description (design parameters, fluid properties, and financial structures) of the system to be studied, and the postulated market (throughput and price) projection. The model consists of time-independent submodels: the fluidics model which simulates the physical behavior of the system, and the financial model which operates upon the output of the fluidics model to calculate the economics outputs. Any of a number of existing fluidics models can be used in addition to that developed as a part of this study. The financial model, known as the Systems, Science and Software (S/sup 3/) Financial Projection Model, contains user options whereby pipeline-peculiar characteristics can be removed and/or modified, so that the model can be applied to virtually any kind of business enterprise. The several dozen outputs are of two classes: the energetics and the economics. The energetics outputs of primary interest are the energy intensity, also called unit energy consumption, and the total energy consumed. The primary economics outputs are the long-run average cost, profit, cash flow, and return on investment.

  17. Force and Stress along Simulated Dissociation Pathways of Cucurbituril-Guest Systems.

    Science.gov (United States)

    Velez-Vega, Camilo; Gilson, Michael K

    2012-03-13

    The field of host-guest chemistry provides computationally tractable yet informative model systems for biomolecular recognition. We applied molecular dynamics simulations to study the forces and mechanical stresses associated with forced dissociation of aqueous cucurbituril-guest complexes with high binding affinities. First, the unbinding transitions were modeled with constant velocity pulling (steered dynamics) and a soft spring constant, to model atomic force microscopy (AFM) experiments. The computed length-force profiles yield rupture forces in good agreement with available measurements. We also used steered dynamics with high spring constants to generate paths characterized by a tight control over the specified pulling distance; these paths were then equilibrated via umbrella sampling simulations and used to compute time-averaged mechanical stresses along the dissociation pathways. The stress calculations proved to be informative regarding the key interactions determining the length-force profiles and rupture forces. In particular, the unbinding transition of one complex is found to be a stepwise process, which is initially dominated by electrostatic interactions between the guest's ammoniums and the host's carbonyl groups, and subsequently limited by the extraction of the guest's bulky bicyclooctane moiety; the latter step requires some bond stretching at the cucurbituril's extraction portal. Conversely, the dissociation of a second complex with a more slender guest is mainly driven by successive electrostatic interactions between the different guest's ammoniums and the host's carbonyl groups. The calculations also provide information on the origins of thermodynamic irreversibilities in these forced dissociation processes.

  18. Combining multimedia models with integrated urban water system models for micropollutants

    DEFF Research Database (Denmark)

    De Keyser, W.; Gevaert, V.; Verdonck, F.

    2009-01-01

    Integrated urban water system (IUWS) modelling aims at assessing the quality of the surface water receiving the urban emissions through sewage treatment plants, combined sewer overflows (CSOs) and stormwater drainage systems. However, some micropollutants have the tendency to occur in more than one...... environmental medium. In this work, a multimedia fate and transport model (MFTM) is “wrapped around” a dynamic IUWS model for organic micropollutants to enable integrated environmental assessment. The combined model was tested on a hypothetical catchment using two scenarios: a reference scenario...... and a stormwater infiltration pond scenario, as an example of a sustainable urban drainage system (SUDS). A case for Bis(2-ethylhexyl) phthalate (DEHP) was simulated and resulted in a reduced surface water concentration for the latter scenario. However, the model also showed that this was at the expense...

  19. Automated statistical modeling of analytical measurement systems

    International Nuclear Information System (INIS)

    Jacobson, J.J.

    1992-01-01

    The statistical modeling of analytical measurement systems at the Idaho Chemical Processing Plant (ICPP) has been completely automated through computer software. The statistical modeling of analytical measurement systems is one part of a complete quality control program used by the Remote Analytical Laboratory (RAL) at the ICPP. The quality control program is an integration of automated data input, measurement system calibration, database management, and statistical process control. The quality control program and statistical modeling program meet the guidelines set forth by the American Society for Testing Materials and American National Standards Institute. A statistical model is a set of mathematical equations describing any systematic bias inherent in a measurement system and the precision of a measurement system. A statistical model is developed from data generated from the analysis of control standards. Control standards are samples which are made up at precise known levels by an independent laboratory and submitted to the RAL. The RAL analysts who process control standards do not know the values of those control standards. The object behind statistical modeling is to describe real process samples in terms of their bias and precision and to verify that a measurement system is operating satisfactorily. The processing of control standards gives us this ability.
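
    In its simplest form, such a statistical model reduces to estimating a systematic bias and a precision from blind measurements of control standards. The sketch below does exactly that on made-up numbers; it illustrates the concept only and is not the RAL's actual modeling equations.

```python
import statistics

# Estimating the two ingredients of such a statistical model -- systematic bias
# and precision -- from blind measurements of control standards. The numbers
# below are made up for illustration; they are not RAL data.

known    = [10.0, 10.0, 25.0, 25.0, 50.0, 50.0, 50.0]   # certified standard values
measured = [10.4, 10.1, 25.6, 25.9, 51.2, 50.8, 51.5]   # blind measurement results

relative_errors = [(m - k) / k for m, k in zip(measured, known)]
bias = statistics.mean(relative_errors)          # systematic relative bias
precision = statistics.stdev(relative_errors)    # relative precision (1-sigma)

print(f"relative bias = {bias:+.2%}, relative precision = {precision:.2%}")
```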

  20. Error performance analysis in downlink cellular networks with interference management

    KAUST Repository

    Afify, Laila H.

    2015-05-01

    Modeling aggregate network interference in cellular networks has recently gained immense attention both in academia and industry. While stochastic geometry based models have succeeded in accounting for the cellular network geometry, they mostly abstract many important wireless communication system aspects (e.g., modulation techniques, signal recovery techniques). Recently, a novel stochastic geometry model, based on the Equivalent-in-Distribution (EiD) approach, succeeded in capturing the aforementioned communication system aspects and extending the analysis to averaged error performance, however at the expense of increased modeling complexity. Inspired by the EiD approach, the analysis developed in [1] takes into consideration the key system parameters, while providing a simple tractable analysis. In this paper, we extend this framework to study the effect of different interference management techniques in downlink cellular networks. The accuracy of the proposed analysis is verified via Monte Carlo simulations.
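
    The generic stochastic-geometry setup behind such analyses can be reproduced with a short Monte Carlo experiment: base stations scattered at random, nearest-cell association, Rayleigh fading, and a path-loss law, from which a coverage probability is estimated. The sketch captures only that baseline setup, not the EiD-based error-rate analysis or the interference-management schemes studied in the paper; all parameters are illustrative.

```python
import math
import random

# A Monte Carlo sketch of the baseline downlink setup: base stations scattered
# at random over a square region, nearest-cell association, Rayleigh fading and
# power-law path loss, from which a coverage probability is estimated.

def coverage_probability(density=1e-5, alpha=4.0, sir_threshold_db=0.0,
                         half_width=2000.0, trials=1000, seed=3):
    rng = random.Random(seed)
    threshold = 10 ** (sir_threshold_db / 10.0)
    mean_count = density * (2.0 * half_width) ** 2
    covered = 0
    for _ in range(trials):
        # Poisson number of base stations (Gaussian approximation for a large mean)
        n_bs = max(1, int(round(rng.gauss(mean_count, math.sqrt(mean_count)))))
        dists = sorted(math.hypot(rng.uniform(-half_width, half_width),
                                  rng.uniform(-half_width, half_width))
                       for _ in range(n_bs))
        serving, interferers = dists[0], dists[1:]
        signal = rng.expovariate(1.0) * serving ** (-alpha)        # Rayleigh fading
        interference = sum(rng.expovariate(1.0) * d ** (-alpha) for d in interferers)
        if interference == 0.0 or signal / interference > threshold:
            covered += 1
    return covered / trials

print("estimated coverage probability:", coverage_probability())
```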

  1. Design theoretic analysis of three system modeling frameworks.

    Energy Technology Data Exchange (ETDEWEB)

    McDonald, Michael James

    2007-05-01

    This paper analyzes three simulation architectures from the context of modeling scalability to address System of System (SoS) and Complex System problems. The paper first provides an overview of the SoS problem domain and reviews past work in analyzing model and general system complexity issues. It then identifies and explores the issues of vertical and horizontal integration as well as coupling and hierarchical decomposition as the system characteristics and metrics against which the tools are evaluated. In addition, it applies Nam Suh's Axiomatic Design theory as a construct for understanding coupling and its relationship to system feasibility. Next it describes the application of MATLAB, Swarm, and Umbra (three modeling and simulation approaches) to modeling swarms of Unmanned Flying Vehicle (UAV) agents in relation to the chosen characteristics and metrics. Finally, it draws general conclusions for analyzing model architectures that go beyond those analyzed. In particular, it identifies decomposition along phenomena of interaction and modular system composition as enabling features for modeling large heterogeneous complex systems.

  2. An L-system model for root system mycorrhization

    Science.gov (United States)

    Schnepf, Andrea; Schweiger, Peter; Jansa, Jan; Leitner, Daniel

    2014-05-01

    Mineral phosphate fertilisers are a non-renewable resource; rock phosphate reserves are estimated to be depleted in 50 to 100 years. In order to prevent a severe phosphate crisis in the 21st century, there is a need to decrease agricultural inputs such as P fertilisers by making use of plant mechanisms that increase P acquisition efficiency. Most plants establish mycorrhizal symbiosis as an adaptation to increase/economize their P acquisition from the soil. However, there is a great functional diversity in P acquisition mechanisms among different fungal species that colonize the roots (Thonar et al. 2011), and the composition of the mycorrhizal community is known to depend strongly on agricultural management practices. Thus, the agroecosystem management may substantially affect the mycorrhizal functioning and also the use of P fertilizers. To date, it is still difficult to quantify the potential input savings for the agricultural crops through manipulation of their symbiotic microbiome, mainly due to lack of mechanistic understanding of P uptake dynamics by the fungal hyphae. In a first attempt, Schnepf et al. (2008b) have used mathematical modelling to show on the single-root scale how different fungal growth patterns influence root P uptake. However, their approach was limited by the fact that it was restricted to the scale of a single root. The goal of this work is to advance the dynamic, three-dimensional root architecture model of Leitner et al. (2010) to include root system infection with arbuscular mycorrhizal fungi and growth of external mycelium. The root system infection model assumes that there is an average probability of infection (primary infection), that the probability of infection of a new root segment immediately adjacent to an existing infection is much higher than the average (secondary infection), that infected root segments have entry points that are the link between internal and external mycelium, that only uninfected root segments are susceptible
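
    A toy sketch of the infection rules just described (segment growth, probabilities, and time stepping are all invented for illustration; this is not the Leitner et al. root architecture model):

    ```python
    import random

    random.seed(1)

    # A root system reduced to a list of segments; each segment knows its parent
    # (the adjacent, older segment) and whether it is infected.
    class Segment:
        def __init__(self, parent=None):
            self.parent = parent
            self.infected = False

    P_PRIMARY, P_SECONDARY = 0.02, 0.4   # hypothetical per-step probabilities

    segments = [Segment()]
    for step in range(50):
        # Root growth: a few of the youngest segments each produce a new one.
        new = [Segment(parent=s) for s in segments[-5:]]
        # Infection: secondary infection from an infected neighbour is far more
        # likely than primary infection from the soil, as in the model described.
        for s in new + segments:
            if s.infected:
                continue
            p = P_SECONDARY if (s.parent and s.parent.infected) else P_PRIMARY
            if random.random() < p:
                s.infected = True   # this segment becomes an entry point
        segments.extend(new)

    frac = sum(s.infected for s in segments) / len(segments)
    print(f"{len(segments)} segments, infected fraction = {frac:.2f}")
    ```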

  3. Spatial Models and Networks of Living Systems

    DEFF Research Database (Denmark)

    Juul, Jeppe Søgaard

    When studying the dynamics of living systems, insight can often be gained by developing a mathematical model that can predict future behaviour of the system or help classify system characteristics. However, in living cells, organisms, and especially groups of interacting individuals, a large number...... variables of the system. However, this approach disregards any spatial structure of the system, which may potentially change the behaviour drastically. An alternative approach is to construct a cellular automaton with nearest neighbour interactions, or even to model the system as a complex network...... with interactions defined by network topology. In this thesis I first describe three different biological models of ageing and cancer, in which spatial structure is important for the system dynamics. I then turn to describe characteristics of ecosystems consisting of three cyclically interacting species...
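
    The cyclically interacting three-species ecosystems mentioned are commonly illustrated with a rock-paper-scissors cellular automaton on a lattice; the sketch below is such a generic nearest-neighbour model with periodic boundaries and random sequential invasion (an illustration of the approach, not code from the thesis):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    N = 64
    grid = rng.integers(0, 3, size=(N, N))   # species 0, 1, 2: 0 beats 1, 1 beats 2, 2 beats 0
    neighbours = [(-1, 0), (1, 0), (0, -1), (0, 1)]

    def sweep(grid):
        new = grid.copy()
        for _ in range(N * N):                      # random sequential updates
            i, j = rng.integers(0, N, size=2)
            di, dj = neighbours[rng.integers(4)]
            ni, nj = (i + di) % N, (j + dj) % N     # periodic boundaries
            a, b = new[i, j], new[ni, nj]
            if (a - b) % 3 == 1:                    # neighbour's species beats this site
                new[i, j] = b                       # neighbour invades the site
        return new

    for t in range(20):
        grid = sweep(grid)
    print("species abundances after 20 sweeps:", np.bincount(grid.ravel(), minlength=3))
    ```

    With nearest-neighbour interactions the three species typically coexist in slowly moving spatial domains, whereas the corresponding well-mixed (mean-field) model tends to lose species; this is the kind of qualitative difference spatial structure can make.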

  4. Long-Term Adult Feline Liver Organoid Cultures for Disease Modeling of Hepatic Steatosis.

    Science.gov (United States)

    Kruitwagen, Hedwig S; Oosterhoff, Loes A; Vernooij, Ingrid G W H; Schrall, Ingrid M; van Wolferen, Monique E; Bannink, Farah; Roesch, Camille; van Uden, Lisa; Molenaar, Martijn R; Helms, J Bernd; Grinwis, Guy C M; Verstegen, Monique M A; van der Laan, Luc J W; Huch, Meritxell; Geijsen, Niels; Vries, Robert G; Clevers, Hans; Rothuizen, Jan; Schotanus, Baukje A; Penning, Louis C; Spee, Bart

    2017-04-11

    Hepatic steatosis is a highly prevalent liver disease, yet research is hampered by the lack of tractable cellular and animal models. Steatosis also occurs in cats, where it can cause severe hepatic failure. Previous studies demonstrate the potential of liver organoids for modeling genetic diseases. To examine the possibility of using organoids to model steatosis, we established a long-term feline liver organoid culture with adult liver stem cell characteristics and differentiation potential toward hepatocyte-like cells. Next, organoids from mouse, human, dog, and cat liver were provided with fatty acids. Lipid accumulation was observed in all organoids and interestingly, feline liver organoids accumulated more lipid droplets than human organoids. Finally, we demonstrate effects of interference with β-oxidation on lipid accumulation in feline liver organoids. In conclusion, feline liver organoids can be successfully cultured and display a predisposition for lipid accumulation, making them an interesting model in hepatic steatosis research. Copyright © 2017 The Author(s). Published by Elsevier Inc. All rights reserved.

  5. System and circuit models for microwave antennas

    OpenAIRE

    Sobhy, Mohammed; Sanz-Izquierdo, Benito; Batchelor, John C.

    2007-01-01

    This paper describes how circuit and system models are derived for antennas from measurement of the input reflection coefficient. Circuit models are used to optimize the antenna performance and to calculate the radiated power and the transfer function of the antenna. System models are then derived for transmitting and receiving antennas. The most important contribution of this study is to show how microwave structures can be integrated into the simulation of digital communication systems. Thi...

  6. Engineered Barrier System: Physical and Chemical Environment Model

    International Nuclear Information System (INIS)

    Jolley, D. M.; Jarek, R.; Mariner, P.

    2004-01-01

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports

  7. Fermion dynamical symmetry and the nuclear shell model

    International Nuclear Information System (INIS)

    Ginocchio, J.N.

    1985-01-01

    The interacting boson model (IBM) has been very successful in giving a unified and simple description of the spectroscopic properties of a wide range of nuclei, from vibrational through rotational nuclei. The three basic assumptions of the model are that: (1) the valence nucleons move about a doubly closed core, (2) the collective low-lying states are composed primarily of coherent pairs of neutrons and pairs of protons coupled to angular momentum zero and two, and (3) these coherent pairs are approximated as bosons. In this review we shall show how it is possible to have fermion Hamiltonians which have a class of collective eigenstates composed entirely of monopole and quadrupole pairs of fermions. Hence these models satisfy the assumptions (1) and (2) above but no boson approximation need be made. Thus the Pauli principle is kept intact. Furthermore the fermion shell model states excluded in the IBM can be classified by the number of fermion pairs which are not coherent monopole or quadrupole pairs. Hence the mixing of these states into the low-lying spectrum can be calculated in a systematic and tractable manner. Thus we can introduce features which are outside the IBM. 11 refs

  8. Using Difference Equation to Model Discrete-time Behavior in System Dynamics Modeling

    NARCIS (Netherlands)

    Hesan, R.; Ghorbani, A.; Dignum, M.V.

    2014-01-01

    In system dynamics modeling, differential equations have been used as the basic mathematical operator. Using difference equations to build system dynamics models instead of differential equations can be insightful for studying small organizations or systems with micro behavior. In this paper we
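
    A minimal example of the difference-equation style referred to, using an invented inventory-adjustment stock-and-flow rule (all names and parameter values are illustrative):

    ```python
    # A stock-and-flow model written directly as a difference equation,
    #   x[t+1] = x[t] + dt * (inflow - outflow),
    # rather than as a differential equation dx/dt = inflow - outflow.
    dt = 1.0                     # one discrete time step, e.g. one week
    stock = 100.0                # initial inventory
    target, adjust_time = 150.0, 4.0
    demand = 20.0

    history = []
    for t in range(20):
        production = demand + (target - stock) / adjust_time   # decision rule
        stock = stock + dt * (production - demand)              # difference equation
        history.append(round(stock, 1))

    print(history)   # the stock adjusts toward the target in discrete steps
    ```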

  9. The simplest maximum entropy model for collective behavior in a neural network

    International Nuclear Information System (INIS)

    Tkačik, Gašper; Marre, Olivier; Mora, Thierry; Amodei, Dario; Bialek, William; Berry II, Michael J

    2013-01-01

    Recent work emphasizes that the maximum entropy principle provides a bridge between statistical mechanics models for collective behavior in neural networks and experiments on networks of real neurons. Most of this work has focused on capturing the measured correlations among pairs of neurons. Here we suggest an alternative, constructing models that are consistent with the distribution of global network activity, i.e. the probability that K out of N cells in the network generate action potentials in the same small time bin. The inverse problem that we need to solve in constructing the model is analytically tractable, and provides a natural ‘thermodynamics’ for the network in the limit of large N. We analyze the responses of neurons in a small patch of the retina to naturalistic stimuli, and find that the implied thermodynamics is very close to an unusual critical point, in which the entropy (in proper units) is exactly equal to the energy. (paper)
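
    A minimal sketch of the construction described, run on synthetic spike words (the data generator and all numbers are assumptions): because the model depends on a spike pattern only through the population count K, matching the observed P(K) fixes an effective potential V(K) up to an additive constant.

    ```python
    import numpy as np
    from math import lgamma

    rng = np.random.default_rng(0)

    # Synthetic "spike words": N cells, correlated through a shared global input.
    N, T = 40, 20000
    common = rng.normal(size=T)
    spikes = (rng.normal(size=(T, N)) + 1.2 * common[:, None] > 1.5).astype(int)
    K = spikes.sum(axis=1)                      # population count per time bin

    # Empirical distribution of the summed activity, P(K).
    P_K = np.bincount(K, minlength=N + 1) / T

    # Maximum-entropy model consistent with P(K) alone:
    #   P(sigma) = exp(-V(K(sigma))) / Z,  so  P(K) = C(N, K) exp(-V(K)) / Z.
    # Hence, up to an additive constant, V(K) = log C(N, K) - log P(K).
    def log_binom(n, k):
        return lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)

    with np.errstate(divide="ignore"):
        V = np.array([log_binom(N, k) for k in range(N + 1)]) - np.log(P_K)

    for k in range(0, N + 1, 5):
        if np.isfinite(V[k]):
            print(f"K={k:2d}  P(K)={P_K[k]:.4f}  V(K)={V[k]:.2f}")
    ```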

  10. The FEL-TNO uniform open systems model

    NARCIS (Netherlands)

    Luiijf, H.A.M.; Overbeek, P.L.

    1989-01-01

    The FEL-TNO Uniform Open Systems Model is based upon the ISO/OSI Basic Reference Model and integrates operating systems, (OSI) networks, equipment and media into one single uniform model. Usage of the model stimulates the development of operating system and network independent applications and puts

  11. Intrinsic Uncertainties in Modeling Complex Systems.

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, Curtis S; Bramson, Aaron L.; Ames, Arlo L.

    2014-09-01

    Models are built to understand and predict the behaviors of both natural and artificial systems. Because it is always necessary to abstract away aspects of any non-trivial system being modeled, we know models can potentially leave out important, even critical elements. This reality of the modeling enterprise forces us to consider the prospective impacts of those effects completely left out of a model - either intentionally or unconsidered. Insensitivity to new structure is an indication of diminishing returns. In this work, we represent a hypothetical unknown effect on a validated model as a finite perturbation whose amplitude is constrained within a control region. We find robustly that without further constraints, no meaningful bounds can be placed on the amplitude of a perturbation outside of the control region. Thus, forecasting into unsampled regions is a very risky proposition. We also present inherent difficulties with proper time discretization of models and representing inherently discrete quantities. We point out potentially worrisome uncertainties, arising from mathematical formulation alone, which modelers can inadvertently introduce into models of complex systems. Acknowledgements This work has been funded under early-career LDRD project #170979, entitled "Quantifying Confidence in Complex Systems Models Having Structural Uncertainties", which ran from 04/2013 to 09/2014. We wish to express our gratitude to the many researchers at Sandia who contributed ideas to this work, as well as feedback on the manuscript. In particular, we would like to mention George Barr, Alexander Outkin, Walt Beyeler, Eric Vugrin, and Laura Swiler for providing invaluable advice and guidance through the course of the project. We would also like to thank Steven Kleban, Amanda Gonzales, Trevor Manzanares, and Sarah Burwell for their assistance in managing project tasks and resources.

  12. Modeling of nonlinear biological phenomena modeled by S-systems.

    Science.gov (United States)

    Mansouri, Majdi M; Nounou, Hazem N; Nounou, Mohamed N; Datta, Aniruddha A

    2014-03-01

    A central challenge in computational modeling of biological systems is the determination of the model parameters. In such cases, estimating these variables or parameters from other easily obtained measurements can be extremely useful. For example, time-series dynamic genomic data can be used to develop models representing dynamic genetic regulatory networks, which can be used to design intervention strategies to cure major diseases and to better understand the behavior of biological systems. Unfortunately, biological measurements are usually highly infected by errors that hide the important characteristics in the data. Therefore, these noisy measurements need to be filtered to enhance their usefulness in practice. This paper addresses the problem of state and parameter estimation of biological phenomena modeled by S-systems using Bayesian approaches, where the nonlinear observed system is assumed to progress according to a probabilistic state space model. The performances of various conventional and state-of-the-art state estimation techniques are compared. These techniques include the extended Kalman filter (EKF), unscented Kalman filter (UKF), particle filter (PF), and the developed variational Bayesian filter (VBF). Specifically, two comparative studies are performed. In the first comparative study, the state variables (the enzyme CadA, the model cadBA, the cadaverine Cadav and the lysine Lys for a model of the Cad System in Escherichia coli (CSEC)) are estimated from noisy measurements of these variables, and the various estimation techniques are compared by computing the estimation root mean square error (RMSE) with respect to the noise-free data. In the second comparative study, the state variables as well as the model parameters are simultaneously estimated. In this case, in addition to comparing the performances of the various state estimation techniques, the effect of the number of estimated model parameters on the accuracy and convergence of these
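
    For orientation, a minimal sketch of the kind of system being estimated (not the paper's Cad System model; all parameters are illustrative): a generic two-variable S-system in power-law form is integrated with forward Euler and noisy measurements are recorded, to which an EKF, UKF, particle filter, or variational Bayesian filter would then be applied.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Generic two-variable S-system (power-law form):
    #   dX_i/dt = alpha_i * prod_j X_j**g_ij  -  beta_i * prod_j X_j**h_ij
    alpha = np.array([2.0, 1.5])
    beta  = np.array([1.2, 1.0])
    g = np.array([[0.0, -0.5],     # production exponents g_ij (illustrative)
                  [0.6,  0.0]])
    h = np.array([[0.7,  0.0],     # degradation exponents h_ij (illustrative)
                  [0.0,  0.8]])

    def s_system(x):
        prod = lambda expo: np.prod(x ** expo, axis=1)
        return alpha * prod(g) - beta * prod(h)

    # Simulate with forward Euler and record noisy measurements, which a state
    # estimator would then try to clean up.
    dt, steps = 0.01, 1000
    x = np.array([1.0, 0.5])
    true_traj, noisy_meas = [], []
    for _ in range(steps):
        x = np.clip(x + dt * s_system(x), 1e-6, None)   # keep concentrations positive
        true_traj.append(x.copy())
        noisy_meas.append(x + rng.normal(scale=0.05, size=2))

    print("final true state:", np.round(true_traj[-1], 3))
    ```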

  13. Microphysics in Multi-scale Modeling System with Unified Physics

    Science.gov (United States)

    Tao, Wei-Kuo

    2012-01-01

    Recently, a multi-scale modeling system with unified physics was developed at NASA Goddard. It consists of (1) a cloud-resolving model (the Goddard Cumulus Ensemble model, GCE model), (2) a regional-scale model (the NASA unified Weather Research and Forecasting model, WRF), (3) a coupled CRM and global model (the Goddard Multi-scale Modeling Framework, MMF), and (4) a land modeling system. The same microphysical processes, long- and short-wave radiative transfer, land processes, and the explicit cloud-radiation and cloud-land surface interaction processes are applied in this multi-scale modeling system. This modeling system has been coupled with a multi-satellite simulator to use NASA high-resolution satellite data to identify the strengths and weaknesses of cloud and precipitation processes simulated by the model. In this talk, a review of developments and applications of the multi-scale modeling system will be presented. In particular, the microphysics development and its performance within the multi-scale modeling system will be presented.

  14. Mathematical Modeling of Hybrid Electrical Engineering Systems

    Directory of Open Access Journals (Sweden)

    A. A. Lobaty

    2016-01-01

    Full Text Available A large class of systems that have found application in various industries and households, electrified transportation facilities and the energy sector has been classified as electrical engineering systems. Their characteristic feature is a combination of continuous and discontinuous modes of operation, which is reflected in the appearance of a relatively new term, “hybrid systems”. A wide class of hybrid systems is pulsed DC converters operating with pulse-width modulation, which are non-linear systems with variable structure. Using various methods of linearization it is possible to obtain linear mathematical models that rather accurately simulate the behavior of such systems. However, the presence of exponential nonlinearities in the mathematical models creates considerable difficulties for implementation in digital hardware. A solution can be found by approximating the exponential functions with first-order polynomials, which, however, violates the strict correspondence of the analytical model to the characteristics of the real object. There are two practical approaches to synthesizing algorithms for the control of hybrid systems. The first approach is based on representing the whole system by a discrete model described by difference equations, which makes it possible to synthesize discrete algorithms. The second approach is based on describing the system by differential equations, which underpin the synthesis of continuous algorithms and their further implementation in a digital computer included in the control loop. The paper considers modeling of a hybrid electrical engineering system using differential equations. Neglecting the pulse duration, it has been proposed to describe the behavior of vector components in phase coordinates of the hybrid system by stochastic differential equations containing, in general, non-linear differentiable random functions. A stochastic vector-matrix equation describing dynamics of the

  15. CTBT integrated verification system evaluation model supplement

    International Nuclear Information System (INIS)

    EDENBURN, MICHAEL W.; BUNTING, MARCUS; PAYNE, ARTHUR C. JR.; TROST, LAWRENCE C.

    2000-01-01

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level,'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0

  16. A distributed snow-evolution modeling system (SnowModel)

    Science.gov (United States)

    Glen E. Liston; Kelly. Elder

    2006-01-01

    SnowModel is a spatially distributed snow-evolution modeling system designed for application in landscapes, climates, and conditions where snow occurs. It is an aggregation of four submodels: MicroMet defines meteorological forcing conditions, EnBal calculates surface energy exchanges, SnowPack simulates snow depth and water-equivalent evolution, and SnowTran-3D...

  17. Challenges in Modeling the Sun-Earth System

    Science.gov (United States)

    Spann, James

    2004-01-01

    The transfer of mass, energy and momentum through the coupled Sun-Earth system spans a wide range of scales in time and space. While profound advances have been made in modeling isolated regions of the Sun-Earth system, minimal progress has been achieved in modeling the end-to-end system. Currently, end-to-end modeling of the Sun-Earth system is a major goal of the National Space Weather and NASA Living With a Star (LWS) programs. The uncertainty in the underlying physics responsible for coupling contiguous regions of the Sun-Earth system is recognized as a significant barrier to progress. Our limited understanding of the underlying coupling physics is illustrated by the following example questions: how does the propagation of a typical CME/solar flare influence the measured properties of the solar wind at 1 AU? How does the solar wind compel the dynamic response of the Earth's magnetosphere? How is variability in the ionosphere-thermosphere system coupled to magnetospheric variations? Why do these and related important questions remain unanswered? What are the primary problems that need to be resolved to enable significant progress in comprehensive modeling of the Sun-Earth system? Which model/technique improvements are required and what new data coverage is required to enable full model advances? This poster opens the discussion for how these and other important questions can be addressed. A workshop scheduled for October 8-22, 2004 in Huntsville, Alabama, will be a forum for identifying and exploring promising new directions and approaches for characterizing and understanding the system. To focus the discussion, the workshop will emphasize the genesis, evolution, propagation and interaction of high-speed solar wind streamers or CME/flares with geospace and the subsequent response of geospace from its outer reaches in the magnetosphere to the lower edge of the ionosphere-mesosphere-thermosphere. Particular emphasis will be placed on modeling the coupling aspects

  18. Models of radon entry: A review

    International Nuclear Information System (INIS)

    Gadgil, A.J.

    1991-08-01

    This paper reviews existing models of radon entry into houses. The primary mechanism of radon entry in houses with high indoor concentrations is, in most cases, convective entry of radon bearing soil-gas from the surrounding soil. The driving force for this convective entry is the small indoor-outdoor pressure difference arising from the stack effect and other causes. Entry points for the soil-gas generally are the cracks or gaps in the building substructure, or through other parts of the building shell in direct contact with the soil, although entry may also occur by flow through permeable concrete or cinder block walls of the substructure. Models using analytical solutions to idealized geometrical configurations with simplified boundary conditions obtain analytical tractability of the equations to be solved at the cost of severe approximations; their strength is in the insights they offer with their solutions. Models based on lumped parameters attempt to characterize the significant physical behavioral characteristics of the soil-gas and radon flow. When realistic approximations are desired for the boundary conditions and terms in the governing equations, numerical models must be used; these are usually based on finite difference or finite element solutions to the governing equations. Limited data are now available for experimental verification of model predictions. The models are briefly reviewed and their strengths and limitations are discussed.

  19. Practical Robust Optimization Method for Unit Commitment of a System with Integrated Wind Resource

    Directory of Open Access Journals (Sweden)

    Yuanchao Yang

    2017-01-01

    Full Text Available Unit commitment, one of the significant tasks in power system operations, faces new challenges as system uncertainty increases dramatically due to the integration of time-varying resources, such as wind. To address these challenges, we propose the formulation and solution of a generalized unit commitment problem for a system with integrated wind resources. Given prespecified interval information, acquired from a real central wind forecasting system, for the uncertainty representation of nodal wind injections and their correlation, the proposed unit commitment solution is computationally tractable and robust against all realizations of the uncertain wind power injections. We provide a solution approach to tackle this mathematically complex problem and illustrate the capabilities of the proposed mixed integer solution approach on the large-scale power system of the Northwest China Grid. The numerical results demonstrate that the approach is realistic and not overly conservative in terms of the resulting dispatch cost outcomes.
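
    The paper's formulation is not reproduced here; as a much-simplified illustration of interval-based robustness in unit commitment, the toy sketch below commits units so that demand is met even if wind delivers only the lower bound of its forecast interval in every hour. PuLP with its bundled CBC solver is assumed to be available; the unit data, costs, and demand are invented, and wind correlation is ignored.

    ```python
    import pulp

    # Toy data (illustrative): 3 units, 4 hours, interval wind forecast lower bound.
    units = {"U1": dict(pmax=100, cost=20, fixed=500),
             "U2": dict(pmax=80,  cost=35, fixed=300),
             "U3": dict(pmax=60,  cost=50, fixed=100)}
    demand  = [150, 180, 200, 170]
    wind_lo = [10, 20, 15, 30]          # lower bound of the wind interval per hour
    T = range(len(demand))

    prob = pulp.LpProblem("robust_uc_toy", pulp.LpMinimize)
    u = pulp.LpVariable.dicts("on",  [(g, t) for g in units for t in T], cat="Binary")
    p = pulp.LpVariable.dicts("gen", [(g, t) for g in units for t in T], lowBound=0)

    # Commitment (fixed) plus dispatch (variable) cost.
    prob += pulp.lpSum(units[g]["fixed"] * u[g, t] + units[g]["cost"] * p[g, t]
                       for g in units for t in T)

    for t in T:
        # Crude interval robustness: cover demand under the worst-case wind.
        prob += pulp.lpSum(p[g, t] for g in units) >= demand[t] - wind_lo[t]
        for g in units:
            prob += p[g, t] <= units[g]["pmax"] * u[g, t]

    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    schedule = {t: [g for g in units if u[g, t].value() > 0.5] for t in T}
    print("committed units per hour:", schedule)
    ```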

  20. Quantum cosmological relational model of shape and scale in 1D

    International Nuclear Information System (INIS)

    Anderson, Edward

    2011-01-01

    Relational particle models are useful toy models for quantum cosmology and the problem of time in quantum general relativity. This paper shows how to extend existing work on concrete examples of relational particle models in 1D to include a notion of scale. This is useful as regards forming a tight analogy with quantum cosmology and the emergent semiclassical time and hidden time approaches to the problem of time. This paper shows furthermore that the correspondence between relational particle models and classical and quantum cosmology can be strengthened using judicious choices of the mechanical potential. This gives relational particle mechanics models with analogues of spatial curvature, cosmological constant, dust and radiation terms. A number of these models are then tractable at the quantum level. These models can be used to study important issues (1) in canonical quantum gravity: the problem of time, the semiclassical approach to it and timeless approaches to it (such as the naive Schroedinger interpretation and records theory) and (2) in quantum cosmology, such as in the investigation of uniform states, robustness and the qualitative understanding of the origin of structure formation.

  1. System of systems dependability – Theoretical models and applications examples

    International Nuclear Information System (INIS)

    Bukowski, L.

    2016-01-01

    The aim of this article is to generalise the concept of 'dependability' in a way that can be applied to all types of systems, especially the system of systems (SoS), operating under both normal and abnormal work conditions. In order to quantitatively assess dependability, we applied a service-continuity-oriented approach. This approach is based on the methodology of service engineering and is closely related to the idea of the resilient enterprise as well as to the concept of disruption-tolerant operation. On this basis a framework for the evaluation of SoS dependability has been developed in both a static and a dynamic approach. The static model is created as a fuzzy-logic-oriented advisory expert system and can be particularly useful at the design stage of an SoS. The dynamic model is based on a risk-oriented approach and can be useful both at the design stage and for the management of an SoS. The integrated model of dependability can also form the basis for a new definition of dependability engineering, namely as a superior discipline to reliability engineering, safety engineering, security engineering, resilience engineering and risk engineering. - Highlights: • A framework for evaluation of system of systems dependability is presented. • The model is based on the service continuity concept and consists of two parts. • The static part can be created as a fuzzy-logic-oriented advisory expert system. • The dynamic, risk-oriented part is related to the concept of the throughput chain. • A new definition of dependability engineering is proposed.

  2. Developing a Model of the Irish Energy-System

    DEFF Research Database (Denmark)

    Connolly, David; Lund, Henrik; Mathiesen, Brian Vad

    2009-01-01

    ... is a vital step due to the scale of the change required for large-scale renewable penetrations. In this paper, a model of the Irish energy system is created to identify how Ireland can transform from a fossil-fuel to a renewable energy-system. The energy-systems-analysis tool, EnergyPLAN, was chosen to create the model as it accounts for all sectors that need to be considered for integrating large penetrations of renewable energy: the electricity, heat and transport sectors. Before various alternative energy-systems could be investigated for Ireland, a reference model of the existing system needed to be created. This paper focuses on the construction of this reference model, in terms of the data gathered, the assumptions made and the accuracy achieved. In future work, this model will be used to investigate alternative energy-systems for Ireland, with the aim to determine the most effective energy system...

  3. Hybrid simulation models for data-intensive systems

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00473067

    Data-intensive systems are used to access and store massive amounts of data by combining the storage resources of multiple data-centers, usually deployed all over the world, in one system. This enables users to utilize these massive storage capabilities in a simple and efficient way. However, with the growth of these systems it becomes a hard problem to estimate the effects of modifications to the system, such as data placement algorithms or hardware upgrades, and to validate these changes for potential side effects. This thesis addresses the modeling of operational data-intensive systems and presents a novel simulation model which estimates the performance of system operations. The running example used throughout this thesis is the data-intensive system Rucio, which is used as the data management system of the ATLAS experiment at CERN’s Large Hadron Collider. Existing system models in literature are not applicable to data-intensive workflows, as they only consider computational workflows or make assumpti...

  4. A mathematical model of a crocodilian population using delay-differential equations.

    Science.gov (United States)

    Gallegos, Angela; Plummer, Tenecia; Uminsky, David; Vega, Cinthia; Wickman, Clare; Zawoiski, Michael

    2008-11-01

    The crocodilia have multiple interesting characteristics that affect their population dynamics. They are among several reptile species which exhibit temperature-dependent sex determination (TSD) in which the temperature of egg incubation determines the sex of the hatchlings. Their life parameters, specifically birth and death rates, exhibit strong age-dependence. We develop delay-differential equation (DDE) models describing the evolution of a crocodilian population. In using the delay formulation, we are able to account for both the TSD and the age-dependence of the life parameters while maintaining some analytical tractability. In our single-delay model we also find an equilibrium point and prove its local asymptotic stability. We numerically solve the different models and investigate the effects of multiple delays on the age structure of the population as well as the sex ratio of the population. For all models we obtain very strong agreement with the age structure of crocodilian population data as reported in Smith and Webb (Aust. Wild. Res. 12, 541-554, 1985). We also obtain reasonable values for the sex ratio of the simulated population.
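
    Not the paper's equations, but a minimal sketch of how a single-delay DDE population model can be integrated with the method of steps and a fixed-step Euler scheme (numpy assumed; the delayed-recruitment form and all parameter values are illustrative):

    ```python
    import numpy as np

    # Generic single-delay population model (illustrative only):
    #   dP/dt = b * P(t - tau) * exp(-P(t - tau) / K) - mu * P(t)
    # Recruitment depends on the population tau years earlier (egg-to-adult lag).
    b, mu, K, tau = 0.8, 0.3, 50.0, 5.0
    dt, years = 0.01, 100.0
    lag, steps = int(round(tau / dt)), int(round(years / dt))

    # Index n corresponds to time (n - lag) * dt, so the first lag + 1 entries
    # hold the constant history on [-tau, 0] (method of steps with Euler).
    P = np.empty(steps + lag + 1)
    P[:lag + 1] = 10.0
    for n in range(lag, lag + steps):
        P_delayed = P[n - lag]                # P(t - tau)
        dP = b * P_delayed * np.exp(-P_delayed / K) - mu * P[n]
        P[n + 1] = max(P[n] + dt * dP, 0.0)

    print(f"population after {years:.0f} years: {P[-1]:.1f}")
    ```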

  5. Modeling of battery energy storage in the National Energy Modeling System

    Energy Technology Data Exchange (ETDEWEB)

    Swaminathan, S.; Flynn, W.T.; Sen, R.K. [Sentech, Inc., Bethesda, MD (United States)

    1997-12-01

    The National Energy Modeling System (NEMS) developed by the U.S. Department of Energy's Energy Information Administration is a well-recognized model that is used to project the potential impact of new electric generation technologies. The NEMS model does not presently have the capability to model energy storage on the national grid. The scope of this study was to assess the feasibility of, and make recommendations for, the modeling of battery energy storage systems in the Electricity Market of the NEMS. Incorporating storage within the NEMS will allow the national benefits of storage technologies to be evaluated.

  6. Engineered Barrier System: Physical and Chemical Environment Model

    Energy Technology Data Exchange (ETDEWEB)

    D. M. Jolley; R. Jarek; P. Mariner

    2004-02-09

    The conceptual and predictive models documented in this Engineered Barrier System: Physical and Chemical Environment Model report describe the evolution of the physical and chemical conditions within the waste emplacement drifts of the repository. The modeling approaches and model output data will be used in the total system performance assessment (TSPA-LA) to assess the performance of the engineered barrier system and the waste form. These models evaluate the range of potential water compositions within the emplacement drifts, resulting from the interaction of introduced materials and minerals in dust with water seeping into the drifts and with aqueous solutions forming by deliquescence of dust (as influenced by atmospheric conditions), and from thermal-hydrological-chemical (THC) processes in the drift. These models also consider the uncertainty and variability in water chemistry inside the drift and the compositions of introduced materials within the drift. This report develops and documents a set of process- and abstraction-level models that constitute the engineered barrier system: physical and chemical environment model. Where possible, these models use information directly from other process model reports as input, which promotes integration among process models used for total system performance assessment. Specific tasks and activities of modeling the physical and chemical environment are included in the technical work plan ''Technical Work Plan for: In-Drift Geochemistry Modeling'' (BSC 2004 [DIRS 166519]). As described in the technical work plan, the development of this report is coordinated with the development of other engineered barrier system analysis model reports.

  7. Models for a stand-alone PV system

    DEFF Research Database (Denmark)

    Hansen, A.D.; Sørensen, Poul Ejnar; Hansen, L.H.

    2001-01-01

    This report presents a number of models for modelling and simulation of a stand-alone photovoltaic (PV) system with a battery bank verified against a system installed at Risø National Laboratory. The work has been supported by the Danish Ministry of Energy, as a part of the activities in the Solar Energy Centre Denmark. The study is carried out at Risø National Laboratory with the main purpose to establish a library of simple mathematical models for each individual element of a stand-alone PV system, namely solar cells, battery, controller, inverter and load. The models for PV module and battery are based on the model descriptions found in the literature. The battery model is developed at UMASS and is known as the Kinetic Battery Model (KiBaM). The other component models in the PV system are based on simple electrical knowledge. The implementation is done using Matlab/Simulink, a simulation program...
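
    For orientation, a minimal Python sketch of the two-well idea behind the Kinetic Battery Model (one common textbook formulation, written here from general knowledge rather than taken from the report; all parameter values are illustrative):

    ```python
    # Two-well Kinetic Battery Model (KiBaM) sketch: total charge is split into
    # an "available" well q1 and a "bound" well q2, and charge migrates between
    # them at a rate set by k. Parameter values are illustrative only.
    c, k = 0.6, 0.5          # available-charge fraction, rate constant [1/h]
    capacity = 100.0         # total capacity [Ah]
    q1, q2 = c * capacity, (1 - c) * capacity

    dt = 0.01                # time step [h]
    current = 20.0           # constant discharge current [A]

    t = 0.0
    while q1 > 0.0:
        h1, h2 = q1 / c, q2 / (1 - c)          # "heights" of the two wells
        flow = k * (h2 - h1)                   # bound -> available charge flow
        q1 += dt * (-current + flow)
        q2 += dt * (-flow)
        t += dt

    print(f"battery empty after about {t:.2f} h at {current:.0f} A")
    ```

    The run ends with charge still bound in the second well, which is the rate-capacity effect the two-well structure is intended to capture.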

  8. Robust Admissibilization of Descriptor Systems by Static Output-Feedback: An LMI Approach

    Directory of Open Access Journals (Sweden)

    M. Chaabane

    2011-01-01

    The robust admissibilization of descriptor systems by static output-feedback is studied in this paper and an approach to solve it is proposed. For this, sufficient conditions are derived for the closed-loop system to be admissible (i.e., stable, regular, and impulse-free). These conditions are expressed in terms of a strict Linear Matrix Inequality (LMI), so they are tractable using numerical computations. The proposed controller design methodology is based on two steps: the first is dedicated to synthesizing a classical state-feedback controller, which is used as the initial value for the second step, which uses an LMI problem to obtain static output-feedback controllers that ensure admissibility. Finally, a numerical example is given to illustrate the results.

  9. On domain modelling of the service system with its application to enterprise information systems

    Science.gov (United States)

    Wang, J. W.; Wang, H. F.; Ding, J. L.; Furuta, K.; Kanno, T.; Ip, W. H.; Zhang, W. J.

    2016-01-01

    Information systems are a kind of service system, and they run throughout every element of a modern industrial and business system, much like blood in our body. Types of information systems are heterogeneous because of the extreme uncertainty in changes in modern industrial and business systems. To effectively manage information systems, modelling of the work domain (or domain) of information systems is necessary. In this paper, a domain modelling framework for the service system is proposed and its application to the enterprise information system is outlined. The framework is defined based on application of a general domain modelling tool called function-context-behaviour-principle-state-structure (FCBPSS). The FCBPSS is based on a set of core concepts, namely: function, context, behaviour, principle, state and structure, and system decomposition. Different from many other applications of FCBPSS in systems engineering, the FCBPSS is applied to both infrastructure and substance systems, which is novel and effective for modelling service systems, including enterprise information systems. It is to be noted that domain modelling of systems (e.g. enterprise information systems) is a key to the integration of heterogeneous systems and to coping with unanticipated situations facing such systems.

  10. Canadian Association of Neurosciences Review: learning at a snail's pace.

    Science.gov (United States)

    Parvez, Kashif; Rosenegger, David; Martens, Kara; Orr, Michael; Lukowiak, Ken

    2006-11-01

    While learning and memory are related, they are distinct processes each with different forms of expression and underlying molecular mechanisms. An invertebrate model system, Lymnaea stagnalis, is used to study memory formation of a non-declarative memory. We have done so because: (1) We have discovered the neural circuit that mediates an interesting and tractable behaviour; (2) This behaviour can be operantly conditioned and intermediate-term and long-term memory can be demonstrated; and (3) It is possible to demonstrate that a single neuron in the model system is a necessary site of memory formation. This article reviews how Lymnaea has been used in the study of behavioural and molecular mechanisms underlying consolidation, reconsolidation, extinction and forgetting.

  11. Modelling and control of dynamic systems using gaussian process models

    CERN Document Server

    Kocijan, Juš

    2016-01-01

    This monograph opens up new horizons for engineers and researchers in academia and in industry dealing with or interested in new developments in the field of system identification and control. It emphasizes guidelines for working solutions and practical advice for their implementation rather than the theoretical background of Gaussian process (GP) models. The book demonstrates the potential of this recent development in probabilistic machine-learning methods and gives the reader an intuitive understanding of the topic. The current state of the art is treated along with possible future directions for research. Systems control design relies on mathematical models and these may be developed from measurement data. This process of system identification, when based on GP models, can play an integral part of control design in data-based control and its description as such is an essential aspect of the text. The background of GP regression is introduced first with system identification and incorporation of prior know...
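
    As a generic, minimal illustration of GP-based system identification (not code from the monograph), the sketch below fits a one-step-ahead GP model of a simulated nonlinear system using a fixed RBF kernel; in practice the hyperparameters would be optimized on the data rather than set by hand.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Identify a one-step-ahead model y[t+1] = f(y[t], u[t]) from data using
    # plain GP regression with a fixed RBF kernel (hyperparameters set by hand).
    def simulate(n):
        y, u = np.zeros(n + 1), rng.uniform(-1, 1, n)
        for t in range(n):
            y[t + 1] = 0.8 * np.tanh(y[t]) + 0.4 * u[t] + 0.02 * rng.normal()
        return y, u

    y, u = simulate(200)
    X = np.column_stack([y[:-1], u])          # inputs:  (y[t], u[t])
    Y = y[1:]                                 # targets: y[t+1]

    def rbf(A, B, ell=0.7, sf=1.0):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return sf**2 * np.exp(-0.5 * d2 / ell**2)

    noise = 0.02
    K = rbf(X, X) + noise**2 * np.eye(len(X))
    alpha = np.linalg.solve(K, Y)

    # Predictive mean and variance for a new state/input pair.
    x_star = np.array([[0.3, -0.5]])
    mean = rbf(x_star, X) @ alpha
    var = rbf(x_star, x_star) - rbf(x_star, X) @ np.linalg.solve(K, rbf(X, x_star))
    print(f"predicted y[t+1] = {mean[0]:.3f} +/- {np.sqrt(var[0, 0]):.3f}")
    ```

    The predictive variance is what makes GP models attractive for control design: the controller can be made cautious where the identified model is uncertain.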

  12. System Dynamics Modeling for Supply Chain Information Sharing

    Science.gov (United States)

    Feng, Yang

    In this paper, we use the method of system dynamics to model supply chain information sharing. First, we determine the model boundaries, establish a system dynamics model of the supply chain before information sharing, analyze the model's simulation results under different parameter changes, and suggest improvements. Then, we establish a system dynamics model of the supply chain with information sharing and compare and analyze the two models' simulation results, to show the importance of information sharing in supply chain management. We hope that these simulations will provide scientific support for enterprise decision-making.
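
    A toy discrete-time illustration of the kind of comparison made (an invented two-echelon chain with an order-up-to policy, not the paper's model): when the distributor forecasts from retailer orders instead of end-customer demand, order variability is amplified further.

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    demand = 20 + rng.normal(0, 4, 200)          # weekly end-customer demand

    def echelon_orders(incoming, lead_time=2, window=4):
        """Order-up-to policy with a moving-average forecast: each period the
        stage orders the change in its base-stock level plus observed demand."""
        orders, prev_base = [], None
        for t in range(len(incoming)):
            forecast = float(np.mean(incoming[max(0, t - window + 1):t + 1]))
            base = (lead_time + 1) * forecast
            order = incoming[t] if prev_base is None else max(0.0, base - prev_base + incoming[t])
            orders.append(order)
            prev_base = base
        return np.array(orders)

    retailer   = echelon_orders(demand)
    no_sharing = echelon_orders(retailer)   # distributor only sees retailer orders
    sharing    = echelon_orders(demand)     # distributor forecasts from end demand

    print("std of end demand:                     ", round(demand.std(), 1))
    print("std of distributor orders (no sharing):", round(no_sharing.std(), 1))
    print("std of distributor orders (sharing):   ", round(sharing.std(), 1))
    ```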

  13. Models used to assess the performance of photovoltaic systems.

    Energy Technology Data Exchange (ETDEWEB)

    Stein, Joshua S.; Klise, Geoffrey T.

    2009-12-01

    This report documents the various photovoltaic (PV) performance models and software developed and utilized by researchers at Sandia National Laboratories (SNL) in support of the Photovoltaics and Grid Integration Department. In addition to PV performance models, hybrid system and battery storage models are discussed. A hybrid system using other distributed sources and energy storage can help reduce the variability inherent in PV generation, and due to the complexity of combining multiple generation sources and system loads, these models are invaluable for system design and optimization. Energy storage plays an important role in reducing PV intermittency and battery storage models are used to understand the best configurations and technologies to store PV generated electricity. Other researchers' models used by SNL are discussed including some widely known models that incorporate algorithms developed at SNL. There are other models included in the discussion that are not used by or were not adopted from SNL research but may provide some benefit to researchers working on PV array performance, hybrid system models and energy storage. The paper is organized into three sections to describe the different software models as applied to photovoltaic performance, hybrid systems, and battery storage. For each model, there is a description which includes where to find the model, whether it is currently maintained and any references that may be available. Modeling improvements underway at SNL include quantifying the uncertainty of individual system components, the overall uncertainty in modeled vs. measured results and modeling large PV systems. SNL is also conducting research into the overall reliability of PV systems.

  14. Aerial Measuring System Sensor Modeling

    International Nuclear Information System (INIS)

    Detwiler, R.S.

    2002-01-01

    This project deals with modeling the Aerial Measuring System (AMS) fixed-wing and rotary-wing sensor systems, which are critical U.S. Department of Energy's National Nuclear Security Administration (NNSA) Consequence Management assets. The fixed-wing system is critical in detecting lost or stolen radiography or medical sources, or mixed fission products as from a commercial power plant release, at high flying altitudes. The helicopter is typically used at lower altitudes to determine ground contamination, such as in measuring americium from a plutonium ground dispersal during a cleanup. Since the sensitivity of these instruments as a function of altitude is crucial in estimating detection limits of various ground contaminations and necessary count times, a characterization of their sensitivity as a function of altitude and energy is needed. Experimental data at altitude as well as laboratory benchmarks are important to ensure that the strong effects of air attenuation are modeled correctly. The modeling presented here is the first attempt at such a characterization of the equipment for flying altitudes. The sodium iodide (NaI) sensors utilized with these systems were characterized using the Monte Carlo N-Particle code (MCNP) developed at Los Alamos National Laboratory. For the fixed-wing system, calculations modeled the spectral response for the 3-element NaI detector pod and High-Purity Germanium (HPGe) detector, in the relevant energy range of 50 keV to 3 MeV. NaI detector responses were simulated for both point and distributed surface sources as a function of gamma energy and flying altitude. For point sources, photopeak efficiencies were calculated for a zero radial distance and an offset equal to the altitude. For distributed sources approximating an infinite plane, gross count efficiencies were calculated and normalized to a uniform surface deposition of 1 microCi/m2. The helicopter calculations modeled the transport of americium-241 (241Am) as this is

  15. System dynamics and control with bond graph modeling

    CERN Document Server

    Kypuros, Javier

    2013-01-01

    Part I: Dynamic System Modeling. Introduction to System Dynamics: Introduction; System Decomposition and Model Complexity; Mathematical Modeling of Dynamic Systems; Analysis and Design of Dynamic Systems; Control of Dynamic Systems; Diagrams of Dynamic Systems; A Graph-Centered Approach to Modeling; Summary; Practice; Exercises. Basic Bond Graph Elements: Introduction; Power and Energy Variables; Basic 1-Port Elements; Basic 2-Port Elements; Junction Elements; Simple Bond Graph Examples; Summary; Practice; Exercises. Bond Graph Synthesis and Equation Derivation: Introduction; General Guidelines; Mechanical Translation; Mechanical Rotation; Electrical Circuits; Hydraulic Circuits; Mixed Systems; State Equation Derivation; State-Space Representations; Algebraic Loops and Derivative Causality; Summary; Practice; Exercises. Impedance Bond Graphs: Introduction; Laplace Transform of the State-Space Equation; Basic 1-Port Impedances; Impedance Bond Graph Synthesis; Junctions, Transformers, and Gyrators; Effort and Flow Dividers; Sign Changes; Transfer Function Derivation; Alternative Derivation of Transf...

  16. Time evolution of tokamak states with flow

    International Nuclear Information System (INIS)

    Kerner, W.; Weitzner, H.

    1985-12-01

    The general dissipative Braginskii single-fluid model is applied to simulate tokamak transport. An expansion with respect to ε = (ω_i τ_i)^(-1), the factor by which perpendicular and parallel transport coefficients differ, yields a numerically tractable scheme. The resulting 1-1/2 D procedure requires computation of 2D toroidal equilibria with flow together with the solution of a system of ordinary 1D flux-averaged equations for the time evolution of the profiles. 13 refs

  17. Experimental Modeling of Dynamic Systems

    DEFF Research Database (Denmark)

    Knudsen, Morten Haack

    2006-01-01

    An engineering course, Simulation and Experimental Modeling, has been developed that is based on a method for direct estimation of physical parameters in dynamic systems. Compared with classical system identification, the method appears to be easier to understand, apply, and combine with physical...

  18. CTBT integrated verification system evaluation model supplement

    Energy Technology Data Exchange (ETDEWEB)

    EDENBURN,MICHAEL W.; BUNTING,MARCUS; PAYNE JR.,ARTHUR C.; TROST,LAWRENCE C.

    2000-03-02

    Sandia National Laboratories has developed a computer based model called IVSEM (Integrated Verification System Evaluation Model) to estimate the performance of a nuclear detonation monitoring system. The IVSEM project was initiated in June 1994, by Sandia's Monitoring Systems and Technology Center and has been funded by the U.S. Department of Energy's Office of Nonproliferation and National Security (DOE/NN). IVSEM is a simple, ''top-level,'' modeling tool which estimates the performance of a Comprehensive Nuclear Test Ban Treaty (CTBT) monitoring system and can help explore the impact of various sensor system concepts and technology advancements on CTBT monitoring. One of IVSEM's unique features is that it integrates results from the various CTBT sensor technologies (seismic, infrasound, radionuclide, and hydroacoustic) and allows the user to investigate synergy among the technologies. Specifically, IVSEM estimates the detection effectiveness (probability of detection), location accuracy, and identification capability of the integrated system and of each technology subsystem individually. The model attempts to accurately estimate the monitoring system's performance at medium interfaces (air-land, air-water) and for some evasive testing methods such as seismic decoupling. The original IVSEM report, CTBT Integrated Verification System Evaluation Model, SAND97-2518, described version 1.2 of IVSEM. This report describes the changes made to IVSEM version 1.2 and the addition of identification capability estimates that have been incorporated into IVSEM version 2.0.

  19. National Energy Outlook Modelling System

    Energy Technology Data Exchange (ETDEWEB)

    Volkers, C.M. [ECN Policy Studies, Petten (Netherlands)

    2013-12-15

    For over 20 years, the Energy research Centre of the Netherlands (ECN) has been developing the National Energy Outlook Modelling System (NEOMS) for Energy projections and policy evaluations. NEOMS enables 12 energy models of ECN to exchange data and produce consistent and detailed results.

  20. Formal Modeling and Analysis of Timed Systems

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand; Niebert, Peter

    This book constitutes the thoroughly refereed post-proceedings of the First International Workshop on Formal Modeling and Analysis of Timed Systems, FORMATS 2003, held in Marseille, France in September 2003. The 19 revised full papers presented together with an invited paper and the abstracts of two invited talks were carefully selected from 36 submissions during two rounds of reviewing and improvement. All current aspects of formal methods for modeling and analyzing timed systems are addressed; among the timed systems dealt with are timed automata, timed Petri nets, max-plus algebras, real-time systems, discrete time systems, timed languages, and real-time operating systems.

  1. Cockpit System Situational Awareness Modeling Tool

    Science.gov (United States)

    Keller, John; Lebiere, Christian; Shay, Rick; Latorella, Kara

    2004-01-01

    This project explored the possibility of predicting pilot situational awareness (SA) using human performance modeling techniques for the purpose of evaluating developing cockpit systems. The Improved Performance Research Integration Tool (IMPRINT) was combined with the Adaptive Control of Thought-Rational (ACT-R) cognitive modeling architecture to produce a tool that can model both the discrete tasks of pilots and the cognitive processes associated with SA. The techniques for using this tool to predict SA were demonstrated using the newly developed Aviation Weather Information (AWIN) system. By providing an SA prediction tool to cockpit system designers, cockpit concepts can be assessed early in the design process while providing a cost-effective complement to the traditional pilot-in-the-loop experiments and data collection techniques.

  2. Spatio-temporal modeling of nonlinear distributed parameter systems

    CERN Document Server

    Li, Han-Xiong

    2011-01-01

    The purpose of this volume is to provide a brief review of the previous work on model reduction and identification of distributed parameter systems (DPS), and develop new spatio-temporal models and their relevant identification approaches. In this book, a systematic overview and classification on the modeling of DPS is presented first, which includes model reduction, parameter estimation and system identification. Next, a class of block-oriented nonlinear systems in traditional lumped parameter systems (LPS) is extended to DPS, which results in the spatio-temporal Wiener and Hammerstein s

  3. An Analytically Tractable Model for Pricing Multiasset Options with Correlated Jump-Diffusion Equity Processes and a Two-Factor Stochastic Yield Curve

    Directory of Open Access Journals (Sweden)

    Tristan Guillaume

    2016-01-01

    Full Text Available This paper shows how to value multiasset options analytically in a modeling framework that combines both continuous and discontinuous variations in the underlying equity or foreign exchange processes and a stochastic, two-factor yield curve. All correlations are taken into account, between the factors driving the yield curve, between fixed income and equity as asset classes, and between the individual equity assets themselves. The valuation method is applied to three of the most popular two-asset options.
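
    The paper's pricing is analytical; as a rough numerical companion, the sketch below Monte Carlo-prices a two-asset worst-of call under Merton-style jump-diffusions with a constant short rate. The stochastic two-factor yield curve is not reproduced, only the diffusive parts of the two assets are correlated, and all parameter values are illustrative.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Two correlated jump-diffusion assets (lognormal jumps), constant rate r.
    S0 = np.array([100.0, 95.0])
    sigma = np.array([0.25, 0.30])
    rho = 0.5                                  # diffusion correlation
    lam = np.array([0.3, 0.2])                 # jump intensities
    mu_J, sig_J = np.array([-0.10, -0.05]), np.array([0.15, 0.20])
    r, T, K, n_paths = 0.02, 1.0, 100.0, 200_000

    # Drift compensation so that discounted prices are martingales.
    kappa = np.exp(mu_J + 0.5 * sig_J**2) - 1.0
    drift = (r - 0.5 * sigma**2 - lam * kappa) * T

    L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))
    Z = rng.standard_normal((n_paths, 2)) @ L.T
    N_jumps = rng.poisson(lam * T, size=(n_paths, 2))
    jump_sum = N_jumps * mu_J + np.sqrt(N_jumps) * sig_J * rng.standard_normal((n_paths, 2))

    ST = S0 * np.exp(drift + sigma * np.sqrt(T) * Z + jump_sum)

    payoff = np.maximum(ST.min(axis=1) - K, 0.0)      # worst-of call on two assets
    price = np.exp(-r * T) * payoff.mean()
    print(f"Monte Carlo worst-of call price ~ {price:.2f}")
    ```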

  4. An ecological process model of systems change.

    Science.gov (United States)

    Peirson, Leslea J; Boydell, Katherine M; Ferguson, H Bruce; Ferris, Lorraine E

    2011-06-01

    In June 2007 the American Journal of Community Psychology published a special issue focused on theories, methods and interventions for systems change which included calls from the editors and authors for theoretical advancement in this field. We propose a conceptual model of systems change that integrates familiar and fundamental community psychology principles (succession, interdependence, cycling of resources, adaptation) and accentuates a process orientation. To situate our framework we offer a definition of systems change and a brief review of the ecological perspective and principles. The Ecological Process Model of Systems Change is depicted, described and applied to a case example of policy driven systems level change in publicly funded social programs. We conclude by identifying salient implications for thinking and action which flow from the Model.

  5. Modelling solid solutions with cluster expansion, special quasirandom structures, and thermodynamic approaches

    Science.gov (United States)

    Saltas, V.; Horlait, D.; Sgourou, E. N.; Vallianatos, F.; Chroneos, A.

    2017-12-01

    Modelling solid solutions is fundamental to understanding the properties of numerous materials that are important for a range of applications in various fields, including nanoelectronics and energy materials such as fuel cells, nuclear materials, and batteries, because a systematic understanding across the composition range of a solid solution, and for a range of conditions, can be challenging to obtain from an experimental viewpoint. The main motivation of this review is to contribute to the discussion in the community of the applicability of methods that make the investigation of solid solutions computationally tractable. This is important as computational modelling is required to calculate numerous defect properties and to act synergistically with experiment to understand these materials. This review will examine two examples in detail: silicon germanium alloys and MAX phase solid solutions. Silicon germanium alloys are technologically important in nanoelectronic devices and are also relevant considering the recent advances in ternary and quaternary group IV and III-V semiconductor alloys. MAX phase solid solutions display a palette of ceramic and metallic properties, and it is anticipated that via their tuning they can have applications ranging from the nuclear to the aerospace industries, as well as being precursors for particular MXenes. In the final part, a brief summary assesses the limitations and possibilities of the methodologies discussed, and there is discussion of future directions and examples of solid solution systems that should prove fruitful to consider.

  6. Real-Time System for Water Modeling and Management

    Science.gov (United States)

    Lee, J.; Zhao, T.; David, C. H.; Minsker, B.

    2012-12-01

    Working closely with the Texas Commission on Environmental Quality (TCEQ) and the University of Texas at Austin (UT-Austin), we are developing a real-time system for water modeling and management using advanced cyberinfrastructure, data integration and geospatial visualization, and numerical modeling. The state of Texas suffered a severe drought in 2011 that cost the state $7.62 billion in agricultural losses (crops and livestock). Devastating situations such as this could potentially be avoided with better water modeling and management strategies that incorporate state of the art simulation and digital data integration. The goal of the project is to prototype a near-real-time decision support system for river modeling and management in Texas that can serve as a national and international model to promote more sustainable and resilient water systems. The system uses National Weather Service current and predicted precipitation data as input to the Noah-MP Land Surface model, which forecasts runoff, soil moisture, evapotranspiration, and water table levels given land surface features. These results are then used by a river model called RAPID, along with an error model currently under development at UT-Austin, to forecast stream flows in the rivers. Model forecasts are visualized as a Web application for TCEQ decision makers, who issue water diversion (withdrawal) permits and any needed drought restrictions; permit holders; and reservoir operation managers. Users will be able to adjust model parameters to predict the impacts of alternative curtailment scenarios or weather forecasts. A real-time optimization system under development will help TCEQ to identify optimal curtailment strategies to minimize impacts on permit holders and protect health and safety. To develop the system we have implemented RAPID as a remotely-executed modeling service using the Cyberintegrator workflow system with input data downloaded from the North American Land Data Assimilation System. The

  7. Implementation of a Sage-Based Stirling Model Into a System-Level Numerical Model of the Fission Power System Technology Demonstration Unit

    Science.gov (United States)

    Briggs, Maxwell H.

    2011-01-01

    The Fission Power System (FPS) project is developing a Technology Demonstration Unit (TDU) to verify the performance and functionality of a subscale version of the FPS reference concept in a relevant environment, and to verify component and system models. As hardware is developed for the TDU, component and system models must be refined to include the details of specific component designs. This paper describes the development of a Sage-based pseudo-steady-state Stirling convertor model and its implementation into a system-level model of the TDU.

  8. Reliability modelling and simulation of switched linear system ...

    African Journals Online (AJOL)

    Reliability modelling and simulation of switched linear system control using temporal databases. ... design of fault-tolerant real-time switching systems control and modelling embedded micro-schedulers for complex systems maintenance.

  9. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Directory of Open Access Journals (Sweden)

    C Brandon Ogbunugafor

    Full Text Available Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.

  10. OFFl Models: Novel Schema for Dynamical Modeling of Biological Systems.

    Science.gov (United States)

    Ogbunugafor, C Brandon; Robinson, Sean P

    2016-01-01

    Flow diagrams are a common tool used to help build and interpret models of dynamical systems, often in biological contexts such as consumer-resource models and similar compartmental models. Typically, their usage is intuitive and informal. Here, we present a formalized version of flow diagrams as a kind of weighted directed graph which follow a strict grammar, which translate into a system of ordinary differential equations (ODEs) by a single unambiguous rule, and which have an equivalent representation as a relational database. (We abbreviate this schema of "ODEs and formalized flow diagrams" as OFFL.) Drawing a diagram within this strict grammar encourages a mental discipline on the part of the modeler in which all dynamical processes of a system are thought of as interactions between dynamical species that draw parcels from one or more source species and deposit them into target species according to a set of transformation rules. From these rules, the net rate of change for each species can be derived. The modeling schema can therefore be understood as both an epistemic and practical heuristic for modeling, serving both as an organizational framework for the model building process and as a mechanism for deriving ODEs. All steps of the schema beyond the initial scientific (intuitive, creative) abstraction of natural observations into model variables are algorithmic and easily carried out by a computer, thus enabling the future development of a dedicated software implementation. Such tools would empower the modeler to consider significantly more complex models than practical limitations might have otherwise proscribed, since the modeling framework itself manages that complexity on the modeler's behalf. In this report, we describe the chief motivations for OFFL, carefully outline its implementation, and utilize a range of classic examples from ecology and epidemiology to showcase its features.
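
    A minimal sketch of the flow-diagram-to-ODE translation described in the two records above is given below. It is an assumed toy re-implementation, not the authors' schema or software: each flow is an edge that draws a parcel from a source species and deposits it into a target species at a state-dependent rate, and the ODE right-hand side is assembled mechanically from the edge list, here for a standard SIR model.

      import numpy as np
      from scipy.integrate import solve_ivp

      species = ["S", "I", "R"]
      idx = {s: i for i, s in enumerate(species)}

      # (source, target, rate function) -- a standard SIR model as the flow diagram.
      beta, gamma = 0.3, 0.1
      flows = [
          ("S", "I", lambda y: beta * y[idx["S"]] * y[idx["I"]]),   # infection
          ("I", "R", lambda y: gamma * y[idx["I"]]),                # recovery
      ]

      def rhs(t, y):
          """Assemble the ODE right-hand side mechanically from the flow list."""
          dy = np.zeros_like(y)
          for src, dst, rate in flows:
              r = rate(y)
              dy[idx[src]] -= r          # parcel leaves the source species
              dy[idx[dst]] += r          # and is deposited into the target species
          return dy

      sol = solve_ivp(rhs, (0, 160), y0=[0.99, 0.01, 0.0], max_step=1.0)
      print("final state:", dict(zip(species, sol.y[:, -1].round(3))))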

  11. Vortex Tube Modeling Using the System Identification Method

    Energy Technology Data Exchange (ETDEWEB)

    Han, Jaeyoung; Jeong, Jiwoong; Yu, Sangseok [Chungnam Nat’l Univ., Daejeon (Korea, Republic of); Im, Seokyeon [Tongmyong Univ., Busan (Korea, Republic of)

    2017-05-15

    In this study, a vortex tube system model is developed to predict the temperatures of the hot and cold sides. The vortex tube model is built using the system identification method; the model structure utilized in this work is of the ARX type (Auto-Regressive with eXogenous inputs). The derived polynomial model is validated against experimental data to verify the overall model accuracy, and it is shown that the derived model passes the stability test. Both static and dynamic numerical experiments, in which the angle of the low-temperature-side throttle valve is changed, confirm that the derived model closely mimics the physical behavior of the vortex tube, clearly showing temperature separation. These results imply that system-identification-based modeling can be a promising approach for predicting complex physical systems, including the vortex tube.
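
    The essence of the ARX identification step can be sketched with synthetic data; the paper's measured valve-angle and temperature data are not available here, so the coefficients, signal ranges, and noise level below are assumptions.

      import numpy as np

      # Illustrative ARX identification sketch: fit
      #   y[k] = a1*y[k-1] + a2*y[k-2] + b1*u[k-1]
      # by least squares, with valve angle u as input and cold-side temperature y as output.
      rng = np.random.default_rng(2)
      N = 500
      u = rng.uniform(0, 90, N)                      # throttle-valve angle [deg]
      y = np.zeros(N)
      for k in range(2, N):                          # synthetic "true" system + noise
          y[k] = 0.6 * y[k-1] + 0.2 * y[k-2] - 0.05 * u[k-1] + 0.1 * rng.standard_normal()

      # Build the regression matrix from lagged outputs and inputs.
      Phi = np.column_stack([y[1:-1], y[:-2], u[1:-1]])
      theta, *_ = np.linalg.lstsq(Phi, y[2:], rcond=None)
      print("estimated [a1, a2, b1]:", theta.round(3))   # should approach [0.6, 0.2, -0.05]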

  12. Externalizing Behaviour for Analysing System Models

    DEFF Research Database (Denmark)

    Ivanova, Marieta Georgieva; Probst, Christian W.; Hansen, René Rydhof

    2013-01-01

    System models have recently been introduced to model organisations and evaluate their vulnerability to threats, especially insider threats. These models are very suitable for the latter, since insiders can be assumed to have more knowledge about the attacked organisation than outside attackers. Therefore, many attacks are considerably easier for insiders to perform than for outsiders. However, current models do not support explicit specification of different behaviours. Instead, behaviour is deeply embedded in the analyses supported by the models, meaning that it is a complex, if not impossible, task to change behaviours. Especially when considering social engineering or the human factor in general, the ability to use different kinds of behaviours is essential. In this work we present an approach to make the behaviour a separate component in system models, and explore how to integrate...

  13. Integrated Main Propulsion System Performance Reconstruction Process/Models

    Science.gov (United States)

    Lopez, Eduardo; Elliott, Katie; Snell, Steven; Evans, Michael

    2013-01-01

    The Integrated Main Propulsion System (MPS) Performance Reconstruction process provides the MPS post-flight data files needed for postflight reporting to the project integration management and key customers to verify flight performance. This process/model was used as the baseline for the currently ongoing Space Launch System (SLS) work. The process utilizes several methodologies, including multiple software programs, to model integrated propulsion system performance through space shuttle ascent. It is used to evaluate integrated propulsion systems, including propellant tanks, feed systems, rocket engine, and pressurization systems performance throughout ascent based on flight pressure and temperature data. The latest revision incorporates new methods based on main engine power balance model updates to model higher mixture ratio operation at lower engine power levels.

  14. Progress in integrated energy-economy-environment model system development

    International Nuclear Information System (INIS)

    Yasukawa, Shigeru; Mankin, Shuichi; Sato, Osamu; Tadokoro, Yoshihiro; Nakano, Yasuyuki; Nagano, Takao

    1987-11-01

    The Integrated Energy-Economy-Environment Model System has been developed to provide analytical tools for system analysis and technology assessment in the field of nuclear research and development. The model system consists of four model groups. The first model block contains 5 models and serves to analyze and generate long-term scenarios of economy-energy-environment evolution. The second model block contains 2 models and serves to analyze structural transition phenomena in energy-economy-environment interactions. The third model block contains 2 models and handles the power reactor installation strategy problem and long-term fuel cycle analysis. The fourth model block contains 5 models and codes and treats cost-benefit-risk analysis and assessments. This report mainly describes the progress of the model system and outlines its applications in the years since the first report on its research and development (JAERI-M 84-139). (author)

  15. System-level Modeling of Wireless Integrated Sensor Networks

    DEFF Research Database (Denmark)

    Virk, Kashif M.; Hansen, Knud; Madsen, Jan

    2005-01-01

    Wireless integrated sensor networks have emerged as a promising infrastructure for a new generation of monitoring and tracking applications. In order to efficiently utilize the extremely limited resources of wireless sensor nodes, accurate modeling of the key aspects of wireless sensor networks is necessary so that system-level design decisions can be made about the hardware and the software (applications and real-time operating system) architecture of sensor nodes. In this paper, we present a SystemC-based abstract modeling framework that enables system-level modeling of sensor network behavior by modeling the applications, real-time operating system, sensors, processor, and radio transceiver at the sensor node level and environmental phenomena, including radio signal propagation, at the sensor network level. We demonstrate the potential of our modeling framework by simulating and analyzing a small...

  16. Translation of Land Surface Model Accuracy and Uncertainty into Coupled Land-Atmosphere Prediction

    Science.gov (United States)

    Santanello, Joseph A.; Kumar, Sujay; Peters-Lidard, Christa D.; Harrison, Kenneth W.; Zhou, Shuija

    2012-01-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface heat and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  17. Translation of Land Surface Model Accuracy and Uncertainty into Coupled Land-Atmosphere Prediction

    Science.gov (United States)

    Santanello, J. A.; Kumar, S.; Peters-Lidard, C. D.; Harrison, K. W.; Zhou, S.

    2012-12-01

    Land-atmosphere (L-A) interactions play a critical role in determining the diurnal evolution of both planetary boundary layer (PBL) and land surface heat and moisture budgets, as well as controlling feedbacks with clouds and precipitation that lead to the persistence of dry and wet regimes. Recent efforts to quantify the strength of L-A coupling in prediction models have produced diagnostics that integrate across both the land and PBL components of the system. In this study, we examine the impact of improved specification of land surface states, anomalies, and fluxes on coupled WRF forecasts during the summers of extreme dry (2006) and wet (2007) land surface conditions in the U.S. Southern Great Plains. The improved land initialization and surface flux parameterizations are obtained through the use of a new optimization and uncertainty estimation module in NASA's Land Information System (LIS-OPT/UE), whereby parameter sets are calibrated in the Noah land surface model and classified according to a land cover and soil type mapping of the observation sites to the full model domain. The impact of calibrated parameters on the a) spinup of the land surface used as initial conditions, and b) heat and moisture states and fluxes of the coupled WRF simulations are then assessed in terms of ambient weather and land-atmosphere coupling along with measures of uncertainty propagation into the forecasts. In addition, the sensitivity of this approach to the period of calibration (dry, wet, average) is investigated. Finally, tradeoffs of computational tractability and scientific validity, and the potential for combining this approach with satellite remote sensing data are also discussed.

  18. A Structural Model Decomposition Framework for Systems Health Management

    Science.gov (United States)

    Roychoudhury, Indranil; Daigle, Matthew J.; Bregon, Anibal; Pulido, Belamino

    2013-01-01

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.

  19. A structural model decomposition framework for systems health management

    Science.gov (United States)

    Roychoudhury, I.; Daigle, M.; Bregon, A.; Pulido, B.

    Systems health management (SHM) is an important set of technologies aimed at increasing system safety and reliability by detecting, isolating, and identifying faults; and predicting when the system reaches end of life (EOL), so that appropriate fault mitigation and recovery actions can be taken. Model-based SHM approaches typically make use of global, monolithic system models for online analysis, which results in a loss of scalability and efficiency for large-scale systems. Improvement in scalability and efficiency can be achieved by decomposing the system model into smaller local submodels and operating on these submodels instead. In this paper, the global system model is analyzed offline and structurally decomposed into local submodels. We define a common model decomposition framework for extracting submodels from the global model. This framework is then used to develop algorithms for solving model decomposition problems for the design of three separate SHM technologies, namely, estimation (which is useful for fault detection and identification), fault isolation, and EOL prediction. We solve these model decomposition problems using a three-tank system as a case study.
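
    A hedged sketch of the structural-decomposition idea, simplified well beyond the authors' algorithms, is shown below: given which variable each equation computes and which variables it uses, the minimal submodel needed to compute a variable of interest from a set of measured variables is extracted by traversing the dependency structure. The three-tank variable and equation names are assumptions made purely for illustration.

      from collections import deque

      # equation -> (computed variable, variables used); an assumed three-tank structure.
      structure = {
          "e1": ("h1", {"qin", "q12"}),      # tank 1 level balance
          "e2": ("h2", {"q12", "q23"}),      # tank 2 level balance
          "e3": ("h3", {"q23", "qout"}),     # tank 3 level balance
          "e4": ("q12", {"h1", "h2"}),       # flow between tanks 1 and 2
          "e5": ("q23", {"h2", "h3"}),       # flow between tanks 2 and 3
          "e6": ("qout", {"h3"}),            # outflow of tank 3
      }

      def extract_submodel(target, measured, inputs=("qin",)):
          """Return the equations needed to compute `target` given measured variables."""
          known = set(measured) | set(inputs)
          producers = {var: eq for eq, (var, _) in structure.items()}
          needed, frontier = set(), deque([target])
          while frontier:
              var = frontier.popleft()
              if var in known or var not in producers:
                  continue                          # measured/input vars need no equation
              eq = producers[var]
              if eq not in needed:
                  needed.add(eq)
                  frontier.extend(structure[eq][1]) # recurse into the equation's inputs
          return sorted(needed)

      print(extract_submodel("h1", measured={"h2"}))   # -> ['e1', 'e4'] (local submodel)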

  20. Socio-Environmental Resilience and Complex Urban Systems Modeling

    Science.gov (United States)

    Deal, Brian; Petri, Aaron; Pan, Haozhi; Goldenberg, Romain; Kalantari, Zahra; Cvetkovic, Vladimir

    2017-04-01

    The increasing pressure of climate change has inspired two normative agendas: socio-technical transitions and socio-ecological resilience, both sharing a complex-systems epistemology (Gillard et al. 2016). Socio-technical solutions include a continuous, massive data gathering exercise now underway in urban places under the guise of developing a 'smart'(er) city. This has led to the creation of data-rich environments where large data sets have become central to monitoring and forming a response to anomalies. Some have argued that these kinds of data sets can help in planning for resilient cities (Norberg and Cumming 2008; Batty 2013). In this paper, we focus on a more nuanced, ecologically based, socio-environmental perspective of resilience planning that is often given less consideration. Here, we broadly discuss (and model) the tightly linked, mutually influenced, social and biophysical subsystems that are critical for understanding urban resilience. We argue for the need to incorporate these subsystem linkages into the resilience planning lexicon through the integration of systems models and planning support systems. We make our case by first providing a context for urban resilience from a socio-ecological and planning perspective. We highlight the data needs for this type of resilient planning and compare it to currently collected data streams in various smart city efforts. This helps to define an approach for operationalizing socio-environmental resilience planning using robust systems models and planning support systems. For this, we draw from our experiences in coupling a spatio-temporal land use model (the Landuse Evolution and impact Assessment Model (LEAM)) with water quality and quantity models in Stockholm, Sweden. We describe the coupling of these systems models using a robust Planning Support System (PSS) structural framework. We use the coupled model simulations and PSS to analyze the connection between urban land use transformation (social) and water

  1. Data-Driven Photovoltaic System Modeling Based on Nonlinear System Identification

    Directory of Open Access Journals (Sweden)

    Ayedh Alqahtani

    2016-01-01

    Full Text Available Solar photovoltaic (PV) energy sources are rapidly growing in potential and popularity compared to conventional fossil fuel sources. As the merging of PV systems with existing power sources increases, reliable and accurate PV system identification is essential, to address the highly nonlinear change in PV system dynamic and operational characteristics. This paper deals with the identification of a PV system characteristic with a switch-mode power converter. Measured input-output data are collected from a real PV panel to be used for the identification. The data are divided into estimation and validation sets. The identification methodology is discussed. A Hammerstein-Wiener model is identified and selected due to its suitability to best capture the PV system dynamics, and results and discussion are provided to demonstrate the accuracy of the selected model structure.
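
    To illustrate the block-oriented idea behind the Hammerstein-Wiener structure, the sketch below fits a simplified Hammerstein model (a polynomial input nonlinearity followed by first-order linear dynamics, with no output nonlinearity) to synthetic data by linear least squares. The data, model orders, and coefficients are assumptions, not the paper's measured PV data or identified model.

      import numpy as np

      rng = np.random.default_rng(3)
      N = 1000
      u = rng.uniform(0.0, 1.0, N)                    # normalized irradiance input
      y = np.zeros(N)
      for k in range(1, N):                           # synthetic "true" PV-like response
          v = 2.0 * u[k-1] - 1.2 * u[k-1]**2          # static input nonlinearity
          y[k] = 0.8 * y[k-1] + v + 0.02 * rng.standard_normal()

      # Regressors: past output plus polynomial terms of the past input.
      Phi = np.column_stack([y[:-1], u[:-1], u[:-1]**2])
      theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
      print("estimated [a, b1, b2]:", theta.round(3))  # should approach [0.8, 2.0, -1.2]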

  2. Model documentation report: Industrial sector demand module of the National Energy Modeling System

    International Nuclear Information System (INIS)

    1997-01-01

    This report documents the objectives, analytical approach, and development of the National Energy Modeling System (NEMS) Industrial Demand Model. The report catalogues and describes model assumptions, computational methodology, parameter estimation techniques, and model source code. This document serves three purposes. First, it is a reference document providing a detailed description of the NEMS Industrial Model for model analysts, users, and the public. Second, this report meets the legal requirement of the Energy Information Administration (EIA) to provide adequate documentation in support of its models. Third, it facilitates continuity in model development by providing documentation from which energy analysts can undertake model enhancements, data updates, and parameter refinements as future projects. The NEMS Industrial Demand Model is a dynamic accounting model, bringing together the disparate industries and uses of energy in those industries, and putting them together in an understandable and cohesive framework. The Industrial Model generates mid-term (up to the year 2015) forecasts of industrial sector energy demand as a component of the NEMS integrated forecasting system. From the NEMS system, the Industrial Model receives fuel prices, employment data, and the value of industrial output. Based on the values of these variables, the Industrial Model passes back to the NEMS system estimates of consumption by fuel types

  3. Nonlinear Model Predictive Control for Cooperative Control and Estimation

    Science.gov (United States)

    Ru, Pengkai

    Recent advances in computational power have made it possible to perform expensive online computations for control systems. It is becoming realistic to run computationally intensive optimization schemes online on systems that are not intrinsically stable and/or have very small time constants. As one of the most important optimization-based control approaches, model predictive control (MPC) has attracted a lot of interest from the research community due to its natural ability to incorporate constraints into its control formulation. Linear MPC has been well researched, and its stability can be guaranteed in the majority of its application scenarios. However, one issue that remains with linear MPC is that it completely ignores the system's inherent nonlinearities, thus giving a sub-optimal solution. On the other hand, if achievable, nonlinear MPC would naturally yield a globally optimal solution and take into account all the innate nonlinear characteristics. Since an exact solution to a nonlinear MPC problem remains extremely computationally intensive, if not impossible, one might wonder if there is a middle ground between the two. This dissertation strikes a balance by employing a state representation technique, namely the state-dependent coefficient (SDC) representation. This technique delivers improved optimality compared to linear MPC while still keeping the problem tractable; in fact, the computational power required is bounded by a constant factor of that of the completely linearized MPC. The purpose of this research is to provide a theoretical framework for the design of a specific kind of nonlinear MPC controller and its extension into a general cooperative scheme. The controller is designed and implemented on quadcopter systems.
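
    A hedged sketch of the frozen-SDC idea is given below for a damped pendulum: the dynamics are factored as x_dot = A(x)x + Bu, the coefficient matrix is frozen at the current state, and an unconstrained finite-horizon LQ problem over the frozen model supplies the receding-horizon input. This is a simplified illustration of the general approach, not the dissertation's formulation, and all numerical values are assumptions.

      import numpy as np

      g_over_l, damping, dt, horizon = 9.81, 0.3, 0.02, 40
      Q, R, B = np.diag([10.0, 1.0]), np.array([[0.1]]), np.array([[0.0], [1.0]])

      def sdc_matrix(x):
          """State-dependent coefficient factorization A(x) with A(x) x = f(x)."""
          theta = x[0]
          a21 = -g_over_l * np.sinc(theta / np.pi)   # sin(theta)/theta, safe at 0
          return np.array([[0.0, 1.0], [a21, -damping]])

      def finite_horizon_gain(Ad, Bd):
          """Backward Riccati recursion for the first feedback gain of the LQ problem."""
          P = Q.copy()
          for _ in range(horizon):
              K = np.linalg.solve(R + Bd.T @ P @ Bd, Bd.T @ P @ Ad)
              P = Q + Ad.T @ P @ (Ad - Bd @ K)
          return K

      x = np.array([2.5, 0.0])                          # initial angle 2.5 rad, at rest
      for _ in range(500):
          Ad = np.eye(2) + dt * sdc_matrix(x)           # Euler-discretized frozen model
          Bd = dt * B
          u = -finite_horizon_gain(Ad, Bd) @ x          # receding horizon: apply first input
          x = x + dt * (sdc_matrix(x) @ x + (B @ u).ravel())
      print("final state (should be near the origin):", x.round(4))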

  4. Model-Based Systems Engineering in Concurrent Engineering Centers

    Science.gov (United States)

    Iwata, Curtis; Infeld, Samantha; Bracken, Jennifer Medlin; McGuire, Melissa; McQuirk, Christina; Kisdi, Aron; Murphy, Jonathan; Cole, Bjorn; Zarifian, Pezhman

    2015-01-01

    Concurrent Engineering Centers (CECs) are specialized facilities with a goal of generating and maturing engineering designs by enabling rapid design iterations. This is accomplished by co-locating a team of experts (either physically or virtually) in a room with a narrow design goal and a limited timeline of a week or less. The systems engineer uses a model of the system to capture the relevant interfaces and manage the overall architecture. A single model that integrates other design information and modeling allows the entire team to visualize the concurrent activity and identify conflicts more efficiently, potentially resulting in a systems model that will continue to be used throughout the project lifecycle. Performing systems engineering using such a system model is the definition of model-based systems engineering (MBSE); therefore, CECs evolving their approach to incorporate advances in MBSE are more successful in reducing time and cost needed to meet study goals. This paper surveys space mission CECs that are in the middle of this evolution, and the authors share their experiences in order to promote discussion within the community.

  5. Modelling of cardiovascular system: development of a hybrid (numerical-physical) model.

    Science.gov (United States)

    Ferrari, G; Kozarski, M; De Lazzari, C; Górczyńska, K; Mimmo, R; Guaragno, M; Tosti, G; Darowski, M

    2003-12-01

    Physical models of the circulation are used for research, training and testing of implantable active and passive circulatory prosthetic and assistance devices. However, in comparison with numerical models, they are rigid and expensive. To overcome these limitations, we have developed a model of the circulation based on merging a lumped-parameter physical model into a numerical one, thereby producing a hybrid. The physical model is limited to the barest essentials; in this application, developed to test the principle, it is a windkessel representing the systemic arterial tree. The lumped-parameter numerical model was developed in the LabVIEW environment and represents the pulmonary and systemic circulation (except the systemic arterial tree). Based on the equivalence between hydraulic and electrical circuits, this prototype was developed by connecting the numerical model to an electrical circuit--the physical model. This specific solution is of value mainly for education, but it permits the development of software and the verification of preliminary results without using cumbersome hydraulic circuits. The interfaces between the numerical and electrical circuits are set up by a voltage-controlled current generator and a voltage-controlled voltage generator. The behavior of the model is analyzed on the basis of ventricular pressure-volume loops and the time course of arterial and ventricular pressures and flows under different circulatory conditions. The model can represent hemodynamic relationships in different ventricular and circulatory conditions.
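
    The windkessel element realized physically in the hybrid model can be illustrated numerically. The sketch below simulates a two-element windkessel driven by an assumed half-sine aortic inflow; it is only a stand-in for the authors' hardware and LabVIEW setup, and the parameter values are illustrative assumptions.

      import numpy as np
      from scipy.integrate import solve_ivp

      R_p, C = 1.0, 1.5          # peripheral resistance [mmHg*s/ml], compliance [ml/mmHg]
      T, T_sys = 0.8, 0.3        # cardiac period and systolic duration [s]

      def aortic_flow(t):
          """Half-sine inflow during systole, zero in diastole (assumed waveform)."""
          phase = t % T
          return 400.0 * np.sin(np.pi * phase / T_sys) if phase < T_sys else 0.0

      def dpdt(t, p):
          # C dP/dt = Q_in(t) - P/R_p  (two-element windkessel)
          return [(aortic_flow(t) - p[0] / R_p) / C]

      sol = solve_ivp(dpdt, (0, 10 * T), y0=[80.0], max_step=0.005)
      p = sol.y[0][sol.t > 8 * T]                      # discard the initial transient
      print(f"arterial pressure range: {p.min():.1f} - {p.max():.1f} mmHg")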

  6. Novel simplified hourly energy flow models for photovoltaic power systems

    International Nuclear Information System (INIS)

    Khatib, Tamer; Elmenreich, Wilfried

    2014-01-01

    Highlights: • We developed an energy flow model for standalone PV system using MATLAB line code. • We developed an energy flow model for hybrid PV/wind system using MATLAB line code. • We developed an energy flow model for hybrid PV/diesel system using MATLAB line code. - Abstract: This paper presents simplified energy flow models for photovoltaic (PV) power systems using MATLAB. Three types of PV power system are taken into consideration namely standalone PV systems, hybrid PV/wind systems and hybrid PV/diesel systems. The logic of the energy flow for each PV power system is discussed first and then the MATLAB line codes for these models are provided and explained. The results prove the accuracy of the proposed models. Such models help modeling and sizing PV systems
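
    The hourly energy-flow logic for the standalone PV case can be sketched as follows. This is an assumed Python re-implementation in the spirit of the models, not the authors' MATLAB code, and the system sizes and profiles are illustrative.

      import numpy as np

      hours = np.arange(24)
      pv = np.clip(3.0 * np.sin((hours - 6) / 12 * np.pi), 0, None)   # PV yield, kWh per hour
      load = np.full(24, 1.2)                                          # constant load, kWh

      soc, soc_min, soc_max, eff = 5.0, 1.0, 10.0, 0.9                 # battery state, kWh
      unmet, dumped = 0.0, 0.0
      for p, l in zip(pv, load):
          net = p - l
          if net >= 0:                                  # surplus charges the battery
              charge = min(net * eff, soc_max - soc)
              soc += charge
              dumped += net - charge / eff              # excess beyond battery capacity
          else:                                         # deficit is drawn from the battery
              draw = min(-net, (soc - soc_min) * eff)
              soc -= draw / eff
              unmet += -net - draw                      # load not served
      print(f"end-of-day SOC: {soc:.2f} kWh, unmet load: {unmet:.2f} kWh, dumped: {dumped:.2f} kWh")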

  7. Structural equation modeling and natural systems

    Science.gov (United States)

    Grace, James B.

    2006-01-01

    This book, first published in 2006, presents an introduction to the methodology of structural equation modeling, illustrates its use, and goes on to argue that it has revolutionary implications for the study of natural systems. A major theme of this book is that we have, up to this point, attempted to study systems primarily using methods (such as the univariate model) that were designed only for considering individual processes. Understanding systems requires the capacity to examine simultaneous influences and responses. Structural equation modeling (SEM) has such capabilities. It also possesses many other traits that add strength to its utility as a means of making scientific progress. In light of the capabilities of SEM, it can be argued that much of ecological theory is currently locked in an immature state that impairs its relevance. It is further argued that the principles of SEM are capable of leading to the development and evaluation of multivariate theories of the sort vitally needed for the conservation of natural systems.

  8. Stochastic Modelling of Hydrologic Systems

    DEFF Research Database (Denmark)

    Jonsdottir, Harpa

    2007-01-01

    In this PhD project several stochastic modelling methods are studied and applied to various subjects in hydrology. The research was prepared at Informatics and Mathematical Modelling at the Technical University of Denmark. The thesis is divided into two parts. The first part contains an introduction and an overview of the papers published. Then an introduction to basic concepts in hydrology along with a description of hydrological data is given. Finally an introduction to stochastic modelling is given. The second part contains the research papers. In the research papers the stochastic methods are described, as at the time of publication these methods represented new contributions to hydrology. The second part also contains additional description of the software used and a brief introduction to stiff systems. The system in one of the papers is stiff.

  9. Systematic modelling and simulation of refrigeration systems

    DEFF Research Database (Denmark)

    Rasmussen, Bjarne D.; Jakobsen, Arne

    1998-01-01

    The task of developing a simulation model of a refrigeration system can be very difficult and time consuming. In order for this process to be effective, a systematic method for developing the system model is required. This method should aim at guiding the developer to clarify the purpose of the simulation, to select appropriate component models and to set up the equations in a well-arranged way. In this paper the outline of such a method is proposed and examples showing the use of this method for simulation of refrigeration systems are given.

  10. Time domain series system definition and gear set reliability modeling

    International Nuclear Information System (INIS)

    Xie, Liyang; Wu, Ningxiang; Qian, Wenxue

    2016-01-01

    Time-dependent multi-configuration is a typical feature for mechanical systems such as gear trains and chain drives. As a series system, a gear train is distinct from a traditional series system, such as a chain, in load transmission path, system-component relationship, system functioning manner, as well as time-dependent system configuration. Firstly, the present paper defines time-domain series system to which the traditional series system reliability model is not adequate. Then, system specific reliability modeling technique is proposed for gear sets, including component (tooth) and subsystem (tooth-pair) load history description, material priori/posterior strength expression, time-dependent and system specific load-strength interference analysis, as well as statistically dependent failure events treatment. Consequently, several system reliability models are developed for gear sets with different tooth numbers in the scenario of tooth root material ultimate tensile strength failure. The application of the models is discussed in the last part, and the differences between the system specific reliability model and the traditional series system reliability model are illustrated by virtue of several numerical examples. - Highlights: • A new type of series system, i.e. time-domain multi-configuration series system is defined, that is of great significance to reliability modeling. • Multi-level statistical analysis based reliability modeling method is presented for gear transmission system. • Several system specific reliability models are established for gear set reliability estimation. • The differences between the traditional series system reliability model and the new model are illustrated.
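
    For context, the classical load-strength interference calculation that the paper generalizes can be sketched as follows, together with the traditional independence-based series bound that the system-specific models are designed to improve upon. The distributions and parameter values below are assumptions, not the paper's data.

      from math import erf, sqrt

      def interference_reliability(mu_s, sd_s, mu_l, sd_l):
          """P(strength > load) for independent normal strength and load."""
          z = (mu_s - mu_l) / sqrt(sd_s**2 + sd_l**2)
          return 0.5 * (1.0 + erf(z / sqrt(2.0)))

      r_pair = interference_reliability(mu_s=600.0, sd_s=40.0, mu_l=450.0, sd_l=50.0)

      # Traditional (independence-based) series model: every one of n tooth-pair
      # engagements per mission must survive. The paper's point is that statistical
      # dependence between engagements makes this bound pessimistic.
      n_engagements = 20
      r_series_independent = r_pair ** n_engagements
      print(f"single engagement: {r_pair:.5f}, independent series bound: {r_series_independent:.5f}")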

  11. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    Energy Technology Data Exchange (ETDEWEB)

    Shumilov, V. N., E-mail: vnshumilov@rambler.ru; Syryamkin, V. I., E-mail: maximus70sir@gmail.com; Syryamkin, M. V., E-mail: maximus70sir@gmail.com [National Research Tomsk State University, 634050, Tomsk, Lenin Avenue, 36 (Russian Federation)

    2015-11-17

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of

  12. Modelling of pathologies of the nervous system by the example of computational and electronic models of elementary nervous systems

    International Nuclear Information System (INIS)

    Shumilov, V. N.; Syryamkin, V. I.; Syryamkin, M. V.

    2015-01-01

    The paper puts forward principles of action of devices operating similarly to the nervous system and the brain of biological systems. We propose an alternative method of studying diseases of the nervous system, which may significantly influence prevention, medical treatment, or at least retardation of development of these diseases. This alternative is to use computational and electronic models of the nervous system. Within this approach, we represent the brain in the form of a huge electrical circuit composed of active units, namely, neuron-like units and connections between them. As a result, we created computational and electronic models of elementary nervous systems, which are based on the principles of functioning of biological nervous systems that we have put forward. Our models demonstrate reactions to external stimuli and their change similarly to the behavior of simplest biological organisms. The models possess the ability of self-training and retraining in real time without human intervention and switching operation/training modes. In our models, training and memorization take place constantly under the influence of stimuli on the organism. Training is without any interruption and switching operation modes. Training and formation of new reflexes occur by means of formation of new connections between excited neurons, between which formation of connections is physically possible. Connections are formed without external influence. They are formed under the influence of local causes. Connections are formed between outputs and inputs of two neurons, when the difference between output and input potentials of excited neurons exceeds a value sufficient to form a new connection. On these grounds, we suggest that the proposed principles truly reflect mechanisms of functioning of biological nervous systems and the brain. In order to confirm the correspondence of the proposed principles to biological nature, we carry out experiments for the study of processes of

  13. Aerodynamic and Mechanical System Modelling

    DEFF Research Database (Denmark)

    Jørgensen, Martin Felix

    This thesis deals with mechanical multibody-systems applied to the drivetrain of a 500 kW wind turbine. Particular focus has been on gearbox modelling of wind turbines. The main part of the present project involved programming multibody systems to investigate the connection between forces, moments...

  14. Information system success model for customer relationship management system in health promotion centers.

    Science.gov (United States)

    Choi, Wona; Rho, Mi Jung; Park, Jiyun; Kim, Kwang-Jum; Kwon, Young Dae; Choi, In Young

    2013-06-01

    Intensified competitiveness in the healthcare industry has increased the number of healthcare centers and propelled the introduction of customer relationship management (CRM) systems to meet diverse customer demands. This study aimed to develop the information system success model of the CRM system by investigating previously proposed indicators within the model. The evaluation areas of the CRM system include three areas: the system characteristics area (system quality, information quality, and service quality), the user area (perceived usefulness and user satisfaction), and the performance area (personal performance and organizational performance). Detailed evaluation criteria for the three areas were developed, and their validity was verified by a survey administered to CRM system users in 13 nationwide health promotion centers. The survey data were analyzed by the structural equation modeling method, and the results confirmed that the model is feasible. Information quality and service quality showed a statistically significant relationship with perceived usefulness and user satisfaction. Consequently, perceived usefulness and user satisfaction had a significant influence on individual performance as well as an indirect influence on organizational performance. This study extends the research on information system success from general information systems to CRM systems in health promotion centers by applying a previous information success model. This lays a foundation for evaluating health promotion center systems and provides a useful guide for successful implementation of hospital CRM systems.

  15. Information System Success Model for Customer Relationship Management System in Health Promotion Centers

    Science.gov (United States)

    Choi, Wona; Rho, Mi Jung; Park, Jiyun; Kim, Kwang-Jum; Kwon, Young Dae

    2013-01-01

    Objectives Intensified competitiveness in the healthcare industry has increased the number of healthcare centers and propelled the introduction of customer relationship management (CRM) systems to meet diverse customer demands. This study aimed to develop the information system success model of the CRM system by investigating previously proposed indicators within the model. Methods The evaluation areas of the CRM system include three areas: the system characteristics area (system quality, information quality, and service quality), the user area (perceived usefulness and user satisfaction), and the performance area (personal performance and organizational performance). Detailed evaluation criteria for the three areas were developed, and their validity was verified by a survey administered to CRM system users in 13 nationwide health promotion centers. The survey data were analyzed by the structural equation modeling method, and the results confirmed that the model is feasible. Results Information quality and service quality showed a statistically significant relationship with perceived usefulness and user satisfaction. Consequently, perceived usefulness and user satisfaction had a significant influence on individual performance as well as an indirect influence on organizational performance. Conclusions This study extends the research on information system success from general information systems to CRM systems in health promotion centers by applying a previous information success model. This lays a foundation for evaluating health promotion center systems and provides a useful guide for successful implementation of hospital CRM systems. PMID:23882416

  16. Flexible Design for α-Duplex Communications in Multi-Tier Cellular Networks

    KAUST Repository

    Alammouri, Ahmad; Elsawy, Hesham; Alouini, Mohamed-Slim

    2016-01-01

    the foreseen FD gains. This paper presents flexible and tractable modeling framework for multi-tier cellular networks with FD BSs and FD/HD UEs. The presented model is based on stochastic geometry and accounts for the intrinsic vulnerability of uplink

  17. Prototype models for the MOIRA computerised system

    Energy Technology Data Exchange (ETDEWEB)

    Monte, Luigi [ENEA, Centro Ricerche Casaccia, Rome (Italy). Dipt. Ambiente; Hakanson, Lars [Uppsala, Univ. (Sweden). Institute of Earth Sciences; Brittain, John [Oslo, Univ. (Norway). Zoological Museum

    1997-06-01

    The main aim of the present report is to describe selected models and the principles of Decision Analysis theory that will be applied to develop the model-based computerised system 'MOIRA'. A dose model and a model for predicting radiocaesium migration in lakes, and the effects of countermeasures to reduce the contamination levels in the components of a lacustrine system, are described in detail. The principles for developing prototype models for predicting the migration of ⁹⁰Sr in lake abiotic and biotic components are discussed. The environmental models described in the report are based on the use of 'collective parameters' which, due to mutual compensation effects of different phenomena occurring in complex systems, show low variability when the environmental conditions change. Use of such 'collective parameters' not only increases the predictive power of the models but also their practical applicability. Among the main results described in the report, the development of an objective hierarchy table for evaluating the effectiveness of a countermeasure when the economic, social and ecological impacts are accounted for deserves special attention.

  18. Development of an EVA systems cost model. Volume 3: EVA systems cost model

    Science.gov (United States)

    1975-01-01

    The EVA systems cost model presented is based on proposed EVA equipment for the space shuttle program. General information on EVA crewman requirements in a weightless environment and an EVA capabilities overview are provided.

  19. Imaging system models for small-bore DOI-PET scanners

    International Nuclear Information System (INIS)

    Takahashi, Hisashi; Kobayashi, Tetsuya; Yamaya, Taiga; Murayama, Hideo; Kitamura, Keishi; Hasegawa, Tomoyuki; Suga, Mikio

    2006-01-01

    Depth-of-interaction (DOI) information, which improves resolution uniformity in the field of view (FOV), is expected to lead to high-sensitivity PET scanners with small-bore detector rings. We are developing small-bore PET scanners with DOI detectors arranged in hexagonal or overlapped tetragonal patterns for small animal imaging or mammography. It is necessary to optimize the imaging system model because these scanners exhibit irregular detector sampling. In this work, we compared two imaging system models: (a) a parallel sub-LOR model in which the detector response functions (DRFs) are assumed to be uniform along the line of responses (LORs) and (b) a sub-crystal model in which each crystal is divided into a set of smaller volumes. These two models were applied to the overlapped tetragonal scanner (FOV 38.1 mm in diameter) and the hexagonal scanner (FOV 85.2 mm in diameter) simulated by GATE. We showed that the resolution non-uniformity of system model (b) was improved by 40% compared with that of system model (a) in the overlapped tetragonal scanner and that the resolution non-uniformity of system model (a) was improved by 18% compared with that of system model (b) in the hexagonal scanner. These results indicate that system model (b) should be applied to the overlapped tetragonal scanner and system model (a) should be applied to the hexagonal scanner. (author)

  20. Test-Driven, Model-Based Systems Engineering

    DEFF Research Database (Denmark)

    Munck, Allan

    Hearing systems have evolved over many years from simple mechanical devices (horns) to electronic units consisting of microphones, amplifiers, analog filters, loudspeakers, batteries, etc. Digital signal processors replaced analog filters to provide better performance and new features. Central... This thesis concerns methods for identifying, selecting and implementing tools for various aspects of model-based systems engineering. A comprehensive method was proposed that includes several novel steps, such as techniques for analyzing the gap between requirements and tool capabilities. The method was verified with good results in two case studies for the selection of a traceability tool (single-tool scenario) and a set of modeling tools (multi-tool scenarios). Models must be subjected to testing to allow engineers to predict functionality and performance of systems. Test-first strategies are known...